As much as the idea of ‘machines’ can evoke a sense of removing human involvement, artificial intelligence (AI) and machine learning (ML) algorithms are not created in a vacuum. These algorithms are created by people.

By Julie Kae, vice-president: sustainability and DE&I, and executive director of Qlik.org

Furthermore, people play a large role in selecting and analysing the data sources that feed AI and ML algorithms. This means biases can creep into the data and then be reinforced and amplified by the algorithms trained on it.

It gets especially sticky when thinking about this in the context of HR and diversity, equity and inclusion (DE&I) initiatives, such as hiring and other internal employment decisions.

An early and well-known example is Amazon’s AI recruiting tool, which was scrapped after it unfairly favoured men over women in talent evaluation screenings, having been trained on flawed historical data.

When looking to leverage AI in these areas, one of an enterprise’s best weapons against bias is, you guessed it, humans. Human involvement throughout an enterprise’s AI journey can ensure that trusted, high-quality data is used from the start, so that what is fed to any AI algorithm is balanced enough to produce accurate insights and avoid bias.

Why biases occur in AI

AI and ML algorithms run on data, but it is people who write the algorithms, choose the data used by the models and decide how to apply the results that AI and ML produce.

The training data used for the technology can include everything from historical social inequities to undetectably biased human decisions.

Without enough checks and balances, it is surprisingly easy to let subtle, and sometimes unconscious, biases seep into AI and ML.

And once the algorithm has embedded that data in its model, it is much harder to isolate and remove those elements, throwing every previous output into question.

Initiatives such as the US Equal Employment Opportunity Commission’s (EEOC) Artificial Intelligence and Algorithmic Fairness Initiative aim to ensure that technology like AI, when used in employment decisions, complies with the federal anti-discrimination laws already in place.

This guidance puts the onus on employers to understand how their AI and ML algorithms work. It is the organisation’s responsibility to ensure these tools do not violate regulations and that employment decisions, such as hiring, monitoring performance, and determining pay or promotions, comply with federal anti-discrimination law.
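One concrete yardstick employers can apply here is the EEOC’s long-standing “four-fifths rule” of thumb for adverse impact: if one group’s selection rate is less than 80% of the highest group’s rate, the outcome warrants review. The sketch below is illustrative only, using hypothetical numbers rather than any real screening data:

```python
# Minimal sketch of a four-fifths (adverse impact) check on the output of
# an AI screening tool. The groups and counts are hypothetical examples.
def selection_rate(selected, applicants):
    """Fraction of applicants in a group who passed the screening."""
    return selected / applicants

# Hypothetical screening outcomes by group
rates = {
    "group_a": selection_rate(48, 100),
    "group_b": selection_rate(30, 100),
}

highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    flag = "review for adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.0%}, ratio={ratio:.2f} -> {flag}")
```

A check like this does not prove or disprove discrimination on its own, but it is the kind of routine human oversight the guidance expects of employers using automated screening.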

Human intervention is crucial in this process. According to Deloitte’s ‘State of AI in the Enterprise’ report, managing AI-related risk is one of the main reasons companies struggle to achieve meaningful AI outcomes.

Roles like AI trainer, AI auditor and AI ethicist have emerged to ensure that AI and ML behave in appropriate and ethical ways. Though it is great to have a person squarely focused on leading this charge, ensuring trustworthy AI and ML are developed will be a team effort now that the technology is a strategic business initiative.

The importance of data quality and lineage

There is a spectrum of AI in use throughout businesses today, and data quality is vitally important to every single AI project. Blind trust in data is a hurdle that organisations have been working to overcome for years.

As the field of data science has grown and leaders realised the enormous impact that data and analytics can have on business, data quality and lineage were put under a microscope – a strategic nut to crack in order to get to better decision-making.

Today, leaders must recognise that data quality and lineage play an even larger role in the success of AI.

An established data management process run by data stewards (aka humans) is a team effort that ensures your data is high quality, your decision-making is grounded in real and quantifiable data, and your AI provides results that can be trusted.
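To make the stewardship idea concrete, the checks a data steward might run before historical hiring data reaches a model can be sketched as below. The field names and records are illustrative assumptions, not any particular product’s schema:

```python
# Hypothetical example: basic quality checks a data steward might run
# before training data is handed to an AI/ML pipeline. The records and
# field names here are invented for illustration.
records = [
    {"candidate_id": 1, "group": "A", "years_experience": 5, "hired": True},
    {"candidate_id": 2, "group": "B", "years_experience": 3, "hired": False},
    {"candidate_id": 3, "group": "B", "years_experience": None, "hired": True},
]

def quality_report(rows):
    """Count rows with missing values and tally representation per group."""
    report = {"missing_values": 0, "group_counts": {}}
    for row in rows:
        if any(value is None for value in row.values()):
            report["missing_values"] += 1
        group = row["group"]
        report["group_counts"][group] = report["group_counts"].get(group, 0) + 1
    return report

print(quality_report(records))
```

Simple reports like this surface incomplete records and lopsided group representation early, before either can skew what the model learns.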

Humans’ forever role in the AI journey

AI does not exist to replace people, but rather to evolve human potential and how work is done. AI does not actually understand or evaluate information; its results are based on what humans provide it and how its algorithms are designed.

By understanding, at every level, that humans lead and AI assists, organisations can avoid the HR and DE&I pitfalls that can come with AI.