Understanding bias in artificial intelligence

Artificial Intelligence (AI) aims to learn how to do things that people are good at. Humans are great at stringing together complex concepts and harnessing them towards a goal. At the same time, we are moulded by our culture, upbringing, and history, and that shaping introduces bias in every one of us.

AI systems are only as good as the data we use to mould them, and that data reflects human choices: people decide what is captured, how it is captured, and to a large extent, how it is interpreted. With the rise of AI in human-centric areas such as hiring, finance, and criminal justice, society has started to wrestle with just how much human bias can make its way into these systems.

How AI bias happens
Flawed data can encode implicit racial, gender, or ideological biases, and many AI systems will continue to be trained on such data, making this an ongoing problem. AI bias is often blamed on a poor understanding of the domain, the data, and how the data is used. In reality, bias can creep in long before the data is collected, and at many other stages of the machine learning process, including:

1. The data collection stage
There are two main ways bias shows up in training data: either the data is not an accurate representation of reality, or it reflects existing prejudices. As an example of the first, if a deep-learning algorithm is trained on far more photos of light-skinned faces than darker-skinned ones, the resulting facial recognition system will inevitably be worse at recognising darker-skinned faces, since it has fewer examples to learn from. The second case is illustrated by Amazon’s internal recruiting tool, which penalised female candidates: because the system was trained on historical hiring decisions that favoured men over women, it learned to do the same.
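The first failure mode can sometimes be caught before training begins, simply by auditing how groups are represented in the data. As a minimal sketch (the function name, threshold, and face dataset here are all hypothetical, chosen only to illustrate the idea):

```python
from collections import Counter

def representation_audit(labels, threshold=0.5):
    """Flag any group whose share of the dataset falls below
    `threshold` times an even split across all groups."""
    counts = Counter(labels)
    total = sum(counts.values())
    fair_share = 1 / len(counts)  # share each group would have if balanced
    return {
        group: {
            "share": count / total,
            "underrepresented": count / total < threshold * fair_share,
        }
        for group, count in counts.items()
    }

# Hypothetical face dataset: 900 light-skinned photos, 100 darker-skinned.
photos = ["light"] * 900 + ["dark"] * 100
report = representation_audit(photos)
```

With a 50% threshold and two groups, any group holding under 25% of the data is flagged; here the darker-skinned group sits at 10%, so the audit flags it before a model ever sees the photos.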

2. The data preparation stage
The data preparation stage involves selecting which data, or which attributes of the data, should be engineered into the features the algorithm considers when learning. Examples of attributes include a customer’s age, location, income, gender, education level, or years of experience. Choosing which attributes to use or ignore can significantly influence a model’s predictive accuracy, but it can also introduce bias: a model may discriminate against individuals from previously disadvantaged backgrounds through a combination of seemingly neutral attributes, even when no one intended it to.
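One reason simply dropping a sensitive attribute is not enough is that other, seemingly neutral attributes can act as proxies for it. A minimal sketch of how you might measure that (the records, attribute names, and numbers are hypothetical):

```python
from collections import Counter, defaultdict

def proxy_strength(records, proxy_key, protected_key):
    """How accurately does `proxy_key` alone predict `protected_key`,
    using a simple majority rule within each proxy value?
    A value near 1.0 means the 'neutral' attribute is a near-perfect
    proxy for the protected one, so dropping the protected attribute
    does not remove the bias risk."""
    by_proxy = defaultdict(Counter)
    for record in records:
        by_proxy[record[proxy_key]][record[protected_key]] += 1
    # Count how many records the majority group in each proxy bucket covers.
    correct = sum(counter.most_common(1)[0][1] for counter in by_proxy.values())
    return correct / len(records)

# Hypothetical applicant pool where postcode correlates strongly with group.
records = (
      [{"postcode": "A1", "group": "x"}] * 48
    + [{"postcode": "A1", "group": "y"}] * 2
    + [{"postcode": "B2", "group": "y"}] * 45
    + [{"postcode": "B2", "group": "x"}] * 5
)
strength = proxy_strength(records, "postcode", "group")  # 0.93
```

In this toy data, postcode alone recovers group membership 93% of the time, so a model trained on postcode can effectively still “see” the protected attribute.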

Why AI bias is not an easy fix
Because AI is not a brand-new field, it is often assumed that most of these problems should have been ironed out by now. But given the complexity of AI bias and the ever-changing technological and social landscape, there is no easy fix. The truth is, you don’t know what you don’t know, and that is probably the biggest difficulty organisations face when trying to identify and remove bias from their systems.

The introduction of bias isn’t always obvious during a model’s construction, because you may not realise the downstream impact of your data and design choices until much later. Once biases are discovered, it is difficult to identify where they came from and how to remove them. The way engineers and scientists are taught to frame problems often isn’t well suited to reasoning about social problems, and it is not even clear what the absence of bias should look like.

Most foundational laws and regulations were established in a different era, before the digital world existed. Governing bodies have adapted some laws to combat AI bias and protect information privacy, but legislation struggles to keep pace with technologies, and uses of them, that are constantly emerging and evolving.

Where to from here?
Fortunately, a strong community of AI researchers is working hard on the problem, taking a variety of approaches, such as algorithms that help detect and mitigate the biases a model learns regardless of data quality. As bias in AI has come to the fore in recent years, more influential figures, organisations, and political bodies have begun taking a serious look at how to deal with it.
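Many of these detection approaches start from a fairness metric computed on a model’s outputs. One of the simplest is the demographic parity gap: the difference in positive-outcome rates between groups. A minimal sketch (the predictions and group labels below are hypothetical):

```python
def demographic_parity_gap(predictions, groups):
    """Difference in positive-prediction rate between groups.
    0.0 means every group receives positive outcomes at the same rate;
    larger gaps suggest the model treats groups differently."""
    rates = {}
    for group in set(groups):
        indices = [i for i, g in enumerate(groups) if g == group]
        rates[group] = sum(predictions[i] for i in indices) / len(indices)
    return max(rates.values()) - min(rates.values())

# Hypothetical screening model: 1 = candidate shortlisted, 0 = rejected.
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"]
gap = demographic_parity_gap(preds, groups)  # 0.8 vs 0.2, a gap of 0.6
```

A large gap on its own doesn’t prove the model is unfair, but it is a cheap signal that a system deserves closer scrutiny before deployment.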

AI has many potential benefits for businesses, the economy, and society’s most pressing challenges, including reducing the influence of human bias and emotion on consequential decisions. But that will only be possible if people trust these systems to produce reliable, unbiased results.
