
Solving the Problem of Bias in Artificial Intelligence


Back in 2018, the American Civil Liberties Union found that Amazon's Rekognition, a face surveillance technology used by police departments across the US, shows AI bias. In the ACLU's test, the software incorrectly matched 28 members of Congress with mugshots of people who had been arrested for a crime, and 40% of the false matches were people of color.

Following mass protests, in which Amazon's employees refused to contribute to AI tools that reproduce facial recognition bias, the tech giant announced a one-year moratorium on the use of the platform by law enforcement agencies.

The incident stirred a new debate about bias in artificial intelligence algorithms and pushed companies to search for new solutions to the AI bias problem.

In this article, we'll dot the i's, zooming in on the concept, root causes, types, and ethical implications of AI bias, and we'll list practical debiasing techniques, shared by our AI consultants, that are worth including in your AI strategy.

But let’s start with the basics.

What is AI bias, and why does it occur?

A simple definition of AI bias could sound like this: an anomaly in the output of AI algorithms.

Bias in artificial intelligence can take many forms, from racial bias and gender prejudice to recruiting inequity and age discrimination. The underlying reason for AI bias is human misbeliefs, either conscious or unconscious, creeping into AI algorithms at different stages of their development. As a result, an AI solution adopts and scales the prejudiced assumptions of the human brain, both individual and societal.

One potential source of this issue is prejudiced hypotheses made when designing AI models, known as algorithmic bias. Psychologists claim there are about 180 cognitive biases, some of which may find their way into these hypotheses and influence how AI algorithms are designed.

An example of algorithmic AI bias could be assuming that a model would automatically be less biased when denied access to protected classes, say, race. In reality, removing the protected classes from the analysis doesn't erase racial bias from AI algorithms. The model can still produce prejudiced results by relying on related non-protected factors, for example, geographic data. This phenomenon is known as proxy discrimination.
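
To make proxy discrimination concrete, here is a minimal Python sketch on hypothetical synthetic data (all feature names and numbers are illustrative, not drawn from any real system). Race is excluded from the features, yet the model recovers it through a correlated ZIP-code proxy and reproduces the bias baked into the historical labels:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000

race = rng.integers(0, 2, n)                              # protected attribute (0/1)
zip_code = np.where(rng.random(n) < 0.9, race, 1 - race)  # 90%-correlated proxy
skill = rng.normal(0, 1, n)                               # legitimate, race-neutral feature

# Historical labels encode human bias: equally skilled applicants
# from group 1 were approved far less often.
approval_prob = 0.5 + 0.3 * np.tanh(skill) - 0.3 * race
label = (rng.random(n) < approval_prob).astype(int)

X = pd.DataFrame({"zip_code": zip_code, "skill": skill})  # race is NOT a feature
model = LogisticRegression().fit(X, label)

# Predicted approval rates still differ sharply by race.
preds = model.predict(X)
for group in (0, 1):
    print(f"race={group}: predicted approval rate = {preds[race == group].mean():.2f}")
```

Dropping zip_code as well wouldn't fully solve the problem either: any feature correlated with the protected class can leak it into the model.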

Another common cause of AI bias is the low quality of data on which AI models are trained. The training data may incorporate biased human decisions or echo societal and historical inequities.

For instance, if an employer uses an AI-based recruiting tool trained on historical employee data in a predominantly male industry, chances are AI would replicate gender bias.

The same applies to natural language processing algorithms. When learning from real-world data, like news reports or social media posts, AI is likely to show language bias and reinforce existing prejudices. This is what happened with Google Translate, which tends to be biased against women when translating from languages with gender-neutral pronouns. The AI engine powering the app is more likely to generate translations like "he invests" and "she takes care of the children" than vice versa.

AI bias can stem from the way training data is collected and processed as well. The mistakes data scientists may fall prey to range from excluding valuable entries to inconsistent labeling to under- and oversampling. Undersampling, for example, can cause skews in class distribution and make AI models ignore minority classes completely.

Oversampling, in turn, may lead to the over-representation of certain groups or factors in the training datasets. For instance, crimes committed in locations frequented by the police are more likely to be recorded in the training dataset simply because that is where the police patrol. Consequently, the algorithms trained on such data are likely to reflect this disproportion.
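
As a quick illustration of the undersampling problem, here is a minimal Python sketch (hypothetical labels, not from any real project) of how a data team might spot a skewed class distribution and correct it by upsampling the minority class, one common, if blunt, remedy:

```python
import pandas as pd
from sklearn.utils import resample

# Hypothetical training set with a heavily under-represented positive class.
df = pd.DataFrame({"label": [0] * 950 + [1] * 50})
print(df["label"].value_counts(normalize=True))   # 0: 0.95, 1: 0.05

majority = df[df["label"] == 0]
minority = df[df["label"] == 1]
minority_upsampled = resample(
    minority,
    replace=True,                # sample with replacement
    n_samples=len(majority),     # match the majority class size
    random_state=42,
)
balanced = pd.concat([majority, minority_upsampled])
print(balanced["label"].value_counts(normalize=True))  # now 0.5 / 0.5
```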

A no less important source of AI bias is the feedback of real-world users interacting with AI models. People may reinforce bias baked into already deployed AI models, often without realizing it. For example, a credit card company may use an AI algorithm that mildly reflects social bias to advertise its products, targeting less-educated people with offers featuring higher interest rates. These people may keep clicking on such ads without knowing that other social groups are shown better offers, thus scaling the existing bias.

What are the four common types of bias in artificial intelligence?

The most common classification of bias in artificial intelligence takes the source of prejudice as the base criterion, putting AI biases into three categories: algorithmic, data, and human. Still, AI researchers and practitioners urge everyone to look out for the latter, as human bias underlies and outweighs the other two. Here are the most common types of AI bias that creep into algorithms.

1. Reporting bias

This type of AI bias arises when the frequency of events in the training dataset doesn't accurately reflect reality. Take the example of a customer fraud detection tool that underperformed in a remote geographic region, marking all customers living in the area with falsely high fraud scores.

It turned out that the training dataset the tool relied on recorded every historical investigation in the region as a fraud case. The reason was that, because of the region's remoteness, fraud case investigators wanted to make sure every new claim was indeed fraudulent before traveling to the area. So, the frequency of fraudulent events in the training dataset was far higher than it should have been in reality.
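
A simple sanity check could have caught this skew early. The sketch below (hypothetical synthetic data and column names) flags regions whose recorded fraud rate departs wildly from the overall rate, a hint that the data collection process, rather than the customers, differs there:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 20_000
df = pd.DataFrame({
    "region": rng.choice(["north", "south", "east", "remote"], n,
                         p=[0.32, 0.32, 0.32, 0.04]),
    "is_fraud": rng.random(n) < 0.02,
})
# Simulate the reporting artifact: nearly every recorded case
# from the remote region was labeled fraudulent.
remote = df["region"] == "remote"
df.loc[remote, "is_fraud"] = rng.random(remote.sum()) < 0.9

overall = df["is_fraud"].mean()
by_region = df.groupby("region")["is_fraud"].mean()
print(by_region[by_region > 3 * overall])   # flags "remote" for review
```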

2. Selection bias

This type of AI bias occurs when training data is either unrepresentative or selected without proper randomization. Selection bias is well illustrated by the research conducted by Joy Buolamwini, Timnit Gebru, and Deborah Raji, who looked at three commercial image recognition products. The tools had to classify 1,270 images of parliament members from European and African countries. The study found that all three tools performed better on male faces than on female faces and showed more substantial bias against darker-skinned women, failing on over one in three women of color, all due to the lack of diversity in training data.
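
One partial safeguard is to make the sampling step explicit. The sketch below (hypothetical synthetic data) uses scikit-learn's stratified splitting so that every subgroup keeps its share in both training and test sets. Note the limitation: stratification preserves whatever mix the data already has, so it cannot add diversity the dataset never contained.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000
X = rng.normal(size=(n, 5))                                   # features
subgroup = rng.choice(["A", "B", "C"], n, p=[0.7, 0.2, 0.1])  # skewed demographics

# stratify= keeps the 70/20/10 subgroup mix in both splits,
# so evaluation isn't accidentally starved of minority examples.
X_train, X_test, g_train, g_test = train_test_split(
    X, subgroup, test_size=0.2, stratify=subgroup, random_state=42
)
print(np.unique(g_test, return_counts=True))
```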

3. Group attribution bias

Group attribution bias takes place when data teams extrapolate what is true of individuals to entire groups that those individuals are or are not part of. This type of AI bias can be found in admission and recruiting tools that may favor candidates who graduated from certain schools and show prejudice against those who didn't.

4. Implicit bias

This type of AI bias occurs when AI assumptions are made based on personal experience that doesn’t necessarily apply more generally. For instance, if data scientists have picked up on cultural cues about women being housekeepers, they might struggle to connect women to influential roles in business despite their conscious belief in gender equality — an example echoing the story of Google Images’ gender bias.

Why should businesses engage in solving the AI bias problem?

With the growing use of AI in sensitive areas, including finances, criminal justice, and healthcare, we should strive to develop algorithms that are fair to everyone. Businesses, too, have to work on reducing bias in AI systems.

The most apparent reason to hone a corporate debiasing strategy is that the mere idea of an AI algorithm being prejudiced can turn customers away from a product or service a company offers and jeopardize the company's reputation. On the flip side, an AI solution that performs accurately across the whole spectrum of genders, races, ages, and cultural backgrounds is much more likely to deliver superior value and appeal to a broader, more diverse pool of potential customers.

Another point that could motivate businesses to dedicate themselves to overcoming AI bias is the growing debate about AI regulations. Policymakers in the EU, for example, are starting to develop solutions that could help keep bias in artificial intelligence under control. Certifying AI vendors could be one such solution. And along with regulating the inclusiveness of AI algorithms, obtaining an AI certification could help tech enterprises stand out in saturated marketplaces.

How to reduce bias in machine learning algorithms

Solving the problem of bias in artificial intelligence requires collaboration between tech industry players, policymakers, and social scientists. And the tech industry has a long way to go before it can eliminate AI bias. Still, there are practical steps companies can take today to make sure the algorithms they develop foster equality and inclusion.

1. Examine the context. Some industries and use cases are more prone to AI bias and have a previous record of relying on biased systems. Being aware of where AI has struggled in the past can help companies improve fairness, building on the industry experience.

2. Design AI models with inclusion in mind. Before actually designing AI algorithms, it makes sense to engage with humanists and social scientists to ensure that the models you create don’t inherit bias present in human judgment. Also, set measurable goals for the AI models to perform equally well across planned use cases, for instance, for several different age groups.

3. Train your AI models on complete and representative data. This requires establishing procedures and guidelines for collecting, sampling, and preprocessing training data. Along with establishing transparent data processes, you may involve internal or external teams to spot discriminatory correlations and potential sources of AI bias in the training datasets (the first sketch after this list shows one such check).

4. Perform targeted testing. While testing your models, examine AI's performance across different subgroups to uncover problems that aggregate metrics can mask (the second sketch after this list breaks accuracy down by subgroup). Also, perform a set of stress tests to check how the model performs on complex cases. In addition, continuously retest your models as you gain more real-life data and feedback from users.

5. Hone human decisions. AI can help reveal inaccuracies present in human decision-making. So, if AI models trained on recent human decisions or behavior show bias, be ready to consider how human-driven processes might be improved in the future.

6. Improve AI explainability. Additionally, keep in mind the adjacent issue of AI explainability: understanding how AI generates predictions and which features of the data it uses to make decisions. Understanding whether the factors supporting a decision reflect AI bias can help in identifying and mitigating prejudice (the third sketch below shows one way to inspect this).
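
For step 3, one simple check for discriminatory correlations is to rank features by their association with a protected attribute; strong correlates are candidate proxies worth a manual review. A minimal sketch on hypothetical synthetic data (all column names are illustrative):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 5_000
gender = pd.Series(rng.integers(0, 2, n))            # protected attribute (0/1)
features = pd.DataFrame({
    "years_experience": rng.normal(10, 3, n),        # roughly gender-neutral
    "hobby_score": gender + rng.normal(0, 0.3, n),   # leaky proxy feature
    "typing_speed": rng.normal(60, 10, n),
})

# Rank features by absolute correlation with the protected attribute;
# anything near the top deserves a manual review before training.
correlations = features.corrwith(gender).abs().sort_values(ascending=False)
print(correlations)    # "hobby_score" surfaces as a likely proxy
```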
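
For step 4, the point of targeted testing is that an aggregate score can look healthy while one subgroup is served much worse. A minimal sketch on hypothetical synthetic data, simulating a model that is noisier for one age group:

```python
import numpy as np
import pandas as pd
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(7)
n = 5_000
df = pd.DataFrame({
    "age_group": rng.choice(["18-30", "31-50", "51+"], n),
    "label": rng.integers(0, 2, n),
})

# Simulated predictions: mostly correct overall, but much noisier
# for the "51+" group.
noise = np.where(df["age_group"] == "51+", 0.35, 0.05)
flip = rng.random(n) < noise
df["pred"] = np.where(flip, 1 - df["label"], df["label"])

print(f"aggregate accuracy: {accuracy_score(df['label'], df['pred']):.3f}")
for group, g in df.groupby("age_group"):
    print(f"{group}: accuracy = {accuracy_score(g['label'], g['pred']):.3f}")
```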
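
And for step 6, one widely available way to see which features drive a model's decisions is permutation importance from scikit-learn (a minimal sketch on synthetic data; for real systems, tools like SHAP give finer-grained attributions):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real training set.
X, y = make_classification(n_samples=2_000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the score drops.
# If a suspected proxy (say, ZIP code) ranks near the top, the model
# may be leaning on it to encode a protected attribute.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: importance = {result.importances_mean[i]:.3f}")
```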

The trends in tackling AI bias

Tech leaders across the globe are taking steps to reduce AI bias. And leveling out the demographics of those working on AI is one of their priorities. Intel, for example, is working to improve diversity in the company's technical positions. Recent data shows that women make up 24% of the company's AI developers, ten percentage points higher than the industry average.

Google has also rolled out AI debiasing initiatives, including responsible AI practices featuring advice on making AI algorithms fairer. At the same time, AI4ALL, a nonprofit dedicated to increasing diversity and inclusion in AI education, research, and development, breeds new talent for the AI development sector.

Other industry efforts focus on encouraging assessments and audits that test algorithms' fairness before AI systems go live, and on promoting legal frameworks and tools that can help tackle AI bias.

If you want to develop an AI solution that is bias-free, contact the ITRex team, and we’ll connect you with our AI experts.

Also published on https://itrexgroup.com/blog/ai-bias-definition-types-examples-debiasing-strategies/.

by ITRex (@itrex). Emerging Tech Development & Consulting: Artificial Intelligence. Advanced Analytics. Machine Learning. Big Data. Cloud. Bring us your challenge!
