
Ethics, Fairness, and Bias in AI


As AI-enhanced applications seep into our daily lives and reach ever larger swaths of the world's population, we must clearly understand the vulnerabilities that trained machine learning models can exhibit based on the data used during development. These issues can negatively impact select groups of people, so addressing the ethical decisions made by AI, possibly unknowingly, is important to the long-term fairness and success of this new technology.


By Aditya Aggarwal, Head of Data Science (Practice Lead) at Abzooba.

Today, AI is being adopted in everyday life, and it is now more important than ever to ensure that decisions taken using AI do not reflect discriminatory behaviour towards any set of the population. Fairness must be taken into consideration when consuming the output of AI. Discrimination towards a sub-population can be created unintentionally and unknowingly, so a check on bias is imperative during the deployment of any AI solution.

Real examples of the impact on sub-populations discriminated against due to bias in an AI model

Example 1: A machine learned human biases, resulting in a model with racial disparity. [1]

In the United States, black people are sent to lock-up in disproportionate numbers. For centuries, key decisions in the legal process have been governed by human instincts and biases. When machines learned from this data to produce risk assessment scores for future crime, they propagated the human bias into the models, and as a result, the model was found to have racial disparities. The algorithm makes mistakes with black and white defendants at roughly the same rate but in different ways, as shown in Fig 1 and Fig 2.

  1. It assigns high-risk scores to black defendants more often, which results in a lot of false positives.
  2. It assigns lower-risk scores to white defendants more often, which results in a lot of false negatives.

These risk assessment scores, along with other factors, are used by judges to suggest a treatment plan for defendants, decide the term of a sentence, and so on. So, to be fair and ethical to everyone, bias should be analysed and mitigated for factors like race, skin colour, and nationality.

Example 2: A machine learned from historical biases, making the company appear racist when defining same-day delivery areas. [2]

In the United States, Amazon decided to expand its same-day delivery areas based on various factors, without intending to be racist. However, historical biases against African-Americans were created by the National Housing Act of 1934, which excluded African-Americans from getting mortgages and, in turn, cut them off from affluent neighbourhoods. This created pockets of discrimination, and with time, the discrimination continues to reproduce itself.

The factors Amazon may have used to get the best yield from the same-day delivery program do not use race directly as a deciding factor, but systemic racism must have affected the model outcome when choosing such areas. For example, as shown in Fig 3, the northern half of Atlanta, home to 96% of the city's white residents, has same-day delivery, while southern Atlanta, where 90% of the residents are black, is excluded from it.

Takeaway: It is important to check a model for bias on sensitive factors before deploying it in production.

What are the sources of bias?

We saw in the above examples that machine learning models developed a bias towards certain sections of the population due to human or historical bias present in the training datasets. Likewise, there are multiple ways by which such biases can seep into a model.

Models learn from biased data and make decisions, and these decisions affect the future data that gets used for subsequent model training. In this loop, bias keeps propagating and gets enlarged over time.

Because biases propagate through this machine learning feedback loop, bias against a specific population enlarges with time. As a result, AI can deepen social inequality. [4,5]

In Fig 4, biases are grouped on each arrow to show where they are most prevalent. Users can study the source of each bias and try to address it most effectively at that source.

Fig 4: Due to the feedback loop, biases keep propagating and get enlarged with time.

There are many ways by which bias can enter machine learning models. To consider a few examples:

Bias generated due to the algorithm

Popularity Bias: Most recommender systems suffer from this bias [5]: the most popular items are recommended frequently, while less popular items are recommended only rarely. However, for businesses, recommending rarely bought items is very important, as such items are otherwise unlikely to be discovered. A rough way to measure this effect is sketched below.
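The following sketch, using a made-up dictionary of per-user recommendation lists (all user and item names are hypothetical), measures what share of all recommendation slots is taken by the top-k most popular items; a share close to 1 indicates strong popularity bias.

```python
from collections import Counter

# Hypothetical recommendation lists produced by a recommender system
# (one list of item IDs per user; all names are made up).
recommendations = {
    "user_1": ["item_a", "item_b", "item_c"],
    "user_2": ["item_a", "item_b", "item_d"],
    "user_3": ["item_a", "item_c", "item_e"],
}

# Count how often each item appears across all recommendation lists.
rec_counts = Counter(item for recs in recommendations.values() for item in recs)

def top_k_share(counts, k=2):
    """Share of all recommendation slots occupied by the k most popular items."""
    total = sum(counts.values())
    top_k = sum(count for _, count in counts.most_common(k))
    return top_k / total

print(f"Top-2 items take {top_k_share(rec_counts):.0%} of all recommendations")
```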

Biases due to user interactions

Behavioural Bias: A bank trained a model to predict an applicant's ability to repay a loan using the applicant's financial history, employment history, and demographic information, with historical bank loan decisions as training data. In this approach, bias arose because loan managers had unfairly denied loans to minority groups in the past, so the AI considered these groups' general repayment ability to be lower.
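One simple sanity check on such historical labels, sketched below with made-up data and illustrative column names, is to compare approval rates across groups and compute a disparate impact ratio; a ratio well below 1 suggests the labels themselves encode past discriminatory decisions.

```python
import pandas as pd

# Hypothetical historical loan decisions (column names and values are made up).
loans = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   0],
})

# Approval rate per group.
rates = loans.groupby("group")["approved"].mean()
print(rates)

# Disparate impact ratio: lowest approval rate over the highest one.
# Values well below 1 indicate the historical decisions were skewed.
print("Disparate impact ratio:", rates.min() / rates.max())
```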

Bias generated due to data

Representation Bias: Most of the images in the ImageNet dataset come from Western countries. So when a model is trained on this data, it predicts better on images from Western countries than on images from African or Asian countries. When such a model is used on populations of African or Asian origin, it suffers from representation bias and gives poor accuracy.

Sample Bias: If a model is trained on data where the sample size for a specific population is not large enough, the model will not be able to predict outcomes accurately for that population.
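Both of these data biases can often be spotted before any training by looking at group shares in the training set. The sketch below uses a made-up dataset and an arbitrary 10% threshold (an assumption, not a standard) to flag under-represented groups.

```python
import pandas as pd

# Hypothetical training data with a region attribute (values are made up).
train = pd.DataFrame({
    "region": ["western"] * 900 + ["african"] * 60 + ["asian"] * 40,
})

# Share of each region in the training set; a heavily skewed distribution
# is a warning sign of representation/sample bias before training starts.
shares = train["region"].value_counts(normalize=True)
print(shares)

# Flag groups that fall below a chosen minimum share.
MIN_SHARE = 0.10  # assumed threshold for illustration only
print("Under-represented groups:")
print(shares[shares < MIN_SHARE])
```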

Intuition for detecting machine learning model bias

We say a model is biased if the prediction accuracy for a specific group of the population is either very low or very high. Bias in a model is often hidden, and it can be difficult to determine whether the model is biased, mainly due to intersectional biases. Detecting bias for a single feature is easy. For example, with race, you can compare accuracy between each race (e.g., the model has lower accuracy for black people than for white people). But if we look at all the combinations of race, sex, and income, we may have to compare hundreds of groups.

As shown in Fig 5, the low accuracy of the intersectional group, the blue triangles, is hidden by the aggregate accuracy metrics of shape and colour, which appear more equal. Likewise, when we look at subgroups of the population, we will likely find similar hidden model biases.

Fig 5: Intersectional bias.
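To make this concrete, the sketch below builds a small made-up evaluation table in which accuracy looks identical when grouped by race alone or by sex alone, yet differs sharply for the race-by-sex subgroups; all numbers are fabricated purely to illustrate the effect.

```python
import pandas as pd

# Hypothetical per-example evaluation results: for each (race, sex) cell,
# n_correct out of n_total predictions were correct (numbers are made up).
rows = []
for race, sex, n_correct, n_total in [
    ("black", "F", 1, 4),   # the hidden subgroup: only 1 of 4 correct
    ("black", "M", 3, 4),
    ("white", "F", 3, 4),
    ("white", "M", 1, 4),
]:
    rows += [{"race": race, "sex": sex, "correct": int(i < n_correct)}
             for i in range(n_total)]
df = pd.DataFrame(rows)

# Accuracy per single attribute looks identical across groups...
print(df.groupby("race")["correct"].mean())   # black 0.50, white 0.50
print(df.groupby("sex")["correct"].mean())    # F 0.50, M 0.50

# ...while the intersectional subgroups differ sharply.
print(df.groupby(["race", "sex"])["correct"].mean())
```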

To check if a model is fair towards a set of protected features (i.e., features that represent protected groups) such as age, gender, race, religion, national origin, etc., we can do the following:

  1. If your protected variable is continuous, like age, perform a correlation analysis between that variable and the probability score. If there is no linear trend between the two, the correlation will come out close to 0 (as shown in Fig 6), and the model is not biased w.r.t. this variable.

Fig 6: Scatter plot between age and predicted scores.

  2. If your protected variable is discrete, like gender, then the average scores plotted by score band for each gender should converge if the model has no bias, as shown in Fig 7. A code sketch of both checks follows Fig 7.

Fig 7: Average scores by score bands.
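A minimal sketch of both checks, using randomly generated scores and attributes (all column names and data are assumptions made for illustration): a Pearson correlation for the continuous case and an average score per score band and gender for the discrete case.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical model outputs with protected attributes attached
# (column names and values are made up for illustration).
scores = pd.DataFrame({
    "age":    rng.integers(18, 80, size=1000),
    "gender": rng.choice(["F", "M"], size=1000),
    "score":  rng.uniform(0, 1, size=1000),   # model probability score
})

# Check 1 (continuous protected variable): correlation between age and the
# predicted score; a value close to 0 suggests no linear trend (cf. Fig 6).
print("Pearson correlation (age vs. score):",
      scores["age"].corr(scores["score"]))

# Check 2 (discrete protected variable): average score per score band and
# gender; the values per band should be close across genders (cf. Fig 7).
scores["band"] = pd.cut(scores["score"], bins=np.linspace(0, 1, 11),
                        include_lowest=True)
print(scores.groupby(["band", "gender"], observed=True)["score"]
      .mean().unstack())
```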

References

  1. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  2. https://www.usatoday.com/story/tech/news/2016/04/22/amazon-same-day-delivery-less-likely-black-areas-report-says/83345684/
  3. https://www.bloomberg.com/graphics/2016-amazon-same-day/
  4. https://theconversation.com/artificial-intelligence-can-deepen-social-inequality-here-are-5-ways-to-help-prevent-this-152226
  5. Managing Popularity Bias in Recommender Systems with Personalized Re-ranking; Himan Abdollahpouri, Robin Burke, Bamshad Mobasher; https://arxiv.org/abs/1901.07555


Source: https://www.kdnuggets.com/2021/06/ethics-fairness-ai.html
