
Case Study: Amazon's AI Recruitment Tool and Gender Bias

Background and Context

In 2014, Amazon developed an AI recruitment tool intended to automate the screening of job applications. The system was trained on resumes submitted to the company over a 10-year period; because the tech industry is male-dominated, the resulting dataset consisted predominantly of resumes from male applicants.

Ethical Dilemma

The AI system inadvertently learned to favor male candidates over female candidates, penalizing resumes that contained words like "women's" and downgrading graduates of all-women's colleges.

Outcome and Consequences

Upon discovering the bias, Amazon disbanded the team working on the AI recruitment tool and abandoned the project in 2017. The case attracted widespread media attention and became a prominent example of how bias can emerge in AI systems trained on historical data.

Analysis and Reflection

The bias originated from historical data that reflected an existing gender imbalance in the tech field. The case emphasizes the importance of ensuring that AI systems do not perpetuate or exacerbate existing biases, and it demonstrates the need for diverse training datasets and for human oversight in AI-driven decision making.
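The mechanism is easy to reproduce in miniature. The sketch below is a hypothetical illustration, not Amazon's actual model: it trains a simple scikit-learn text classifier on a handful of fabricated, historically skewed hiring records and inspects the learned word weights. All resumes and labels in it are invented for demonstration.

```python
# Toy illustration (NOT Amazon's system): a classifier trained on
# historically skewed hiring data learns to penalize words that
# correlate with rejected applicants. All data below is fabricated.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Synthetic "historical" resumes: past hires (label 1) skew male, so
# vocabulary associated with female applicants co-occurs with rejection.
resumes = [
    "software engineer java backend men's chess club",    # hired
    "software engineer python backend hiking club",       # hired
    "software engineer c++ systems men's rugby captain",  # hired
    "software engineer python women's chess club",        # rejected
    "software engineer java women's coding society",      # rejected
    "software engineer c++ systems hiking club",          # hired
]
labels = [1, 1, 1, 0, 0, 1]  # 1 = hired, 0 = rejected

vec = CountVectorizer()  # default tokenizer reduces "women's" to "women"
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

# Sort words by learned weight, most penalized first.
for word, coef in sorted(zip(vec.get_feature_names_out(),
                             model.coef_[0]), key=lambda p: p[1]):
    print(f"{word:>10s}  {coef:+.3f}")
```

Running this prints "women" near the top of the penalized list with a negative coefficient, even though gender is never an explicit feature: the model simply reproduces the correlation in its skewed training history, which is essentially what happened at a much larger scale in Amazon's tool.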

Discussion Questions

- What measures can be taken to prevent similar biases in AI systems, especially in sectors with historical gender imbalances?

- How could Amazon have anticipated the potential for gender bias in their AI recruitment tool?

Read the full Reuters report on Amazon's AI tool here and the Business Insider article here.
