In recent years, Artificial Intelligence (AI) has transformed recruitment, offering quicker and more efficient processes. By automating tasks such as CV screening and video assessments, AI-driven platforms can handle an immense volume of applications.
However, beneath the surface lies a growing concern that these technologies may unintentionally hinder diversity, equity, and inclusion (DEI) efforts. Could your algorithm be undoing the progress you’ve worked so hard to achieve?
AI systems rely heavily on data to make decisions, and this is where the issue begins. If the data used to train these algorithms reflects historical biases - such as favouring specific educational backgrounds, gender, or ethnic groups - AI can inadvertently replicate and amplify these biases. Research from the National Bureau of Economic Research (NBER) shows that AI models trained on biased data can perpetuate existing inequalities, filtering out diverse candidates without the recruiter even knowing.
A widely discussed case is Amazon’s AI recruitment tool, designed to automate CV screening. The tool, however, was found to penalise CVs containing the word “women’s”, as in “women’s chess club captain” or “women’s college”. The issue arose because the AI had been trained on a dataset of predominantly male CVs, mirroring Amazon’s historical hires. Ultimately, Amazon abandoned the tool, acknowledging that it was reinforcing biases.
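To make the mechanism concrete, here is a deliberately simplified sketch (in Python, using entirely synthetic data; it does not represent Amazon’s actual system) of how a naive screening model trained on historically skewed outcomes ends up penalising a term that merely correlates with gender:

```python
# Toy illustration (not Amazon's system): a naive screening model trained on
# historically biased outcomes learns to penalise a keyword that merely
# correlates with gender. All data below is synthetic.

from collections import Counter

# Historical outcomes: (CV keywords, hired?). Past hires skew male, so the
# token "womens" appears mostly on rejected CVs.
history = (
    [(["python", "finance"], True)] * 80
    + [(["python", "finance", "womens"], True)] * 5
    + [(["python", "finance", "womens"], False)] * 15
)

hired, rejected = Counter(), Counter()
for keywords, was_hired in history:
    for kw in keywords:
        (hired if was_hired else rejected)[kw] += 1

def score(keyword: str) -> float:
    """Fraction of past CVs containing `keyword` that led to a hire."""
    total = hired[keyword] + rejected[keyword]
    return hired[keyword] / total

print(score("finance"))  # 0.85 -- a neutral skill term
print(score("womens"))   # 0.25 -- penalised purely via historical correlation
```

The model never sees gender directly; it simply learns that a gender-correlated keyword predicted rejection in the past, which is exactly the failure mode the Amazon case exposed.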
ProPublica’s investigation into AI-based criminal justice tools like COMPAS highlighted similar concerns. COMPAS was found to disproportionately flag Black defendants as high-risk compared with white defendants with similar records and outcomes. While this example is outside recruitment, it demonstrates how biased data can skew AI’s decision-making, with serious implications for equality.
Recent studies have illuminated the persistence of AI bias in recruitment. A report from MIT Sloan Management Review found that AI models tend to favour candidates who resemble past successful hires, even in organisations striving for diversity. These systems often replicate hiring decisions based on historical preferences, which can hinder efforts to bring diverse talent into the fold.
In a separate analysis published in Harvard Business Review, AI tools used in video interviews were found to introduce biases against candidates based on names, postal codes, and even speech patterns. AI systems that analyse candidates’ voice inflection or facial expressions during interviews could disadvantage neurodiverse individuals or those with disabilities.
We discussed why some neurodivergent candidates might struggle with eye contact, for example, in one of our previous Outspoken articles - ‘Beyond the gaze: Why your best candidate might not look you in the eye’.
While AI can certainly expedite recruitment processes, it’s clear that without proper oversight, it can inadvertently perpetuate systemic biases.
At Stanton House, we integrate technology and AI into our recruitment processes to improve efficiency, but with great care. Every tool we implement is rigorously assessed to ensure it enhances rather than compromises the experience of both clients and candidates. Our guiding principle is that AI should never come at the expense of personal engagement and service quality.
We believe that AI will never fully replace the need for human recruiters. While AI can process data rapidly, it lacks the nuance to understand individual circumstances or to evaluate the diverse, multifaceted needs of today’s workforce. Our recruiters ensure that AI-driven insights are balanced with human judgement, particularly when building diverse shortlists. Human oversight remains essential in maintaining fairness, ensuring cultural fit, and safeguarding DEI goals.
Several organisations have taken proactive steps to mitigate AI-related biases in recruitment. For example, Unilever uses AI to screen entry-level candidates but has developed mechanisms to reduce bias. By anonymising applications (excluding names, gender, and educational background) and auditing AI decisions regularly, Unilever has boosted diversity within its candidate pool.
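As an illustration of the anonymisation step, here is a minimal sketch of the idea: strip potentially bias-carrying fields from an application record before a screening model ever sees it. The field names are hypothetical, and this is not Unilever’s actual pipeline:

```python
# Minimal sketch: removing potentially bias-carrying fields from an
# application record before it reaches an AI screening model.
# Field names are illustrative, not those of any real system.

SENSITIVE_FIELDS = {"name", "gender", "date_of_birth",
                    "education_institution", "postcode"}

def anonymise_application(application: dict) -> dict:
    """Return a copy of the application with sensitive fields removed."""
    return {k: v for k, v in application.items() if k not in SENSITIVE_FIELDS}

application = {
    "name": "A. Candidate",
    "gender": "F",
    "education_institution": "Example University",
    "postcode": "AB1 2CD",
    "skills": ["stakeholder management", "financial reporting"],
    "years_experience": 6,
}

print(anonymise_application(application))
# {'skills': ['stakeholder management', 'financial reporting'], 'years_experience': 6}
```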
Similarly, LinkedIn has developed fairness algorithms to monitor and correct any disparities introduced by its AI-driven tools. Such continuous monitoring is essential for ensuring that AI systems do not unintentionally disadvantage certain candidate groups.
For companies eager to harness AI while upholding their diversity commitments, there are several critical steps to follow:

- Audit training data before deployment, understanding which historical decisions a model reflects and whether they encode past bias.
- Anonymise candidate data where possible, removing details such as names, gender, and educational background from initial screening.
- Monitor outcomes continuously, comparing selection rates across candidate groups and investigating any disparities (a minimal sketch of such an audit follows this list).
- Keep humans in the loop, ensuring recruiters review AI-driven shortlists and retain final responsibility for hiring decisions.
- Hold vendors to account, asking how their tools are tested for bias and how often they are re-audited.
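To show what the monitoring step might look like in practice, here is a minimal, hypothetical sketch of a shortlisting audit based on the “four-fifths rule” heuristic from US EEOC guidance, which flags any group whose selection rate falls below 80% of the highest group’s rate. The data and group labels are invented:

```python
# Minimal sketch of a routine shortlisting audit: compare selection rates
# across candidate groups using the "four-fifths rule" heuristic from
# US EEOC guidelines. Data and group labels are hypothetical.

from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, shortlisted: bool) pairs."""
    totals, selected = Counter(), Counter()
    for group, shortlisted in outcomes:
        totals[group] += 1
        selected[group] += shortlisted
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` (80%)
    of the highest group's rate."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

outcomes = (
    [("group_a", True)] * 40 + [("group_a", False)] * 60
    + [("group_b", True)] * 20 + [("group_b", False)] * 80
)

rates = selection_rates(outcomes)
print(rates)                  # {'group_a': 0.4, 'group_b': 0.2}
print(adverse_impact(rates))  # {'group_b': 0.5} -- below the 0.8 threshold
```

A flagged ratio is a prompt for human investigation, not an automatic verdict: the point is to surface disparities early enough to question the tool before they compound.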
AI holds enormous potential to revolutionise hiring, making it more efficient and data-driven. However, without vigilant oversight, AI can reinforce existing biases, particularly those related to diversity and inclusion. As companies adopt more AI tools, it is crucial to remain proactive in ensuring these technologies align with DEI goals.
At Stanton House, we are committed to balancing the power of AI with human insight to ensure a fair, personalised, and inclusive recruitment process. By taking a balanced, thoughtful approach, organisations can leverage AI while upholding their commitment to diversity, equity, and inclusion.