There is one important fact about AI solutions that many of my clients don’t understand: your AI hiring tool is only as fair as the data it was trained on. The possibility that these tools introduce bias far greater than the human element they replace is one of the big reasons AI tools are not universally used today. In this article, I’ll introduce the topic and share some data to help define the issue. Next week, we’ll continue the discussion by focusing on how bias enters the system and review a case study to tie everything together.
Let’s say your company’s automated screening tool has just processed 10,000 applications and ranked the top 200 candidates to advance to human review. The tool is fast and consistent, and it has never once been asked to explain itself. The question every responsible recruiting leader must now ask is: “Who didn’t make it through, and why?”
The promise of AI-driven hiring was always equity: remove the human from the loop, remove the bias. The reality (now well documented in courtrooms) is far more complicated. Automated screening tools don’t start with a blank slate. They learn from historical data, and if that data reflects decades of discriminatory hiring patterns, the algorithm doesn’t correct those patterns. It replicates them, at scale, in milliseconds, invisibly.
This isn’t exactly a new problem. I’ve been addressing “bad data in, bad data out” issues for years when implementing new applicant tracking systems with my clients. One of the very first LinkedIn articles I ever wrote addressed the subject (3 Reasons Why Your ATS is Just Fine).
This is the defining talent ethics challenge of our era. And unlike many HR risks, it carries genuine legal, financial, and reputational exposure that grows more serious as AI tools become more popular.
The Data Behind the Problem Is Damning
The evidence is no longer theoretical. In 2024, University of Washington researchers conducted one of the most rigorous studies yet on AI resume screening, testing three state-of-the-art large language models (LLMs) across more than 550 real-world resumes and generating over three million comparisons. They varied only the names on each resume — swapping names associated with different racial and gender identities on otherwise identical applications.
“The systems never preferred Black male-associated names over white male-associated names — not once across millions of comparisons.” – University of Washington LLM Bias Study, 2024
White-associated names were favored 85% of the time. Female-associated names were preferred only 11% of the time. And in what the researchers described as a “distinct intersectional harm”, Black male applicants fared worst of all — never preferred over white male applicants across any of the tested configurations.
A separate large-scale study published through VoxDev in 2025 confirmed similarly complex intersectional patterns across five LLMs, finding that the biases “differ markedly from typical human biases documented in previous labor market studies” — meaning AI doesn’t just mirror existing prejudice. It can create new and unexpected patterns of discrimination that wouldn’t show up if employers only tested for race or gender in isolation.
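If you want to pressure-test your own pipeline the same way, the core of the UW methodology is simple to reproduce: hold the resume constant, vary only the name, and compare scores across groups. Below is a minimal sketch in Python. The score_resume function is a hypothetical placeholder for whatever scoring call your screening tool or LLM vendor actually exposes, and the name pools are illustrative only; a real audit would use validated, demographically associated name lists from the research literature.

```python
import random

# Illustrative name pools only; real audits use validated name lists
# from the research literature.
NAME_POOLS = {
    "white_male": ["Todd Becker", "Brad Walsh"],
    "black_male": ["Darnell Jackson", "Tyrone Robinson"],
    "white_female": ["Allison Meyer", "Claire Sullivan"],
    "black_female": ["Lakisha Washington", "Tamika Brooks"],
}

def score_resume(resume_text: str) -> float:
    """Hypothetical stand-in for your screening tool or LLM ranking call.
    Replace this with the vendor API you actually use."""
    return random.random()  # placeholder score

def name_swap_audit(resume_template: str, trials: int = 500) -> dict:
    """Score the identical resume under names from each demographic pool
    and report the mean score per group. A large gap between groups on
    an otherwise identical resume is a red flag worth investigating."""
    averages = {}
    for group, names in NAME_POOLS.items():
        total = 0.0
        for _ in range(trials):
            resume = resume_template.format(name=random.choice(names))
            total += score_resume(resume)
        averages[group] = total / trials
    return averages

if __name__ == "__main__":
    template = "{name}\nSenior Accountant, 8 years experience, CPA...\n"
    print(name_swap_audit(template))
```

The key property, as in the study, is that nothing varies except the name, so any systematic gap in average scores is attributable to the name alone.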
Amazon’s Scrapped Hiring Engine
Between 2014 and 2018, Amazon developed an AI recruitment tool trained on previously hired candidates’ credentials. Engineers eventually discovered it was systematically downgrading resumes from female applicants, not because gender was an explicit input, but because the tool had learned to use indirect markers like “captain of the women’s chess club” as proxies for female identity. The tool was quietly shut down. It is a textbook illustration of how historical hiring patterns contaminate machine learning pipelines.
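To see how a proxy like that gets learned, consider a deliberately tiny, fabricated sketch (not Amazon’s actual system, which was never published). A simple text classifier is trained on skewed historical hire/no-hire decisions in which gender is never a feature; it still learns a negative weight on the token “women”, which then functions as a gender proxy.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Fabricated training data: historical resumes and hire/no-hire labels.
# Gender is never a feature, but the outcomes are skewed against
# resumes containing female-associated activities.
resumes = [
    "captain of the chess club, python, sql",
    "varsity soccer, java, leadership",
    "captain of the women's chess club, python, sql",
    "women's rugby team, java, leadership",
]
hired = [1, 1, 0, 0]  # biased historical decisions

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect learned weights: the token "women" gets a negative weight,
# acting as a proxy for gender the model was never explicitly given.
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(sorted(weights.items(), key=lambda kv: kv[1])[:3])
```

This is why auditing a model’s inputs for protected attributes is not enough: the bias lives in the historical outcomes the model was trained to imitate.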
The Legal Landscape Is Moving Fast — and It’s Not Moving in Your Favor
Compliance teams should understand that “AI did it” is not a legal defense. The EEOC has made this explicit: if your vendor’s tool produces discriminatory outcomes, you bear liability as the employer. A 2025 federal ruling in the ongoing Mobley v. Workday class action went even further, holding that an AI tool can be considered an “agent” of the employer, a key legal term with profound implications for how organizations think about vendor accountability.
In 2023, the EEOC settled its first AI hiring discrimination case against iTutorGroup after the company’s algorithm automatically rejected female applicants aged 55 and older and male applicants aged 60 and older — screening out over 200 candidates for no reason other than age. The settlement: $365,000, mandatory policy reform, and ongoing EEOC monitoring. It was a preview of what was coming.
Regulatory Snapshot · 2025–2026: What the Law Now Requires in Key Jurisdictions
New York City: Annual independent bias audits for any automated employment decision tools (AEDTs) used in hiring or promotion. Results must be publicly posted. Candidates must be notified at least 10 business days before an AEDT is used, with the option to request an alternative process.
California (finalized 2025): Employers and vendors are jointly liable under the Fair Employment and Housing Act (FEHA) for discriminatory effects of AI tools. The rules mandate bias testing, four-year record-keeping, and meaningful human oversight with authority to override AI recommendations.
Illinois: The Artificial Intelligence Video Interview Act regulates employer use of AI in video interview screening, with potential additional liability under the Biometric Information Privacy Act.
Federal: The EEOC’s Strategic Enforcement Plan (2023–2027) identifies AI and automated decision-making as a priority enforcement area. Title VII, the ADEA, and the ADA all apply regardless of whether a human or algorithm makes the decision.
In 2024 alone, AI-powered hiring tools processed over 30 million applications and triggered hundreds of discrimination complaints. Regulators in multiple jurisdictions now treat algorithmic bias not as a technical edge case but as a mainstream civil rights issue. If that doesn’t concern recruiting leaders, it should.
While this problem is only likely to grow, all is not lost. Next week, I’ll share where bias can enter the process and what to look for when auditing your tools.
Have you seen other examples of bias entering the recruiting process? Share your thoughts in the comments below.
ES Talent Solutions helps organizations navigate the intersection of recruiting strategy and emerging technology. Want to discuss how agentic AI could transform your talent acquisition function? Contact Eddie Stewart at estewart@ESTalentSolutions.com. I’m always happy to talk with fellow leaders about building recruiting functions ready for the future.