The Alarming Erosion of Human Autonomy in AI-Assisted Hiring

In a finding that challenges the notion of Artificial Intelligence as a guaranteed solution for bias in recruitment, new research from the University of Washington (UW) suggests that humans working alongside biased AI are highly likely to reinforce, rather than correct, algorithmic inequities.
The study, one of the first to deeply explore the interactive influence of AI on human decision-making in the hiring process, raises urgent concerns for corporations rapidly integrating large language models (LLMs) into their Human Resources functions.
The Unexpected Influence of the Machine
The widespread adoption of AI in resume screening is often promoted under the premise that algorithms can offer objective, data-driven candidate assessments, thereby neutralizing unconscious human bias.
However, the UW research indicates that when a hiring manager is presented with a recommendation from a flawed system, they tend to defer to the machine, effectively allowing the bias to spread.
Researchers conducted an experiment where participants were tasked with reviewing applications and selecting suitable candidates for a given job.
The résumés had already been screened by LLMs that exhibited varying degrees of bias concerning race-associated names.
When participants acted alone or with a “neutral” AI, they chose candidates of different racial backgrounds at relatively equal rates.
AI-Assisted Hiring: Mirroring Algorithmic Flaws
The core finding was stark: when participants collaborated with AI models exhibiting a moderate bias, their selection choices heavily mirrored the program’s existing preferences.
This tendency proved robust even when the bias contradicted common stereotypes about race and occupational status.
In the most extreme scenarios, where the AI was heavily biased, human participants went along with the program’s picks roughly 90 percent of the time, demonstrating a concerning overreliance on the AI’s output.
While humans make the final decision, the study indicates they are not performing the due diligence needed to mitigate algorithmic flaws, effectively turning the human supervisor into an enabler of systemic bias.
Expert Commentary and the “Power Tool” Risk
Experts warn that this phenomenon highlights the critical need for governance and training around AI tools.
According to scholars in the field, AI acts like a “power tool”: highly effective in the hands of an experienced user, but capable of causing significant damage when handed to a novice.
Lisa Simon, chief economist at Revelio Labs, noted that the study underscores the risk of AI reinforcing human bias rather than advancing the goal of equitable hiring.
For many HR teams, the drive for efficiency is paramount, but this focus comes with a major caveat.
As Sara Gutierrez, chief science officer at an HR solutions firm, stated, “Efficiency gains you get from an AI tool or process mean nothing if that tool isn’t reliable or fair.”
She added, “Speed without accuracy is just going to get you to the wrong outcome faster.”
The research concludes that companies must prioritize transparency in their AI deployment and actively train human recruiters not just to use these tools, but to critically scrutinize their outputs.
Otherwise, the integration of AI risks solidifying historic employment biases under a veneer of technological objectivity.