
Human Recruiters May Mirror AI Biases: Revisiting the Role of AI in Hiring

Businesses have committed to handing off hiring responsibilities to AI, but little is being done to address the problem of bias in AI hiring. A survey from Resume.org previously suggested that 57% of businesses are already using AI in their hiring process, and that 1 in 3 companies believe AI will run their entire hiring process by 2026. Recruiters and hiring teams have found considerable advantages in relying on the technology, but questions are emerging over whether AI is ready to take over the process on its own.

A new study revealed that human recruiters are growing so reliant on the technology that they are willing to accept an AI's biased recommendations and make hiring decisions based on them. Instead of fixing AI hiring bias, many businesses are becoming complacent, accepting the technology's decisions without question, which is a risky path to tread right now.


A new study revealed that human recruiters are susceptible to accepting AI’s bias in hiring, mirroring its assessments without question. (Image: Pexels)

Human Recruiters Often Accept AI Bias Without Question: Why Supervision and Intervention Matter

A new University of Washington study of 528 participants assessed the impact of bias in AI hiring by simulating scenarios in which participants were asked to make hiring decisions for 16 different jobs. The study found that when picking candidates without AI assistance, or with an AI simulated to make neutral recommendations, participants selected white and non-white candidates at equal rates.

On the other hand, when different degrees of bias were simulated in the LLM's recommendations, participants leaned in the direction of that bias, favoring either white or non-white candidates accordingly. Even when the bias was dialed up to extreme severity, participants made decisions that were only slightly less biased than the AI's recommendations.

Accepting such AI bias in hiring can be extremely detrimental to the well-being of any business, especially when these tools are allowed to make decisions without supervision. 

Is AI’s Bias in Recruitment Really That Concerning?

The presence of bias in hiring is not particularly new, as individual preferences and predispositions often find a way of leaking into a recruiter's decisions. In the past, training to recognize and eliminate this bias was an essential safeguard, and other aspects of an employee's workplace behavior provided clear signs when they held biased opinions. A diversified recruitment team also helped maintain balance within the workplace, ensuring all decisions were reviewed with care.

Bias in AI-supported hiring decisions can be much harder to detect and correct. When relying on externally provided AI tools, it is difficult to determine where the training data comes from. Some training may be provided internally with clear protocols on how to make hiring decisions, but without supervision, it is hard to identify when those protocols fail.

In time, human recruiters can grow complacent about AI bias, more comfortable allowing the tool to make decisions than reviewing them for legitimacy. Such bias in AI hiring tools can cause companies to lose out on talent, skewing the composition of the workforce without reason. On the legal front, accepting an AI's bias in hiring can also open the company up to allegations of discriminatory behavior if unhappy candidates determine that they were treated unfairly.

Regulations To Address AI Use In Recruitment Are On Their Way

It is up to recruiters and HR leaders to fix AI bias in hiring and ensure that the tool works as intended. California is leading the way with regulations on AI use in the workplace, with several policies under discussion, primarily around hiring, to ensure that major decisions are not left to AI tools. However excited businesses may be about AI's potential takeover of recruitment, it is important that the final decision always rests with a human supervisor.

This is beneficial not just for businesses that want to stay out of legal trouble, but also for those that want their internal systems to be of the highest quality. Until these regulations are formalized and take effect in the workplace, businesses must maintain ethical hiring standards and ensure there is no room for bias or discrimination across their operations.


What do you think about these reports of human recruiters getting comfortable with allowing AI bias into hiring procedures? Share your thoughts with us. Subscribe to The HR Digest for more insights on workplace trends, layoffs, and what to expect with the advent of AI. 


Anuradha Mukherjee
Anuradha Mukherjee is a writer for The HR Digest. With a background in psychology and experience working with people and purpose, she enjoys sharing her insights into the many ways the world is evolving today. Whether starting a dialogue on technology or the technicalities of work culture, she hopes to contribute to each discussion with a patient pause and an ear listening for signs of global change.

