
The Eightfold AI Lawsuit Is Another Reminder of the Pitfalls of Using AI in Hiring

Eightfold AI has been hit with a lawsuit for providing tools that screen job applicants without their knowledge. Using AI in hiring comes with risks, particularly when the use of the technology remains undisclosed. This could mark the start of an era of employees pushing back against indiscriminate AI use in the workplace.

Artificial intelligence has been held up as the pinnacle of efficiency, promising businesses advanced decision-making support. The technology has found a prime role in recruitment and hiring, promising businesses access to top candidates as it filters out those it deems undeserving. Santa Clara-based Eightfold AI is a leading provider of recruitment tools for top businesses in the US, its technology designed to help businesses “scale at the speed of agentic AI, backed by deep talent intelligence to build [their] Infinite Workforce.”

The company takes this promise seriously, but candidates are not so thrilled about its execution. The AI recruiting company is being sued for allegedly violating the Fair Credit Reporting Act (FCRA), a federal consumer protection law that governs the collection and use of consumer credit information. While this is an unusual application of the law, the Eightfold AI news shows that existing legal frameworks are being used in many ways to protect worker interests in the absence of targeted AI regulations.

Eightfold AI lawsuit

Eightfold AI has been hit with a class action lawsuit over its AI applicant scoring systems and the undisclosed use of the technology. (Image: Pexels)

What Is the Eightfold AI Lawsuit About and Why Should HR Pay Attention?

AI recruiting company Eightfold AI is being sued by two job applicants, Erin Kistler and Sruti Bhaumik, who filed a class action lawsuit in California. The plaintiffs accuse Eightfold AI of using “hidden Artificial Intelligence technology to collect sensitive and often inaccurate information about unsuspecting job applicants and to score them from 0 to 5 for potential employers based on their supposed ‘likelihood of success’ on the job.” The problems here are manifold.

The issue begins with AI performing the applicant scoring, rating job applicants on a narrow scale against an abstract metric they cannot prepare for. There is also the problem of Eightfold allegedly collecting personal data that goes beyond resumes: the complaint claims the company looks at candidates’ social media profiles, internet activity, and other tracking information to build a profile of their behavior, attitudes, intelligence, and other characteristics, assessing them on grounds that may not pertain to the job.

While background checks and social media screenings have long been used in the hiring process, advancements in AI have reshaped just how much access employers have to the personal lives of potential employees, not just hired ones. 

The “Invisible” Threats that Are Gaining Traction in Hiring Have Candidates Feeling Unsettled

Another major aspect of the Eightfold AI lawsuit is that candidates were allegedly not informed that AI might be used in their screening. This denies them the opportunity to respond to or dispute any of the claims made in the AI-generated report before it is used to make decisions about their job eligibility. It vastly limits how much control candidates have over their job applications, particularly when their personal lives may not be an accurate indicator of their professionalism or their ability to perform the roles they are applying for.

AI tools and the LLMs behind them can access third-party data more comprehensively than ever before, making their intrusion into the online lives of job seekers far more unsettling. Most businesses still rely on older tools to make hiring decisions, but in the coming years, as more companies like Eightfold AI gain traction, some job seekers may find themselves struggling to find employment with these reports about their lives swirling in the mix.

Eightfold spokesperson Kurt Foeller told Reuters that some of the claims were inaccurate. “We do not scrape social media and the like. We are deeply committed to responsible AI, transparency, and compliance with applicable data protection and employment laws,” he said in a statement.

The Eightfold AI Lawsuit Could Have Far-Reaching Consequences

The Eightfold hiring lawsuit may target only one business, but it is one that powers the hiring of many Fortune 500 companies, including Microsoft and Salesforce, as well as the US Department of Defense, according to the company’s website; the implications of the lawsuit could therefore be far-reaching. The lack of sufficient AI regulation has certainly allowed AI businesses to grow, but it has also created a lawless context with no clear lines between acceptable and unacceptable practices, especially in hiring. California is one of the key US states that has begun work on regulating artificial intelligence, but progress towards clear standards of practice has been slow.

Companies like OpenAI and xAI have been hit with lawsuits over their AI tools’ encroachment on data, their ability to generate deepfakes and other inappropriate content, and their alleged role in leading children towards actions with dire consequences. HR management platform Workday has also been sued over its use of AI, which allegedly discriminated against a job applicant and violated the Age Discrimination in Employment Act (ADEA) in its decision-making. While Workday was not the potential employer in the situation, it is being held liable as an “agent” in the claim.

Many believe that rules stymie innovation, but the lack of safeguards poses threats that outweigh the benefits these AI tools offer.

In the Absence of Regulation, Employees Turn to Older Laws

There are no specific laws targeting Eightfold’s AI applicant scoring system or the lack of disclosure around the use of AI in hiring. The Civil Rights Council of the California Civil Rights Department has approved amendments under the Fair Employment and Housing Act (FEHA) that regulate the use of AI and automated-decision systems (ADS), providing some safeguards for workers against these tools, but there is more work to be done towards a comprehensive system of regulation.

The plaintiffs in the Eightfold AI lawsuit have turned to the Fair Credit Reporting Act to build their claim, arguing that the company’s hiring tools deprived job seekers of their right to view and challenge the reports used to make decisions about them. While the regulation is typically applied to credit assessments, it also covers reports used for “employment purposes,” which is the provision being invoked here.

Preparing a Comprehensive Strategy and Awareness Around AI Use Is Essential in 2026

For now, the businesses that rely on these AI hiring tools appear more insulated from backlash than the providers of the tools, but we are in the early days of AI adoption. Without careful planning, supervision, and compliance testing, AI hiring lawsuits could hit the businesses themselves, with teams insufficiently prepared to explain how their tools work and what data is used in hiring assessments.

Training HR teams and recruiters to understand the full scope of the AI tools they use is critical to ensuring the tools are used correctly. Setting clear guidelines on how the tools are implemented and what data is collected and evaluated is just as important, and can save organizations a lot of trouble in the long run. To err on the side of caution, informing job seekers about the potential use of the technology can also limit some of the resentment that comes with undisclosed use, even if it risks turning some applicants away.

What do you think about the lawsuit against Eightfold AI? Should AI hiring tools be further regulated? Share your thoughts in the comments with us. Subscribe to The HR Digest for more insights into the evolving landscape of work and deep dives into the legalities that surround it.


Anuradha Mukherjee
Anuradha Mukherjee is a writer for The HR Digest. With a background in psychology and experience working with people and purpose, she enjoys sharing her insights into the many ways the world is evolving today. Whether starting a dialogue on technology or the technicalities of work culture, she hopes to contribute to each discussion with a patient pause and an ear listening for signs of global change.
