
Navigating the AI Minefield: HR Grapples with Bias and Privacy Concerns

Experts warn that AI tools in HR can lead to bias and privacy concerns. While these tools can aid diversity and inclusion initiatives, their effectiveness depends on proper development and implementation. Employers must be mindful of potential issues and take proactive steps to guard against bias and liability.

Artificial intelligence tools hold the key to a wealth of potential benefits for employers, but keeping up with their rapidly evolving development can feel overwhelming. As Victoria Lipnic, head of the Human Capital Group at Resolution Economics, put it, "it's like having Everything Everywhere All at Once," a nod to the Oscar-nominated film. Lipnic, a former chair of the U.S. Equal Employment Opportunity Commission under Presidents Barack Obama and Donald Trump, moderated a panel on AI on Feb. 28 at the Society for Human Resource Management's Employment Law and Compliance Conference in Washington, D.C., where speakers explored the latest AI developments and the challenges they pose for employers.

AI tools offer employers "more, faster and hopefully better," Lipnic said. These tools can evaluate large databases to predict work outcomes such as performance, turnover, absenteeism, injury reduction, or sales, according to Eric Dunleavy, an industrial/organizational psychologist and director of the Employment & Litigation Services Division at DCI Consulting.

AI HR Bias

These AI technologies can also be designed to advance diversity and inclusion objectives, Dunleavy noted. "When the tools are developed and monitored correctly, they can work," he stressed. Therein lies the challenge: AI tools may produce bias against certain demographic groups, warns management attorney Savanna Shuntich of Fortney Scott in Washington, D.C.

The concept of “algorithmic bias” is a major concern in the employment context. This bias can take many forms and may occur unintentionally, even if the employer or the AI tool vendor hasn’t taken any discriminatory actions. For instance, algorithmic bias may arise if the data used to train an AI tool includes criminal records. As Shuntich points out, using an individual’s criminal history in employment decisions could result in race, color, or national origin discrimination, which is a violation of Title VII of the Civil Rights Act of 1964.

Moreover, "machine learning" itself can contribute to bias. This happens when an AI tool learns biased patterns over time, even if it was specifically trained against them, Shuntich explains. So while AI tools offer numerous benefits, employers must use them ethically and with caution to avoid perpetuating discrimination.


One of the major challenges employers face is avoiding bias when using AI tools. While these tools can be designed to achieve diversity and inclusion (D&I) objectives, they can also unintentionally discriminate against certain demographic groups. This "algorithmic bias" can occur in many forms.

For example, an AI tool trained on a database of top-performing employees may develop a preference for certain demographics, even if the programmer tells it not to select based on race or gender. Similarly, an AI assessment used in the selection process may pose challenges for people with disabilities, who may have difficulty accessing the assessment or understanding the criteria being evaluated.

To guard against bias and potential liability, employers must provide notice and obtain consent from applicants and employees before using AI tools. This includes explaining what type of tool is being used and providing enough information for individuals to understand the criteria being evaluated. Employers must also be mindful of the way their AI tools can affect people with disabilities and be prepared to provide reasonable accommodations.

By taking these steps, employers can harness the power of AI while avoiding bias and promoting diversity and inclusion in the workplace.

AI tools, though designed to help employers make better decisions, can do more harm than good when algorithmic bias creeps in. The problem arises when a tool is developed or monitored incorrectly, leading to bias against certain demographic groups, according to experts in the field.

Imagine this scenario: an employer wants to find workers like its top performers, so it uses an AI tool to evaluate data about those employees. If the underlying database isn't diverse, the tool may develop a preference for a certain gender or race, even if the programmers tell it not to select based on those criteria. This is just one way AI can lead to bias and potential discrimination against certain groups.
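The top-performer scenario above can be sketched in a few lines of code. This is a toy illustration, not any vendor's actual model: the protected attribute is never shown to the "model," but a correlated feature (a hypothetical zip-code flag standing in for any proxy variable) reproduces the historical skew anyway. All data here is fabricated for illustration.

```python
# Toy sketch of "proxy" algorithmic bias. The protected attribute is
# excluded from scoring, yet a correlated feature carries the bias.
# Fabricated data; names (zip_flag, group) are hypothetical.

historical_hires = [
    # (zip_flag, protected_group, was_top_performer)
    (1, "A", True), (1, "A", True), (1, "A", True), (1, "A", False),
    (0, "B", True), (0, "B", False), (0, "B", False), (0, "B", False),
]

def top_performer_rate(flag):
    """Observed top-performer rate for candidates with this zip_flag."""
    rows = [r for r in historical_hires if r[0] == flag]
    return sum(r[2] for r in rows) / len(rows)

# A naive "model" scores new candidates by their zip_flag alone.
score = {flag: top_performer_rate(flag) for flag in (0, 1)}

# Even though protected_group was never used, zip_flag is a near-perfect
# proxy for it here, so the scores inherit the group gap:
print(score)  # {0: 0.25, 1: 0.75}
```

The point of the sketch: simply deleting the protected column does not de-bias a model trained on skewed historical outcomes, which is why experts stress monitoring the tool's outputs, not just its inputs.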

Additionally, AI tools conducting background checks can pull data from a far broader range of sources than traditional selection tools, including social media, which raises data privacy concerns as well. Applicants may not even be aware that their social media profiles are being evaluated as part of the selection process.

Employers also need to be mindful of how their AI tools can affect people with disabilities and ensure that applicants know they have the right to request a reasonable accommodation. Notice and consent are critical as well: notice means explaining what type of AI-enabled tool is being used, and consent means obtaining the applicant's or employee's agreement before using it.

At the federal level, there is currently no law on notice and consent, though the Biden administration is examining the issue; employers can review the White House Blueprint for an AI Bill of Rights to get a sense of the administration's focus. State and local laws also matter: New York City's law requires employers to conduct bias audits on automated employment decision tools, including AI tools, and to provide notice about their use to employees and job candidates.
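Bias audits of the kind New York City requires typically compare selection rates across demographic groups. A common rule of thumb in U.S. adverse-impact analysis is the "four-fifths rule": a group's selection rate below 80% of the highest group's rate is a red flag. The sketch below shows that arithmetic with made-up numbers; the group names and counts are purely illustrative, and a real audit involves far more than this calculation.

```python
# Minimal sketch of a four-fifths-rule impact-ratio check, as used in
# adverse-impact analysis. All group names and counts are hypothetical.

def impact_ratios(groups):
    """Return each group's selection rate divided by the highest rate.

    `groups` maps group name -> (selected_count, applicant_count).
    A ratio below 0.8 is a common red flag for adverse impact.
    """
    rates = {g: s / n for g, (s, n) in groups.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Illustrative outcomes from a hypothetical AI resume screener.
outcomes = {
    "group_a": (48, 100),  # 48% selection rate
    "group_b": (30, 100),  # 30% selection rate
}
ratios = impact_ratios(outcomes)
flagged = {g for g, r in ratios.items() if r < 0.8}

print(ratios)   # group_b's ratio is 30/48 = 0.625
print(flagged)  # {'group_b'} falls below the four-fifths threshold
```

A result like this would not by itself prove discrimination, but it is the kind of disparity an audit is meant to surface so the employer can investigate the tool before liability accrues.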

In summary, employers need to understand where and how they use AI, conduct a proactive review of their AI use, and ensure that their AI vendor is transparent and has a good track record. With these precautions, AI tools can provide valuable insights and help achieve diversity and inclusion objectives.


Diana Coker
Diana Coker is a staff writer at The HR Digest, based in New York. She also reports for brands like Technowize. Diana covers HR news, corporate culture, employee benefits, compensation, and leadership. She loves writing HR success stories of individuals who inspire the world. She’s keen on political science and entertains her readers by covering usual workplace tactics.
