
AI Bias in Hiring

Feb 20, 2024


Artificial intelligence’s (AI) rapid expansion into everyday life hasn’t left human resources behind. AI promises to be a great equalizer, basing decisions on the facts and patterns it gathers through machine learning (ML).

Yet many of us still mistrust AI today, unsure whether machines should be making life-changing decisions for us.

It’s no different in HR. New guidance from the U.S. federal government makes one thing clear: don’t just rely on AI. Let’s break down the problems, opportunities, and future of AI by looking at how you can apply it to your HR practices without letting bias interfere with decisions.

What is AI, really?

McKinsey & Company defines AI as “a machine’s ability to perform the cognitive functions we associate with human minds, such as perceiving, reasoning, learning, interacting with an environment, problem-solving, and even exercising creativity.” It has worked behind the scenes for about 70 years in technologies such as robotic process automation (RPA), but today’s generative AI and machine learning have become a transformative force.

How is AI used in the hiring and promotions process?

AI can efficiently and effectively filter the applications received for a job opening in your company. It can sort resumes for the specific skill sets an employer expects, and it can help hiring managers compare applicants by pulling together small pieces of information about each one to inform decisions.
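To make that concrete, here is a minimal sketch of the kind of keyword-based screening an AI tool might perform, written in Python. The skill list and resume snippets are hypothetical placeholders, and real tools use far more sophisticated models:

    # Minimal sketch of keyword-based resume screening. The required
    # skills and resume texts are hypothetical placeholders.
    REQUIRED_SKILLS = {"python", "sql", "project management"}

    def score_resume(text: str) -> float:
        """Return the fraction of required skills mentioned in the resume."""
        text = text.lower()
        matched = {skill for skill in REQUIRED_SKILLS if skill in text}
        return len(matched) / len(REQUIRED_SKILLS)

    resumes = {
        "candidate_1": "Five years of Python and SQL experience on data teams.",
        "candidate_2": "Led project management for a regional retail chain.",
    }

    # Rank candidates by match score, highest first.
    for name, text in sorted(resumes.items(), key=lambda kv: -score_resume(kv[1])):
        print(name, round(score_resume(text), 2))

Even this toy version hints at the risk: whoever chooses the keywords and the training examples shapes who rises to the top.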

Sounds beneficial—so why so much mistrust?

AI isn’t something out of movies like The Matrix or Terminator, where robots take over the world. The real problem is that it gets things wrong. A flaw in AI training data can easily translate into inaccurate output you shouldn’t rely on. Uncertainty about the accuracy of that output can make using AI worrisome.

How AI bias occurs in hiring and promotions

AI seems to promise the same benefit as blind hiring, which withholds demographic information to limit bias in decisions. The job site Zippia reports that:

  • Blind hiring increases the likelihood of women being hired by as much as 46%.
  • A resume with a name that “sounds white” is twice as likely to receive an interview callback as one with a Black-sounding name.
  • Even so, nearly half of all hiring managers recognize bias in their own decision-making.

That’s why many companies have turned to AI and blind hiring to combat bias.

In its 2023 Talent Index report, talent lifecycle management platform Beamery found that 59% of polled job seekers said companies used AI during the recruitment process.

AI is a tool, like many others, that hiring managers and HR directors can use to inform decisions. However, AI has one big (super important) limitation: it makes decisions based on the information used to train it. The quality of AI training data matters.

In other words, “garbage in, garbage out.”

Training data can be any dataset you decide to use, from your own company’s history to the entire internet. If that training data contains bias, the decisions made by your AI tools will carry that bias.

Why Is AI Data Biased?

You may think hiring managers have moved past letting bias influence decisions, whether based on race, age, disability, or another protected class. Yet that’s not the case. Zippia reports that 85 to 97% of hiring managers rely on intuition (which could mean a gut feeling or an unrecognized bias) to make decisions.

AI often uses historical data to inform decisions. What happened in the past is the best predictor of the future, right? Now think about hiring 20 or more years ago. Could you safely say there was no bias then? If the historical data an AI tool learns from lacks diversity, the tool will make hiring and promotion decisions that lack diversity as well.

There are various types of bias in AI algorithms that can influence decision-making. HR Morning noted these four examples (a simple check for the first is sketched after the list):

  • Sample bias: The training data doesn’t reflect the real-world population, for example when one group is over- or under-represented.
  • Representation bias: Poor data collection methods fail to capture the full diversity of the local population or of all demographics.
  • Algorithmic bias: Bias arising from the algorithm itself, such as the structure of the neural network or the prior assumptions built into the model.
  • Measurement bias: Errors in how the training data is measured or labeled lead to poor decision-making.
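As a concrete illustration of the first item, here is a minimal sample-bias check in Python. It assumes a pandas DataFrame with a demographic column; the column name and reference shares are illustrative assumptions, not real data:

    # Compare each group's share in the training data to a reference
    # population; large gaps suggest sample bias.
    import pandas as pd

    def sample_bias_report(df: pd.DataFrame, column: str, reference: dict) -> pd.DataFrame:
        observed = df[column].value_counts(normalize=True)
        rows = []
        for group, expected in reference.items():
            actual = float(observed.get(group, 0.0))
            rows.append({"group": group,
                         "expected_share": expected,
                         "observed_share": round(actual, 3),
                         "gap": round(actual - expected, 3)})
        return pd.DataFrame(rows)

    # Hypothetical resume dataset, heavily skewed toward one group.
    resumes = pd.DataFrame({"gender": ["M"] * 70 + ["F"] * 30})
    print(sample_bias_report(resumes, "gender", {"M": 0.49, "F": 0.51}))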

How does AI affect individuals and businesses?

Look deeper than just missing out on a job. AI has a powerful ripple effect within the employment world. If a diversity gap or other flaw sneaks into an AI tool, it compounds over time, shaping the talent the company hires, the people who go on to make its decisions, and the makeup of its workforce.

Bernard Marr believes AI will affect society at every level, including the economy, the law, politics, and the regulation of jobs across all industries.

What Does the EEOC Say?

It’s critical to consider how AI intersects with the Equal Employment Opportunity Commission (EEOC) rules all companies must follow. The EEOC released technical guidance in May 2023 that puts pressure on companies to ensure no bias creeps in when AI tools are used during the hiring process.

How should HR directors weigh the legal implications of using AI? Could a company face liability if decisions made by AI tools violate compliance or regulatory requirements?
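One concrete test the EEOC’s technical assistance discusses is the long-standing “four-fifths rule”: if one group’s selection rate is less than 80% of the most-favored group’s rate, the tool deserves a closer look. Here is a minimal sketch of that check in Python, using illustrative applicant counts rather than real data:

    # Four-fifths (80%) rule check for adverse impact. The applicant
    # counts below are illustrative, not real hiring data.
    def four_fifths_check(outcomes: dict, threshold: float = 0.8) -> dict:
        """outcomes maps group -> (selected, total_applicants)."""
        rates = {g: sel / total for g, (sel, total) in outcomes.items()}
        best = max(rates.values())
        # Impact ratio: each group's selection rate relative to the best rate.
        return {g: {"rate": round(r, 3),
                    "impact_ratio": round(r / best, 3),
                    "flagged": (r / best) < threshold}
                for g, r in rates.items()}

    applicants = {"group_a": (48, 100), "group_b": (30, 100)}
    for group, result in four_fifths_check(applicants).items():
        print(group, result)
    # group_b's ratio is 0.625 (< 0.8), so this tool would warrant review.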

How we can overcome AI bias in hiring and promotions

There are ways to reduce AI bias in hiring and promotions that you can implement yourself (no tech degree needed here!).

1. Train AI on datasets that offer the most comprehensive data.

Just like training the new hires coming into your company, training AI on unbiased datasets is the critical first step in mitigating this problem. One way to support this is to combine in-house and third-party data sources, widening the pool of information available.

2. Train HR teams to use AI.

Every person involved in any aspect of using AI must be able to identify and address potential bias before decisions are made. That includes monitoring for bias at every stage and looking beyond historical data. Use simulations to uncover discrepancies and bias across all protected classes; one simple version of such a test is sketched below.
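Here is one minimal way such a simulation might look in Python: score pairs of identical candidate profiles that differ only in a protected attribute and measure the gap. The score_candidate function is a hypothetical stand-in for whatever screening tool you actually use, with a deliberate flaw injected so the test has something to catch:

    # Perturbation ("simulation") test: vary only a protected attribute
    # and compare scores. score_candidate stands in for your real tool.
    def score_candidate(profile: dict) -> float:
        score = 0.1 * profile["years_experience"]
        if profile["gender"] == "man":  # injected bias the test should find
            score += 0.2
        return score

    def perturbation_test(base: dict, attribute: str, values: list) -> dict:
        """Score the same profile under each value of a protected attribute."""
        return {v: score_candidate({**base, attribute: v}) for v in values}

    base = {"years_experience": 5, "skills": "python;sql"}
    results = perturbation_test(base, "gender", ["woman", "man"])
    gap = max(results.values()) - min(results.values())
    print(results, f"score gap: {gap:.2f}")
    # A consistent gap across many simulated profiles is a red flag.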

3. Ensure your company’s bias and fairness standards are clear.

None of the other steps work if the company’s bias and fairness standards are unclear. Ensure leadership defines expectations in this area clearly, both to reduce the risk of bias and to tell hiring teams what is and is not acceptable.

Many companies are already using bias-free AI effectively. For example, Electrolux, the home appliance manufacturer, felt pressure from an aging workforce and a shortage of talent. It turned to AI to help overhaul its marketing and hiring efforts. Its recruiters built automated nurturing campaigns that gave applicants clearer information about job opportunities and matched openings to candidates’ preferences, interests, and career objectives.

Stanford Health Care is another example. It created an AI chatbot that streamlined an otherwise complicated hiring process, letting candidates complete the application on their phones at their own pace and then offering relevant job matches.

How Can Individuals Protect Themselves from AI Bias in Hiring and Promotions?

AI doesn’t get to make all the decisions. It shouldn’t scare you either, especially since this is more of an adapt-and-overcome situation (AI isn’t going anywhere).

Organizations like the American Civil Liberties Union (ACLU) are working to ensure companies recognize the risks of AI bias in hiring and promotions and adjust for it. They recommend:

  • Laws that prohibit employers from using biased datasets.
  • Greater transparency about how AI is used in decisions.
  • Assurances that AI tools do not violate privacy protections.

Most importantly, employees should understand that companies are obligated to test their AI and ensure it does not violate the law. If you believe you have been harmed by a biased AI decision, consider raising the issue with the hiring team or leadership.

The White House’s guidance on algorithmic discrimination protections encourages disparity testing and mitigation efforts whenever companies use automated systems in decisions like these. That guidance is part of what it calls a “Blueprint for an AI Bill of Rights.”

What Is the Future of AI in Hiring and Promotions?

Artificial intelligence will continue to move into all aspects of hiring and promotions. Supply Chain Brain offers clarity, especially on how AI can support blue-collar companies and workers, saying that the best route forward is a “harmonious collaboration, where the strengths of both AI and human expertise unite to forge an optimal recruitment approach.”

Using AI in the initial stages, identifying candidates and matching skills to needs, is a solid way to apply its ability to process data quickly. Combined with recruiters who are exceptionally skilled in using AI, it can help create better outcomes for companies.

Will that happen? It’s going to take some work, but it’s the goal.

 

Have you formed your AI Governance team?

Read about how you can lead your organization to well-governed artificial intelligence to ensure your success and compliance.

Download your free eBook:


About Pixentia

Pixentia is a full-service technology company dedicated to helping clients solve business problems, improve the capability of their people, and achieve better results. 
