
Making Artificial Intelligence more intelligent

  • 4 Min Read

Artificial Intelligence has proven to be an effective tool in the world of talent. Kay Cooper, Managing Director, RPO, EMEA at Korn Ferry discusses how far this approach could develop and the advantages it offers.


It’s no secret that artificial intelligence (AI) has drastically changed the way organisations recruit talent. From sourcing candidates to assessing their potential, scheduling interviews, and creating compensation models, AI has helped HR professionals quickly and efficiently onboard more qualified candidates.

Not to mention, it’s provided recruiters with more time to focus on other aspects of their job, such as fine-tuning the candidate experience and implementing assessments to ensure the best person is hired.

However, amid all the benefits these tools have provided, recruiters ran into a major problem: in many cases, AI developed a bias against members of under-served groups, leaving many well-qualified candidates in the dust.

Here’s how AI developed certain biases

AI machine-learning tools are built to identify patterns and produce the best results. AI is fed information – such as terms in past job profiles and resumes from past candidates – and, over time, learns to associate those terms or past profiles with candidates who seem to have the greatest chance for success.

Because many women, LGBTQ employees, older workers or people of colour have traditionally not filled certain roles, when AI matched past resumes or job profiles to new ones, it mistakenly taught itself to look at a narrower group of candidates.

As it turned out, for many roles AI was favouring resumes that contained masculine language – such as “executed,” “takes charge,” and “competitive” – over feminine language – such as “collaborative,” “supportive,” and “committed to understanding.”
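To make that mechanism concrete, here is a minimal sketch – with invented toy data, not any vendor's actual model – of how a classifier trained on historical hiring outcomes absorbs whatever wording those outcomes happened to reward:

```python
# Illustrative sketch only: toy data, not a real recruiting system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical historical resumes and whether the candidate was hired.
# Past hires skew toward masculine-coded wording, so that correlation
# is exactly what the model learns.
resumes = [
    "executed product launch, takes charge, competitive",
    "led team, executed strategy, competitive mindset",
    "collaborative, supportive, committed to understanding users",
    "supportive mentor, collaborative cross-team work",
]
hired = [1, 1, 0, 0]  # biased historical outcomes

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(resumes, hired)

# A strong candidate who happens to use feminine-coded language is
# scored down for wording, not skills.
print(model.predict_proba(["collaborative engineer, executed two launches"]))
```

The point of the sketch is that nothing in the pipeline is explicitly told to discriminate; the bias arrives entirely through the historical labels it learns from.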

Reprogramming AI to be more neutral

Suffice it to say, it’s imperative to retrain AI recruiting tools to strip out the bias and take a more holistic approach. To do this, programmers must first remove the gender-, age- and race-biased data from the historical training sets and build new “success profiles” that can teach AI what success can look like rather than strictly what it has looked like.

That way, AI can evaluate the skills and capabilities needed for a specific role – such as the ability to code in a specific programming language – rather than relying solely on specific phrases and previous success stories.
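A skills-first version of that scoring step might look like the following sketch; the success profile, field names and weights here are all hypothetical:

```python
# Hypothetical sketch: score candidates against a role's "success profile"
# of required skills rather than their resemblance to past hires.

# Skills the role actually requires, weighted by importance.
SUCCESS_PROFILE = {"python": 3, "sql": 2, "stakeholder communication": 2}

# Attributes that should never reach the scoring step.
PROTECTED = {"name", "gender", "age", "ethnicity", "university"}

def score(candidate: dict) -> int:
    """Credit declared skills only; drop protected attributes first."""
    features = {k: v for k, v in candidate.items() if k not in PROTECTED}
    skills = set(features.get("skills", []))
    return sum(w for skill, w in SUCCESS_PROFILE.items() if skill in skills)

candidate = {
    "name": "A. Example",        # stripped before scoring
    "university": "Anywhere U",  # stripped before scoring
    "skills": ["python", "stakeholder communication"],
}
print(score(candidate))  # 5 -- driven by capabilities, not pedigree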

Recruiters must also make a point to eliminate gendered language from job listings. Traditionally, listings have contained language that can deter candidates from applying to certain positions, such as “digital native”, a phrase that is biased against older candidates.

In one study, researchers from the University of Waterloo and Duke University found that job listings in male-dominated fields – like software programming – used more masculine language, like “leader” or “dominate.” Such language, they learned, made the positions less appealing to women. Even including mandatory requirements – such as personal qualities or years of experience – can influence who applies. Research has shown that many women won’t apply for a position unless they meet all of the requirements, whereas men will apply if they meet just 60 percent of them.

To combat this, many HR professionals are now using AI tools to quickly scan job ads and remove any masculine- or feminine-coded phrases. The more gender neutral a job description is, the wider – and more inclusive – a net you’ll cast. A minimal sketch of such a scanner follows.
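In the sketch below, the word lists are short illustrative samples in the spirit of the Waterloo and Duke findings, not the researchers’ full lexicon:

```python
# Sketch of a gender-coded language scanner for job ads.
MASCULINE_CODED = {"leader", "dominate", "competitive", "executed", "driven"}
FEMININE_CODED = {"collaborative", "supportive", "nurture", "committed"}

def flag_coded_words(listing: str) -> dict:
    """Return the masculine- and feminine-coded words found in an ad."""
    words = {w.strip(".,;:!?").lower() for w in listing.split()}
    return {
        "masculine": sorted(words & MASCULINE_CODED),
        "feminine": sorted(words & FEMININE_CODED),
    }

ad = "We need a competitive leader to dominate the market."
print(flag_coded_words(ad))
# {'masculine': ['competitive', 'dominate', 'leader'], 'feminine': []}
```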

Anonymising resumes until later in the hiring process

Oftentimes, our unconscious biases influence who we think a strong candidate is. However, research has shown that the assumptions we make about an applicant’s age, gender, or university are frequently discriminatory and, in general, don’t accurately align with a person’s potential or performance.

Fortunately, blind screening – where personal information such as name, college, date of birth, and location is not revealed until later in the hiring process – has become increasingly popular, especially in the UK.

AI can anonymise resumes – at least during the beginning stages of the hiring process – and allow recruiters to evaluate the talent pool more fairly, without any preconceived notions or biases. As a result, blind screening can help build a more diverse workforce, which has been linked to better group problem solving and higher financial performance.
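In practice, the anonymisation step can be as simple as redacting identifying fields before a reviewer sees the resume; this sketch uses hypothetical field names:

```python
# Illustrative blind-screening step: redact identifying fields.
IDENTIFYING_FIELDS = {"name", "college", "date_of_birth", "location"}

def anonymise(resume: dict) -> dict:
    """Return a copy of the resume with identifying fields masked."""
    return {
        k: ("[REDACTED]" if k in IDENTIFYING_FIELDS else v)
        for k, v in resume.items()
    }

resume = {
    "name": "Jane Doe",
    "college": "Example University",
    "date_of_birth": "1990-01-01",
    "location": "London",
    "skills": ["python", "sql"],
    "experience_years": 7,
}
print(anonymise(resume))  # skills and experience survive; identity does not
```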

When used effectively, AI can create more diversity

All in all, the success of AI boils down to whether it is built to produce fair, objective results. In order for it to do so, recruiters need to periodically evaluate the data that’s being fed to AI along with the results it produces to ensure there’s no bias.
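One concrete check such an audit might include – assuming the organisation logs each applicant’s group and outcome – is comparing selection rates across groups, as in the “four-fifths” adverse-impact heuristic sketched here:

```python
# Sketch of a periodic bias audit: compare selection rates across groups.
# The four-fifths rule, a common US adverse-impact heuristic, flags any
# group whose rate falls below 80% of the highest group's rate.
from collections import defaultdict

def selection_rates(outcomes):
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_flags(outcomes):
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best < 0.8 for g, rate in rates.items()}

log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False), ("B", False)]
print(adverse_impact_flags(log))  # {'A': False, 'B': True} -- B is flagged
```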

Experts agree that AI remains an effective tool in sourcing top talent. As long as it’s being built and evaluated properly, AI can and will help reduce human bias and build diversity in the workplace.
