The introduction of artificial intelligence is gradually reshaping the recruitment landscape. Dr Nathan Mondragon, Chief Industrial-Organisational Psychologist at HireVue, compares the biases of human interviewers against those of artificial intelligence.
While we tend to focus on the fears of an autonomous future, there has been little discussion about how machines are actually benefiting industries and augmenting humans. In the recruitment industry, for instance, companies of all sizes have invested significantly in integrating artificial intelligence (AI) and machine learning into the hiring process over the last few years. Companies are employing these solutions to save time and money for both themselves and candidates, as well as to widen their talent pool and hire stronger, more diverse talent.
However, there are still question marks over this new way of hiring. One of the loudest arguments I hear against the integration of algorithms in recruitment is the fear that AI can create systematic biases against social demographics, doing more harm than good in the long term. This is certainly a valid concern, but before attempting a definitive answer we must first understand how AI-assisted hiring compares against the traditional process.
Hiring bias isn’t an issue that’s unique to algorithms. Unconscious human bias in traditional interviews has been a cause for concern for recruitment teams and candidates for decades. You’ve probably either been on the receiving end of this bias in your own job searches, or, if you’ve had to conduct interviews, you’ve unknowingly applied your own unconscious bias to your decisions. Typically, we favour candidates from similar backgrounds to ourselves or compare candidates against previous employees – even if we aren’t consciously aware of it!
Combine this with the belief that we are more effective interviewers than we truly are, and it becomes abundantly obvious that the traditional interview process is flawed. As interviewers, we are affected by our mood, our hunger and any number of unpredictable factors that influence how we feel on the day. We overestimate our abilities in other areas of life too – research shows, for example, that most people are not as good at driving as they think they are. On the road, this means we overestimate our parallel parking skills. In hiring, it means we interview with varying levels of consistency. The difference is that the hiring process can change a person’s livelihood and future.
Fast forward to the end of the hiring process and it still makes for bleak reading. Most candidates rarely receive a response from companies, or if they do, it usually takes weeks. When candidates do receive feedback on unsuccessful applications, it is typically short and vague. A further extreme, which is also alarmingly commonplace, is the outright dismissal of candidates with ethnic-sounding names. Once you pause and take stock of the problems within traditional hiring, you can understand why companies are searching for ways to improve it.
Which is why many are turning to AI. While clickbait headlines initially labelled it as “scary” and “strange,” there is compelling evidence for AI as an effective way to democratise the hiring process and overcome many of the issues outlined above. That isn’t to say AI comes without issues, however. In the past year alone, multiple stories have appeared in the media suggesting that algorithms are inheriting and scaling human bias in recruitment. For instance, Amazon recently announced it had abandoned development of its AI recruiting tool after the system taught itself that male applicants were preferable to female ones – the majority of the resumes it processed over the past decade had come from men, and those resumes were the input data used to train the algorithm.
Without rigorous testing and validation of the data sets and the resulting algorithms themselves, AI can recreate the worst of the traditional hiring process with unfortunate consequences for both candidates and companies.
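To make the idea of “rigorous testing and validation” concrete, here is a minimal sketch – not from the article, and not any vendor’s actual method – of one standard check from employment selection practice: the “four-fifths rule”, under which a group’s selection rate below 80 percent of the highest group’s rate is treated as evidence of possible adverse impact. The function name and the input numbers are illustrative assumptions.

```python
# Hypothetical sketch of an adverse-impact audit using the "four-fifths rule",
# a common validation step for screening algorithms (illustrative only).

def selection_rate(selected, applicants):
    """Fraction of a group's applicants who passed the screen."""
    return selected / applicants if applicants else 0.0

def four_fifths_check(group_outcomes):
    """group_outcomes maps group name -> (selected_count, applicant_count).

    Returns (impact_ratios, flagged_groups): each group's selection rate
    divided by the highest group's rate, and the groups whose ratio falls
    below the 0.8 threshold."""
    rates = {g: selection_rate(s, n) for g, (s, n) in group_outcomes.items()}
    best = max(rates.values())
    ratios = {g: (r / best if best else 0.0) for g, r in rates.items()}
    flagged = [g for g, ratio in ratios.items() if ratio < 0.8]
    return ratios, flagged

# Illustrative, made-up screening outcomes for two applicant groups:
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
ratios, flagged = four_fifths_check(outcomes)
# group_b's rate (0.30) is 62.5% of group_a's (0.48), so it is flagged.
```

A check like this is only a first-pass signal, which is why it belongs in a recurring audit of both the training data and the algorithm’s outputs rather than a one-off test.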
That doesn’t mean we should avoid applying AI and machine learning to the hiring process. It means that any team developing AI technology for hiring must approach it with diligence and due process in order to ensure a fair shot for all qualified candidates. It’s important to remember that AI cannot learn to discriminate against particular social groups by itself. Algorithms only begin to form biases if corruption occurs when the tool is configured – through either biased data or biased trainers.
Many academics, policy experts, and workplace psychologists, working in concert with ethical data scientists, have dedicated themselves to the accurate and objective application of AI in hiring – to prevent this situation from occurring, or to mitigate any similar situation that might arise.
We shouldn’t be asking whether AI hiring is currently perfect – it isn’t. But if asked “are AI-facilitated interviews better than traditional hiring practices?”, the answer would unequivocally be yes. We know AI and machine learning technology create tangible hiring improvements here and now. For example, Unilever increased diversity by 16 percent after implementing an AI-based recruiting approach. When done properly, technology already has the capability not only to mitigate bias in hiring, but also to help companies identify biased patterns in their hiring to date – and then work to solve them.
If we wait until the technology behind AI’s application in recruiting and hiring is perfect, we’ll never innovate. There will always be adjustments that can and will be made to improve algorithms, but if we keep waiting and keep using traditional methods of hiring, we’ll do more harm than good to job seekers, who deserve to be judged on the basis of their merits alone and who are under increasing pressure to showcase their skills differently as the workplace becomes ever more dynamic.
Given the critical nature of the hiring process to both the candidate and company, a carefully crafted and tested algorithm can reserve judgment and focus it where it belongs — on performance and potential.