Changing up the interview process to improve diversity in the workforce is one of the top priorities among large tech companies looking to grow their teams.
Unfortunately, relying on the human mind to make impartial hiring decisions means dealing with hidden biases we hope never see the light of day.
The idea of adding robots to the decision-making process is to screen applicants based on logic: finding the right person for the job regardless of race, gender, background, and so on.
So, it kind of makes sense, right? After all, artificial intelligence stands to be a beacon of equality, hiding demographics and matching candidates based on skill before biased humans ever see an application.
The future of an AI HR department
How is the technology being used today?
These days, robots are part of the hiring process, but they're generally relegated to the initial screening stage. For example, you might get a phone interview with a company, but rather than talking to a person over the phone or a video conferencing app, you'll answer a series of questions for a robot, which later applies an algorithm to your answers to gauge compatibility.
Companies like HireVue are supplying this technology to Goldman Sachs, Under Armour, and Unilever, among others. The platform gives all applicants the same set of questions, which HireVue says helps eliminate biases.
Proponents of AI-based hiring say it streamlines the process, and it's easy to understand the appeal: the approach saves companies time and lets them sort through more candidates than ever.
Unilever has been using artificial intelligence to screen entry-level applicants for nearly two years now. Its process screens candidates by asking them to play neuroscience-based games that measure traits like risk management or the ability to read contextual clues.
If applicants make it through the games phase, they continue on to the interview stage, where they record their answers to a series of interview questions. Interviews are not live; instead, the AI analyzes each answer, scanning for keywords, body language, and intonation.
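To make the keyword-scanning piece concrete, here's a minimal sketch of how a transcript might be scored against a weighted term list. The terms, weights, and scoring function are our own illustrative assumptions, not HireVue's actual method (which also factors in video signals like body language and intonation).

```python
import re
from collections import Counter

# Hypothetical keyword weights a screening tool might use.
# These terms and values are illustrative only, not any vendor's real model.
TARGET_TERMS = {"collaborate": 2.0, "deadline": 1.5, "ownership": 2.5}

def score_transcript(transcript: str) -> float:
    """Score an interview transcript by counting weighted keyword hits."""
    words = Counter(re.findall(r"[a-z']+", transcript.lower()))
    return sum(weight * words[term] for term, weight in TARGET_TERMS.items())

answer = ("I took ownership of the feature and helped the team "
          "collaborate to hit a tight deadline.")
print(score_transcript(answer))  # 2.5 + 2.0 + 1.5 = 6.0
```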
Pymetrics is the company behind the games, and it advertises its service as a way of “matching talent to opportunity, bias-free.”
Here's how it works: participating companies have existing employees play the games. Pymetrics analyzes the data to pick out specific trends, then builds algorithms that predict a candidate's likelihood of success. Prospective candidates, as in the Unilever example, play the games and are matched to opportunities.
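As a rough sketch of that pipeline (not Pymetrics' actual code), the example below trains a simple classifier on game-derived trait scores from existing employees, then uses it to estimate new applicants' likelihood of success. The trait features, labels, and model choice are all assumptions made for illustration.

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical game-derived trait scores for existing employees:
# columns are risk tolerance, contextual-clue reading, and planning.
employee_traits = [
    [0.8, 0.6, 0.7],
    [0.3, 0.9, 0.5],
    [0.4, 0.2, 0.3],
    [0.9, 0.7, 0.8],
]
# 1 = employee is considered a top performer, 0 = not.
top_performer = [1, 1, 0, 1]

# Learn which trait patterns look like success at this company.
model = LogisticRegression().fit(employee_traits, top_performer)

# Score new applicants' game results the same way.
applicant_traits = [[0.7, 0.8, 0.6], [0.2, 0.3, 0.4]]
print(model.predict_proba(applicant_traits)[:, 1])  # estimated probability of success
```

Note the design implication: because the model only ever learns from a company's current employees, whatever patterns already exist in that workforce become the definition of "success."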
What are the advantages?
On the employer side, there's the element of convenience, of course. Unilever found that the AI enabled it to hire more nonwhite employees, as well as entry-level staff from a wider range of colleges than before it added the technology to its recruiting strategy.
Employers are also hoping the process can reveal the smaller things humans miss during interviews.
According to a CNN Tech article, companies are looking toward pre-employment lie-detector tools that can flag when an interviewee is embellishing their qualifications. While honesty is typically the best policy, we have to say the facial scanning is a bit creepy.
The advantages on the employer side are clear; obviously, the streamlined method benefits recruiters and companies more than the person who needs a job.
But there's some evidence an AI-guided experience could have positives for the job seeker as well. For example, chatbots can help candidates navigate complex application instructions, and the potential to reduce bias promises greater ethnic and economic diversity in the workplace.
But, there’s always a dark side
Many AI technologies are still in their early stages. So, the main risk for employers is that they’re looking at a solution that isn’t yet proven.
And when you're considering AI options that analyze candidates' word choice, tone, and body language, you might be missing out on some great candidates who are simply fidgety or uncomfortable while recording responses.
Cornell professor Solon Barocas spoke with The Washington Post and warned job hunters to be wary of any company that uses an algorithm in place of an actual hiring manager.
The professor highlighted Amazon's failed experiment with using an algorithm to evaluate job applications, and suggested companies should err on the side of caution when applying AI to decisions that have significant effects on people's lives.
Algorithms, of course, come with their own biases, as engineers bring their own baggage to the table during development.
Case in point: last month, Amazon scrapped its AI recruiting tool after it was revealed to have some sexist inclinations.

The problem traces back to training data. U.S. tech companies are still struggling to close the gender gap among developers and other technical positions, so the historical resumes the tool learned from came mostly from men.

Trained primarily on those male-dominated resumes, Amazon's AI engine had started penalizing women's resumes. In fact, even the word “women's” prompted the AI to downgrade resumes from applicants who attended women's colleges or listed accomplishments like “captain of the women's basketball team.”
That issue might be a relatively easy fix for the company, but there's also the issue of masculine language in job applications. Meaning, there's a chance that platforms like HireVue hold some biases themselves, in spite of their efforts to eliminate the ugly side of recruiting.
For example, the technology was found to place higher value on “aggressive” words like “captured” or “executed,” which are more common on male engineers' resumes.
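To see how that kind of bias arises mechanically, consider a toy bag-of-words model trained on historical hiring outcomes: words that co-occur with past hires pick up positive weight, and words that co-occur with rejections pick up negative weight. The tiny dataset below is fabricated for illustration and is not Amazon's actual system.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Fabricated resume snippets and historical outcomes that skew male.
resumes = [
    "executed product launch and captured new market share",
    "executed migration plan and captured key accounts",
    "captain of the women's basketball team led fundraising",
    "president of the women's coding club organized hackathons",
]
hired = [1, 1, 0, 0]  # past decisions the model learns to imitate

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weights mirror the skew in the training data:
# "executed" and "captured" score positive, "women" scores negative.
for term in ["executed", "captured", "women"]:
    idx = vectorizer.vocabulary_[term]
    print(term, round(model.coef_[0, idx], 3))
```

Scrubbing obvious terms like “women's” helps, but as the Amazon case suggests, a model trained on skewed outcomes can still latch onto subtler proxies like those “aggressive” verbs.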
The tool's inherent gender bias wasn't the only problem. Another major issue came in the form of data problems, with the AI firing off recommendations at random. Researchers found the tool was matching underqualified candidates to positions far outside their pay grade.
Of course, that's just one example. But given that algorithms scan for job qualities based on existing top performers, it's hard not to wonder whether they're scanning for the same old masculine power words and mannerisms rather than aiming for diversity in a more subtle sense.
The Washington Post article points out that an algorithm trained to match candidates to top performers based on performance reviews is already set up to reinforce biases.

It gives the example of a female leader who receives low marks on a performance review for being too assertive, even though an aggressive seller might be seen as an asset in a different context.
The point is, artificial intelligence doesn't replace the recruiter's job. There will always be a need for the human hand in the decision-making process.