How RecruitBot Mitigates Implicit Bias In Your Hiring Process
Last month, a story broke detailing Amazon’s development of an in-house machine-learning tool to review and rank all of its inbound resumes. As you might imagine, we at RecruitBot were more than a little interested in this development. It wasn’t just that one of the biggest companies in the world had an idea reasonably similar to our product; it was that Amazon scrapped the idea because they couldn’t figure out a way to get their algorithm to stop being biased against women.
There was a flurry of thinkpieces in the wake of this news. Some were fair, some were not. But the most important and thought-provoking articles dealt with the larger implications of deploying AI as we move further along the path toward an automated future. Are we just uploading our biases into the machines we rely on? Or was this simply a human-designed tool that was poorly calibrated?
Thinking about these issues has always been, and will always be, at the top of our minds. In fact, we think the potential for AI to inhibit diversity is so important that we made it the subject of our very first blog post. So it’s probably a good time to revisit diversity and AI by examining what Amazon did wrong, what RecruitBot does right, and where we go from here.
Amazon’s implementation of its AI product might end up being a case study one day in how to do AI wrong. It taught itself to penalize resumes with the word “women” in them. It was biased against candidates from women’s colleges. And even after engineers tweaked the algorithms to stop the bias against diverse candidates, the product gave results that were random and useless. The project was scrapped.
It would be irresponsible to speculate on the causes of Amazon’s failure, but we can talk freely about the ways we believe RecruitBot avoids this kind of bias.
Our position at RecruitBot is that AI and machine learning are not a panacea for social ills. As we said in our first blog post, no software will be able to solve systemic discrimination (such as the bias against women in the STEM talent pool). And of course humans, with all of their unconscious biases, will always have the final say in who gets interviewed, despite RecruitBot’s impartial recommendations. But that’s not really the point–to provide value, all RecruitBot has to do is be less biased than a human being when it comes to recommending candidates from diverse backgrounds.
We believe–and our internal audits confirm–that our models are much better at being inclusive than even the most unbiased recruiting team. There are a few reasons for this. Partly it’s because we don’t take into account things like names or pictures–one avenue through which implicit bias always seems to bleed in.
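To make the idea of ignoring names and pictures concrete, here is a minimal sketch of that kind of "blind" pre-processing step. The field names and the record structure are hypothetical illustrations, not RecruitBot's actual schema:

```python
# Hypothetical sketch: strip identity fields from a resume record
# before any evaluation logic sees it, so names and photos can't
# influence the result. Field names here are illustrative only.

IDENTITY_FIELDS = {"name", "photo_url", "email"}

def redact_identity(resume: dict) -> dict:
    """Return a copy of the resume with identity fields removed."""
    return {k: v for k, v in resume.items() if k not in IDENTITY_FIELDS}

resume = {
    "name": "Jane Doe",
    "photo_url": "https://example.com/jane.jpg",
    "email": "jane@example.com",
    "work_history": ["Software Engineer, 4 years"],
    "education": ["B.S. Computer Science"],
}

blind = redact_identity(resume)
# blind now contains only work_history and education
```

Because the model is never shown the redacted fields, it cannot condition on them, which is the general principle behind blind screening.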
But the second effect is more subtle. Companies–especially tech companies–that want to end up hiring more diversity candidates usually have to increase their candidate pool to find enough suitable candidates, largely because the systemic issues mentioned above mean that there are fewer diversity candidates than there ought to be. But that leads to a catch-22 that will always undo these good-faith efforts at increasing diversity: even the most well-meaning recruiters will despair at evaluating those thousands of new resumes, and fall back on shorthands like “I only have time to look at candidates from this top company or this top school.” It’s an understandable impulse that nonetheless reinforces bias against diversity candidates.
But RecruitBot is software, so it won’t ever get lazy or tired. In fact, it doesn’t even have any shorthands to fall back on. It breaks down each resume in your system into thousands of individual signals, not just work history or educational background. In other words, RecruitBot enables your diversity efforts simply by giving every resume an impartial evaluation.
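The idea of decomposing a resume into many individual signals, rather than one or two shorthands like school or employer, can be sketched in a few lines. The signals below are made-up examples for illustration, not RecruitBot's actual features:

```python
# Illustrative sketch: turn a resume into many small signals so that
# evaluation doesn't hinge on a single shorthand like "top school".
# The signal names are hypothetical examples.

def extract_signals(resume: dict) -> dict:
    """Decompose a resume record into a dictionary of simple signals."""
    text = " ".join(resume.get("work_history", []) + resume.get("skills", []))
    words = text.lower().split()
    return {
        "num_roles": len(resume.get("work_history", [])),
        "num_skills": len(resume.get("skills", [])),
        "mentions_leadership": any(w in words for w in ("lead", "led", "managed")),
        "mentions_shipping": "shipped" in words,
    }

resume = {
    "work_history": ["Led backend team", "Shipped payments platform"],
    "skills": ["Python", "SQL"],
}
signals = extract_signals(resume)
# e.g. {"num_roles": 2, "num_skills": 2,
#       "mentions_leadership": True, "mentions_shipping": True}
```

A real system would extract far more signals than this, but the design point is the same: many small, content-based features leave no single biased shortcut for the evaluation to lean on.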
Buried at the end of that original Reuters report on Amazon’s failure is an interesting tidbit: even though Amazon abandoned its initial project, “a new [Amazon] team in Edinburgh has been formed to give employment screening another try, this time with a focus on diversity.” In other words, Amazon, like RecruitBot, recognizes that machine learning and AI are very powerful tools. It’s just a matter of trusting the humans creating those tools to make sure they’re being wielded on the side of social good.