AI Recruiting: Not Ready for Prime Time, or Just Inscrutable to Puny Human Brains?

Artificial Intelligence is great, but can it help you pick your next star candidate?

Recently, Reuters reported that Amazon scrapped a secret AI-based talent recruitment tool due to apparent gender bias: trained on the company’s past hiring decisions to identify the best candidates, it ended up rating men higher than women. Based on those past hiring practices, the team created 500 computer models, each focused on specific job functions and locations, and taught each one to recognize some 50,000 terms that showed up on previous candidates’ resumes. The technology favored candidates who described themselves using verbs more commonly found on male engineers’ resumes, such as “executed” and “captured.”

The system taught itself that male candidates were preferable.

When the bias was corrected, the tool started spewing out what the team felt were essentially random candidates, and it was eventually shelved, with Amazon promising that artificial intelligence would not be used to hire candidates.

What went wrong? If anything could assist beleaguered recruiters with the selection process fairly and impartially, surely it would be AI, right? Computers, not being human, have no bias, do they? They can be objective, right?

Seasoned IT folks will recognize the term Garbage In, Garbage Out. The Amazon team trained the AI on past hires and the experience of the recruiting team, which had the effect of imprinting the humans’ unconscious bias right onto the AI. Remember when Richard Daystrom imprinted his engrams on the M-5 in “The Ultimate Computer,” and how well that turned out? The bias had been there all along, of course; no one saw it until it was extracted and placed in a supposedly objective computer-based decision maker.
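To make the Garbage In, Garbage Out point concrete, here’s a toy sketch in Python with scikit-learn. Everything here is synthetic and hypothetical (this is not Amazon’s actual system); it just shows how a scorer trained on biased historical decisions soaks up that bias as term weights:

```python
# A toy resume scorer trained on synthetic historical hiring labels.
# The labels encode the humans' bias; the model dutifully learns it.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical past resumes and whether the human process hired them.
# Note that "executed" and "captured" happen to co-occur with hires.
resumes = [
    "executed platform migration captured key requirements",
    "executed product rollout captured metrics shipped features",
    "collaborated across teams women's chess club captain",
    "collaborated on tooling mentored junior engineers",
]
hired = [1, 1, 0, 0]  # labels inherit the historical bias

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weights now reward the gendered proxy terms, not merit.
for term, weight in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
    print(f"{term:15s} {weight:+.3f}")
```

Rebalance the training data and the weights move with it; the model has no notion of merit beyond whatever signal, biased or not, is in its training set.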

Computers may be objective, but their programmers aren’t. Unconscious bias seeps in, and unless we someday have computers coding themselves a few levels deep, we may never be able to eliminate bias from the results.

What’s most interesting to me is what’s not mentioned in the article: since this was code, could they not have “lifted the hood” to see what was causing the bias, and then tweaked it to suit (in the same way Captain Janeway tweaked her virtual boyfriend in that Voyager episode)? This is another reason why we need transparency from our AI: not only do we need it to make better decisions, we need it to be able to explain why it made them.
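And for what it’s worth, “lifting the hood” really is possible on a simple model. Here’s a hypothetical sketch of auditing a linear scorer’s learned term weights (the terms and numbers are invented for illustration); for a deep network you’d reach for attribution tools like SHAP or LIME instead:

```python
import numpy as np

# Invented weights from a hypothetical linear resume scorer,
# purely illustrative.
terms = np.array(["executed", "captured", "mentored", "collaborated", "women"])
weights = np.array([+1.30, +0.90, -0.40, -0.20, -1.10])

# Rank the terms by learned weight to surface what the model rewards.
order = np.argsort(weights)

print("Terms pushing candidates DOWN:")
for i in order[:2]:
    print(f"  {terms[i]:15s} {weights[i]:+.2f}")

print("Terms pushing candidates UP:")
for i in order[::-1][:2]:
    print(f"  {terms[i]:15s} {weights[i]:+.2f}")
```

A big negative weight on a word like “women” jumping out of an audit like this is exactly the kind of explanation we should be demanding from these systems.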

I’m also curious about the reports of seemingly “random” candidate output once the AI was tweaked. Maybe it only seemed random to the recruiters and the programmers but made total sense to the AI. If it were me, I’d have interviewed some of those selected candidates to see whether the system had unearthed hidden gems. Maybe it really did find great people, even though its choices seemed random to us.

Who knows, maybe the AI knew something about the quality of the candidates that we mere humans couldn’t see. As we use AI more and more throughout our lives, there may come a point where we realize that it can make better decisions than us puny humans, with our limited intelligence.

We aren’t there yet, but we will be someday. There will come a moment when we realize that our machines can make better decisions than we can, and we should hope that if and when that happens, we’re willing to let them make those decisions for us.
