The concerning implications of AI recruitment agents

Many recruitment software tools are embracing AI interview agents, and I think this could have concerning implications.

AI interview agents are interactive voice chatbots that jump on a call with a candidate and autonomously interview them. They feature in many of the AI-native recruitment tech companies that have emerged in the last couple of years, but also in some incumbents like Workday, which acquired Paradox to this end last year.

These are being tested out in the real world, too: I personally know of a PhD-level candidate applying to a multinational in the life sciences who experienced this, and have read many other credible anecdotes on the internet (most of them overwhelmingly negative).

I can foresee the argument that this will be good for everyone involved. Recruiters (in-house or agencies) will be more efficient, pipelines will flow more smoothly, time-to-hire / time-to-placement will decrease. The optimistic argument would go on that this ultimately would also favor job seekers with better information loops and quicker turnarounds.

But the happy, unconstrained vision very rarely comes to fruition, and I think it is more likely that we will see severe limitations and trade-offs. 

On the limitations: the technology may prove useful in a restricted number of contexts, such as high-volume recruitment, and only at the top of the funnel. I am highly skeptical that interview agents will any time soon have the finer reactive capabilities needed to conduct interviews further down the funnel - or even at the top of the funnel in high-stakes contexts like Executive Search.

On the trade-offs: companies using AI interview agents risk further alienating candidates who were already very unhappy with the hiring process in the best of times (that is, in a job seeker’s market). Weird reactive behaviors may emerge, such as candidates sending their own agent to meet the recruiter’s agent, a scenario in which complexity and chaos (and hilarity) would ensue, rather than any kind of stable inter-agent equilibrium.

Ethically, I think there are a few potentially grave pitfalls. Once configured, is the agent accurately and uniformly capturing the requisite information, or is it randomly leaving candidates in the lurch? Is it perhaps actively discriminating against certain protected characteristics in-interview? Is it feeding back truthful and honest information to the candidate? Similar questions extend beyond the interview itself into the post-processing of the data.

Many would also fold the loss of human touch into the ethical dimension, perhaps rightly so, but that issue may be hard to judge outside of a strictly personal and utilitarian lens. It’s hard to universalize one’s preferences in technology, and it’s becoming an order of magnitude harder with AI, where everyone’s subjective experience varies wildly. 

But personally, as things stand, I don’t think I would appreciate being interviewed by an AI bot at all.

Would you?