Companies are increasingly using chatbots to interview and screen job applicants, particularly for blue-collar roles. But as with earlier algorithmic hiring tools, experts and job seekers alike worry about the biases these systems may carry.
Forbes staff writer Rashi Shrivastava reports on Amanda Claypool's job search in Asheville, North Carolina, in early June, a search repeatedly derailed by malfunctioning chatbot recruiters.
Consider a few examples. McDonald's chatbot recruiter, "Olivia," approved Claypool for an in-person interview but then failed to schedule it, citing technical glitches. A Wendy's chatbot booked her an in-person interview, but for a position she wasn't qualified to fill. A Hardee's chatbot sent her to interview with a store manager who was on leave, hardly a seamless recruiting experience.
"I arrived at Hardee's, and they were visibly perplexed. The restaurant staff wasn't prepared to help me, and it all seemed needlessly convoluted," Claypool recalled. McDonald's and Hardee's declined to respond to questions, while a Wendy's spokesperson touted the bot's "enhanced hiring efficiencies," asserting that "innovation is an integral part of our DNA."
HR chatbots like those Claypool encountered are spreading across industries including healthcare, retail, and restaurants. They serve a dual purpose: filtering out unqualified applicants and scheduling interviews with candidates who fit a given role. McDonald's, Wendy's, CVS Health, and Lowe's all use Olivia, a chatbot built by Paradox, an Arizona-based AI startup valued at $1.5 billion. Others, like L'Oréal, rely on Mya, an AI chatbot from a San Francisco startup of the same name. (Paradox declined to comment on Claypool's experience.)
Most hiring chatbots, while efficient, lack the conversational sophistication of tools like ChatGPT. They are mainly used to screen candidates for high-volume positions such as cashiers, warehouse workers, and customer service associates, and their questions tend to be rudimentary: "Can you operate a forklift?" or "Can you work weekends?" But as Claypool's experience shows, these bots can glitch with no human recourse available, and the rigid answers they demand can automatically reject qualified candidates who don't respond in the exact patterns the software expects.
This poses a particular risk for people with disabilities, applicants less proficient in English, and older job seekers. Aaron Konopasky, senior attorney advisor at the U.S. Equal Employment Opportunity Commission (EEOC), worries that chatbots like Olivia and Mya may not offer people with disabilities or medical conditions alternative options or opportunities to request reasonable accommodations. He stresses the value of a human conversation about such accommodations, one that a rigid automated chatbot can inadvertently foreclose.
Discrimination remains a looming issue. Biases ingrained in the data used to train AI models can perpetuate discrimination in the tools they power. Pauline Kim, a professor of employment and labor law at Washington University, warns that bias can creep into the selection process through factors like response time, language proficiency, and the complexity of a candidate's sentences. Such bias is hard to detect when companies stay opaque about why a candidate was rejected.
Recent legal changes aim to address these concerns. In early July, a New York City law took effect requiring companies that use automated hiring tools, such as resume scanners and chatbot interviews, to audit them for gender and racial bias. Similarly, a 2020 Illinois law requires companies that use AI to assess video interviews to notify applicants and obtain their consent.
Still, for businesses trying to make hiring faster and cheaper, AI screening agents look like an easy win: human resources is typically a cost center rather than a revenue generator. Matthew Scherer, who works on workers' rights and technology at the Center for Democracy and Technology, sees chatbots as a practical way to ease recruiters' workloads.
One example is Sense HQ, whose clients include Sears, Dell, and Sony. It offers AI chatbots that work over text message to help recruiters sift through large volumes of applications, and says it has handled around 10 million job applicants, giving companies a bigger pool of potential workers to draw from. But co-founder Alex Rosen cautions against letting AI make hiring decisions outright: AI should help humans decide, not replace them.
RecruitBot, by contrast, uses machine learning to sift through a database of 600 million job applicants, drawn from publicly available data and job marketplaces, to help companies find candidates who resemble their existing workforce. CEO and founder Jeremy Schiff likens the approach to the way Netflix recommends movies based on a viewer's preferences. Here too, though, bias looms: over-representation of certain demographics in the training data can lead to unintended discrimination.
Urmila Janardan, a policy analyst at Upturn, a nonprofit that researches how technology shapes opportunity, notes that some companies have also turned to personality tests as a screening tool. These assessments often stray far from job-related questions, and candidates can be rejected over traits such as gratitude or personality that are tangential to the core requirements of the role.
For Rick Gned, a part-time painter and writer, a personality quiz was part of a chatbot interview for an hourly-wage shelf-stacking job at the Australian supermarket Woolworths. The chatbot, built by AI recruitment firm Sapia AI (formerly PredictiveHire), required Gned to answer five questions in 50 to 150 words each, then analyzed his responses for traits and skills matching the recruiter's preferences. Concluding that Gned "adapts effectively to change" and tends to focus on the big picture rather than minutiae, it advanced him to the next round. Sapia AI does not impose strict time limits on responses, but it does evaluate sentence structure, readability, and the complexity of vocabulary, criteria that CEO and co-founder Barb Hyman confirmed by email.
Gned found the whole process dehumanizing. While it did not significantly affect him personally, he worried for minority groups who make up a substantial share of the lower-income labor market.
One anonymous job applicant did find a silver lining in talking to a chatbot: amid the slog of submitting hundreds of applications, the bot at least confirmed receipt, a small morale boost. Still, he acknowledged that enduring such an exchange for every single application would be exhausting.