**Chatbots in Hiring: Do They Introduce Bias and Discrimination?**
*Concerns around Algorithmic Hiring Tools in Blue-Collar Jobs*
In recent years, companies have increasingly turned to chatbots to interview and screen job applicants, particularly for blue-collar positions. The use of these algorithmic hiring tools, however, has raised concerns about potential bias and discrimination. This article examines job applicants' experiences with chatbot recruiters and the issues surrounding their use, including the ramifications for individuals with disabilities, non-English speakers, and older job seekers. It also addresses concerns about discrimination and bias in the training data behind these chatbots, and surveys recent legislation aimed at monitoring and regulating automation in hiring tools.
**Glitchy Chatbot Recruiters: An Unexpected Hurdle for Job Seekers**
Amanda Claypool, a job seeker in Asheville, North Carolina, encountered numerous issues with chatbot recruiters when applying for fast-food jobs. McDonald's chatbot recruiter, named "Olivia," cleared her for an in-person interview but failed to schedule it due to technical difficulties. A Wendy's bot scheduled her for an interview, but for a job she couldn't do. A Hardee's chatbot then directed her to interview with a store manager who was on leave, causing confusion and inconvenience. Claypool expressed frustration with the complexity of the process and ultimately found a job elsewhere. McDonald's and Hardee's did not respond to requests for comment, while Wendy's emphasized the "hiring efficiencies" provided by its chatbot.
**The Rising Use of HR Chatbots in Various Industries**
The healthcare, retail, and restaurant industries are increasingly adopting HR chatbots to filter out unqualified applicants and schedule interviews with potential candidates. For example, McDonald's, Wendy's, CVS Health, and Lowe's use Olivia, a chatbot developed by Paradox, an Arizona-based AI startup, while L'Oreal relies on Mya, an AI chatbot developed by a San Francisco startup of the same name. These chatbots are primarily used to screen high-volume job applications for roles such as cashiers, warehouse associates, and customer service assistants.
**Concerns With Chatbots: Bugs and Bias**
Despite their increasing use, chatbots have limitations and potential issues. Many of these chatbots are rudimentary and ask straightforward questions, such as “Do you know how to use a forklift?” or “Are you able to work weekends?” However, they can be buggy and lack human support when technical issues arise. Furthermore, these chatbots rely on clear-cut answers, which may disadvantage qualified candidates who do not answer questions precisely as the language model expects. This can be problematic for individuals with disabilities, non-English speakers, and older job applicants. Aaron Konopasky, a senior attorney advisor at the U.S. Equal Employment Opportunity Commission (EEOC), believes chatbots may not provide alternative options for people with disabilities or medical conditions. Additionally, concerns about bias arise when chatbots evaluate factors such as response time, grammar, and sentence complexity. Without transparency, it becomes difficult to identify if bias played a role in a candidate’s rejection.
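The failure mode described above, where a qualified candidate is rejected for not phrasing an answer exactly as expected, can be illustrated with a minimal sketch. The question text, accepted answers, and `screen` function below are illustrative assumptions, not any vendor's actual logic:

```python
# Hypothetical sketch of rigid answer matching in a screening chatbot.
# All questions, accepted answers, and names here are illustrative assumptions.

SCREENING_RULES = {
    "Do you know how to use a forklift?": {"yes"},
    "Are you able to work weekends?": {"yes"},
}

def screen(answers: dict) -> bool:
    """Pass the candidate only if every answer exactly matches an accepted response."""
    for question, accepted in SCREENING_RULES.items():
        reply = answers.get(question, "").strip().lower()
        if reply not in accepted:
            return False  # any phrasing outside the accepted set is rejected
    return True

# A qualified candidate who elaborates instead of answering a bare "yes" fails:
print(screen({"Do you know how to use a forklift?": "Yes, certified since 2019",
              "Are you able to work weekends?": "yes"}))  # False
```

Under this kind of exact-match logic, a certified forklift operator who answers in a full sentence is screened out, which is precisely the risk for non-English speakers and others who may not respond in the expected format.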
**Legislation and the Need for Transparency**
To address the potential bias and discrimination resulting from automation in hiring tools, government authorities have introduced legislation. For instance, New York City passed a law requiring employers to audit resume scanners and chatbot interviews for gender and racial bias. Similarly, Illinois requires employers using AI to analyze video interviews to notify applicants and obtain consent. These legislative measures aim to increase transparency and accountability in the use of automation in hiring practices.
**The Appeal of AI Screening Agents Despite Concerns**
Despite these concerns, many companies are drawn to AI screening agents to reduce recruiting costs. Human resources departments are often viewed as cost centers rather than revenue generators, so chatbots offer an appealing way to lighten recruiters' workloads. For example, Sense HQ provides companies such as Sears, Dell, and Sony with AI chatbots that use text messaging to help recruiters manage large volumes of applications. While this approach can expand the range of viable candidates, AI should not make hiring decisions on its own; the danger arises when it begins to act autonomously without human involvement.
**RecruitBot: Using AI to Match Candidates to Current Employees**
RecruitBot is another AI-based recruiting tool that uses machine learning to sift through a database of 600 million job applicants. The goal is to help companies find candidates who resemble their current employees. However, this approach raises concerns about bias and the potential to perpetuate the hiring of similar candidates. This pitfall became evident with Amazon's machine learning-based resume-screening system, which was scrapped after it was found to be biased against women.
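The bias concern with "match candidates to current employees" can be seen in a minimal sketch. This is not RecruitBot's actual method; it assumes candidates and employees are already reduced to simple numeric feature vectors and ranks candidates by average cosine similarity to the existing workforce:

```python
# Illustrative sketch (an assumption, not RecruitBot's real algorithm):
# rank candidates by how closely their feature vectors match current employees.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def rank_candidates(employees, candidates):
    """Score each candidate by average similarity to the current workforce."""
    scored = []
    for name, feats in candidates.items():
        avg = sum(cosine_similarity(feats, e) for e in employees) / len(employees)
        scored.append((name, avg))
    return sorted(scored, key=lambda t: t[1], reverse=True)

# If the current workforce is homogeneous, candidates who resemble it dominate
# the ranking, reproducing the existing makeup rather than diversifying it.
employees = [[1.0, 0.0], [1.0, 0.1]]
candidates = {"looks_like_staff": [1.0, 0.0], "different_profile": [0.0, 1.0]}
print(rank_candidates(employees, candidates))
```

The point of the sketch is the feedback loop: whatever demographic or stylistic signal leaks into the feature vectors of current employees is rewarded in candidates, which is how similarity-based matching can entrench past hiring patterns.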
**Personality Tests and Their Questionable Relevance**
Some companies have also folded personality tests into their chatbot interviews, even when the questions bear little relation to the job itself. This approach risks discrimination based on personality traits unrelated to job performance. Rick Gned, for example, was given a personality quiz as part of a chatbot interview for a shelf-stacking job at Woolworths, an Australian supermarket. Gned found the process dehumanizing and expressed concern for minorities, who make up a significant portion of the lower-income labor market.
**Providing Assurance for Job Applicants Through Chatbot Interaction**
For some job applicants, interacting with chatbots provides at least one positive outcome – the assurance that their application has been received. Job seekers often face the issue of not hearing back from employers after submitting numerous applications. Chatbot interactions can address this concern by providing immediate confirmation of application receipt.
In conclusion, the rise of chatbots in hiring practices has raised concerns about bias, discrimination, and lack of transparency. These algorithmic tools have the potential to disadvantage certain groups of candidates and perpetuate existing biases present in the training data. Legislation and efforts towards transparency aim to address these concerns. However, it remains crucial for companies to carefully consider the limitations and potential biases of chatbot recruiters in their hiring processes to ensure fair and inclusive practices.