The Takeaway
Artificial intelligence (AI) has the potential to revolutionize the hiring process, making it more efficient and effective. However, integrating AI into hiring carries legal risks that employers must carefully review to ensure compliance with employment laws.
Introduction
In recent years, AI has become a sought-after tool in the hiring process. AI tools (ranging from resume screening software to advanced algorithms that assess candidates' fit for a role) promise increased efficiency, reduced bias, and cost savings. However, using AI in the hiring process also raises critical legal and ethical concerns. Here’s a look at how AI is transforming recruitment and some potential legal pitfalls employers should be aware of.
AI in the Hiring Process
Before the arrival of AI, recruiting and hiring new employees required humans to manually sift through and sort hundreds of resumes for each job opening. This essential yet tedious task began its transformation in the late 20th century with the introduction of AI to read resumes and extract keywords, employment history, and educational qualifications. Further advancements in AI have led to its use in pre-employment testing, reference and social media checks, and video interviewing software. Today, AI is making hiring decisions such as rejecting candidates or advancing them to the interview stage.
AI advocates argue that AI tools reduce both time and cost while minimizing human implicit biases during the initial recruitment stage. However, despite these potential advantages, AI is not immune to algorithmic bias in hiring decisions, which can lead to claims of employment discrimination.
Possible Legal Challenges to Using AI in Hiring
Hiring algorithms can exhibit bias because of inherent limitations in the data they draw from. Data augmentation, a technique that diversifies the training data, can help address these biases. Because hiring algorithms continuously identify patterns as new data is introduced, ongoing data augmentation is essential to keep the data diverse. If that need goes unnoticed, or if augmentation fails to effectively mitigate bias, job applicants could bring legal claims of discrimination based on race, age, gender, or disability.
A Case of Alleged AI Discrimination
In 2023, job applicant Derek Mobley brought an employment discrimination action against Workday, Inc., a third-party software vendor. Mobley is an African American male over the age of 40 who suffers from anxiety and depression. He alleges that the algorithmic decision-making tools used to screen applicants in the hiring process discriminated against him and similarly situated job applicants on the basis of race, age, and disability in violation of Title VII, the Age Discrimination in Employment Act (ADEA), and the Americans with Disabilities Act (ADA), as amended.
Mobley claimed he was rejected from over 100 jobs listed by various employers that use Workday’s software, despite being qualified for each position. He noted that he received rejection emails sent in the early morning, outside regular business hours. Additionally, he alleges that the tools relied on biased training data and on results from pymetric and personality tests, assessments on which applicants with mental health conditions and cognitive disorders tend to perform more poorly.
Workday brought a motion to dismiss the complaint, arguing that Mobley failed to state a claim. The District Court denied Workday’s motion to dismiss, holding that the amended complaint adequately alleged that Workday is an agent of its client-employers and thus falls within the definition of an employer under Title VII, the ADEA, and the ADA. Mobley v. Workday, Inc., No. 23-cv-00770-RFL (N.D. Cal. 2024). Additionally, the Court found Mobley plausibly alleged that Workday’s customers delegated traditional hiring functions, including rejecting applicants, to the algorithmic decision-making tools provided by Workday. The Court further stated, “Given Workday’s allegedly crucial role in deciding which applicants can get their ‘foot in the door’ for an interview, Workday’s tools are engaged in conduct that is at the heart of equal access to employment opportunities.”
This case highlights the potential for AI to unintentionally discriminate against candidates. It further indicates that companies that delegate traditional hiring functions to algorithmic decision-making tools may face potential legal exposure for even unintentional discrimination.
- Associate
Fiona Phillips is committed to delivering exceptional legal services and fostering strong client relationships. She anticipates potential issues to minimize client exposure and works toward achieving favorable outcomes in an ...