AI in Hiring: What Do New Federal Policies and Legal Challenges Mean for Employers?
In the rapidly evolving landscape of employment practices, artificial intelligence (AI) has emerged as a transformative tool, offering efficiencies and data-driven insights. However, the integration of AI in hiring processes brings complex legal challenges, particularly concerning disparate impact claims. Recent developments, including shifts in federal policy and the seminal ruling in Mobley v. Workday, Inc., underscore the need for employers to stay informed and proactive.
Federal Policy Shifts: A New Approach to Disparate Impact
Under the leadership of Attorney General Pam Bondi, the Department of Justice (DOJ) signaled a major shift in how disparate impact claims are addressed. Disparate impact theory has historically been used to challenge employment practices that, while neutral on their surface, disproportionately affect individuals based on race, gender or other protected characteristics. The DOJ's recent directives aim to narrow the use of this theory. This policy shift aligns with broader executive orders that seek to reduce race- and sex-based preferences in compliance with federal civil rights laws.
The DOJ maintains that statistical disparities alone do not equate to unlawful discrimination. It argues that disparate impact theory can unfairly lower the threshold for proving discrimination, potentially leading to decisions based on diversity quotas rather than qualifications. As a result, the risk of federal enforcement for disparate impact claims, particularly those involving AI, may decrease. However, state-level enforcement and private litigation remain viable avenues for addressing potential biases in AI-driven employment decisions.
Mobley v. Workday: A Case Study in AI and Liability
The case of Mobley v. Workday, Inc. serves as a pivotal example of the legal complexities surrounding AI use in hiring. A job applicant alleged that Workday's AI-powered screening tools discriminated against him based on race, age and disability, in violation of Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act, and the Americans with Disabilities Act. He claimed that despite his qualifications, he was repeatedly rejected for positions at companies using Workday's tools, often receiving automated rejections outside of business hours.
The court's decision to allow the claims to proceed under an "agent" theory of liability marks a significant development. This theory posits that AI vendors, when acting as agents of employers, could be directly liable for discriminatory outcomes if their tools significantly influence hiring decisions. While the court dismissed claims that Workday acted as an "employment agency," the acknowledgment of its role as an agent highlights the nuanced legal questions raised by the use of AI technologies in hiring.
The case has garnered significant attention due to its potential to set precedent for AI vendor liability, with the EEOC (during the previous presidential administration) filing an amicus brief supporting the plaintiff's theory. The court's decision underscores the importance of understanding how AI tools are implemented and their potential impact on employment practices.
Navigating Legal Risks in AI-Driven Hiring
For employers using AI in hiring, proactive steps are essential to ensure compliance with anti-discrimination laws:
- Conduct regular audits: Evaluate AI systems for potential biases, ensuring that the data used does not perpetuate historical inequalities. Regular audits can help identify and mitigate risks before they result in legal challenges.
- Stay informed: Monitor legal and regulatory developments related to AI and employment discrimination. Engaging with legal experts can help organizations navigate changes and understand their implications.
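One common starting point for the audits described above is the "four-fifths" rule of thumb from the EEOC's Uniform Guidelines on Employee Selection Procedures: a selection rate for any group that is less than 80% of the rate for the most-selected group may indicate adverse impact. The sketch below is illustrative only, not legal advice or a safe harbor; the group names and applicant counts are hypothetical, and real audits should involve counsel and appropriate statistical testing.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who passed the screen."""
    return selected / applicants

def adverse_impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate.

    `groups` maps a group label to (selected, total applicants).
    A ratio below 0.8 is a conventional flag for further review.
    """
    rates = {g: selection_rate(s, n) for g, (s, n) in groups.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical screening outcomes: (selected, total applicants) per group.
outcomes = {"Group A": (48, 80), "Group B": (24, 60)}

for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

In this hypothetical, Group A's selection rate is 60% and Group B's is 40%, giving Group B an impact ratio of about 0.67 and flagging it for review. A flag of this kind is a signal to investigate the tool and its training data, not a legal conclusion of discrimination.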
As AI continues to reshape employment practices, understanding its legal implications is crucial for both employers and vendors. The recent federal policy shifts and the Mobley v. Workday case highlight the need for vigilance in ensuring AI tools are used responsibly and in compliance with anti-discrimination laws. By implementing robust audits and maintaining transparency, employers can leverage AI's benefits while safeguarding against discrimination claims.
Contact Chris Bach or any member of Phelps’ labor and employment or artificial intelligence (AI) teams if you have questions or need compliance advice and guidance.