
Workday AI Hiring Lawsuit Could Reshape Employer Liability
Two-Sentence Summary
A federal court has allowed discrimination claims against Workday's AI screening tools to proceed on an unprecedented "agent" liability theory, with conditional certification covering what could be hundreds of millions of job applicants. The case, Mobley v. Workday, Inc., is the first major test of whether AI vendors — not just employers — can be held directly liable for algorithmic hiring bias under federal anti-discrimination law.
A federal court has ruled that discrimination claims against Workday's AI hiring tools can proceed — not just against the employers who used them, but against Workday itself. The case is Mobley v. Workday, Inc., and while it names only Workday as a defendant, the liability theory it endorses reaches any employer that uses an AI screening tool. If the court's theory holds, your vendor contract does not protect you.
Here's what you need to understand about this case before your next hiring cycle.
Case Overview
Derek Mobley filed a class action complaint on February 21, 2023, in the U.S. District Court for the Northern District of California against Workday, Inc. — Case No. 3:23-cv-00770-RFL, assigned to Judge Rita F. Lin.
Mobley alleges that Workday's AI-powered applicant recommendation system — used by employers to score, sort, rank, and screen job applicants — unlawfully discriminates on the basis of race (African American), age (over 40), and disability. The claims arise under Title VII of the Civil Rights Act, the Age Discrimination in Employment Act (ADEA), the Americans with Disabilities Act (ADA), and California's anti-discrimination laws. Mobley applied to over 100 positions through employers using Workday's platform and was rejected without interviews each time. He alleges the AI tool's design reflected employer biases and relied on biased training data that produced systemic exclusions across protected categories.
What makes this case unusual is that Workday was not Mobley's employer or prospective employer. Workday is a third-party HR technology vendor whose software is used by thousands of other companies to make hiring decisions. The case puts the vendor — not just the employers — in the defendant seat.
Key Rulings
The case has cleared two significant procedural hurdles.
On July 12, 2024, the court denied Workday's second motion to dismiss. The court rejected the argument that Workday qualifies as an "employment agency" under federal law — but it allowed Mobley's disparate impact discrimination claims to proceed on an "agent" theory of liability. Under this theory, Workday may be held directly liable as an agent of the employers who use its software to make hiring decisions. The court dismissed Mobley's intentional discrimination and aiding-and-abetting claims under California law but preserved the core federal disparate impact claims (GovInfo court filing, May 16, 2025 order referencing prior rulings).
On May 16, 2025, the court granted conditional certification of the ADEA collective action claims. The court found that Mobley sufficiently alleged a unified policy — Workday's AI applicant screening system — affecting all putative class members. Workday argued that different employer-clients' varied use of the tools and differing applicant qualifications made collective treatment inappropriate, but the court ruled those differences were immaterial for certification purposes (GovInfo court order).
An October 2, 2025, order directed the parties to prepare a dissemination plan for notifying potential opt-in plaintiffs. As of March 2026, the case remains active, with discovery ongoing and the last known filing dated March 27, 2026.
The 1.1 Billion Applications Figure
During the conditional certification proceedings, Workday disclosed in court filings that approximately 1.1 billion applications were rejected using its software tools during the relevant time period (from September 24, 2020, to the present). This means the potential ADEA collective could number in the "hundreds of millions" of job seekers — making it potentially one of the largest collectives ever conditionally certified in an employment discrimination case (GovInfo court order, May 16, 2025).
The Agent Liability Theory
The central legal innovation of this case is the "agent" theory of liability. Under Title VII, the ADEA, and the ADA, the definition of "employer" includes "any agent" of the employer. Mobley argues that when employers delegate hiring decisions to Workday's AI screening system, Workday acts as an agent of those employers and can therefore be held directly liable for discriminatory outcomes.
The court agreed this theory was at least plausible enough to survive dismissal. This matters because, until this ruling, it was generally assumed that AI vendors occupied a legal gray zone — they might bear contractual responsibility to their clients, but direct liability to the job applicants screened by their tools was an open question. If this theory is ultimately upheld, it would establish that AI vendors who design and operate screening tools used in hiring decisions can be sued directly by affected applicants under federal anti-discrimination law.
The EEOC's Involvement
The U.S. Equal Employment Opportunity Commission (EEOC) filed a 21-page amicus curiae brief on April 9, 2024, in support of plaintiff Mobley (full brief PDF). The brief, signed by EEOC General Counsel Karla Gilbride, advanced three distinct theories of liability:
Employment agency theory: The EEOC argued that Workday plausibly operates as an "employment agency" because it engages to a significant degree in screening and referral activities traditionally associated with employment agencies (Brief at pp. 4–10).
Indirect employer theory: The EEOC argued Workday is an "indirect employer" because it exercises significant control over applicants' access to employment opportunities. The relevant inquiry, per the EEOC, is not whether Workday controls employers' day-to-day operations but whether it can control or interfere with applicants' access to those employers (Brief at pp. 10–12).
Agent theory: The EEOC argued Workday is an "agent" of employers because employers have delegated hiring decision authority to Workday's system. The brief cited case law establishing that where an employer delegates "functions traditionally exercised by an employer" to a third party, that third party can be found to be an employer by virtue of the agency relationship. The EEOC cited its own prior guidance stating that an employer's agents "may include entities such as software vendors, if the employer has given them authority to act on the employer's behalf" (Brief at pp. 12–14).
The court ultimately allowed the agent theory to proceed while dismissing the employment agency theory. But the EEOC's endorsement of all three theories signals that the federal enforcement agency views AI vendor accountability as squarely within the reach of existing civil rights law.
What This Means for Employers Using AI Hiring Tools
If you use any AI screening tool — not just Workday's — this case affects you. The agent liability theory does not stop at Workday. It applies to any vendor whose system performs a hiring function on your behalf. If that system produces discriminatory outcomes, you may face claims as the principal, and your vendor may face them too. The ADEA conditional certification here means the plaintiff class can number in the hundreds of millions. That is not a small-claims problem.
AI adoption in HR tasks climbed to 43% in 2025, up from 26% in 2024 — the pool of employers potentially facing this exposure is growing rapidly. Workday alone reports more than 10,000 customers worldwide. The companies using AI in hiring decisions are no longer edge cases; they are the majority.
Practically, employers should be conducting or requiring bias audits of their AI hiring tools, documenting their selection and monitoring processes, and reviewing vendor contracts to understand who bears responsibility for discriminatory outcomes. Our AI Bias Audit Report Template provides the framework for conducting these audits. The EEOC AI Hiring Compliance Kit covers the full Title VII and UGESP documentation requirements. And the Vendor AI Due Diligence Kit gives you the questionnaire and contract addendum templates to hold vendors accountable.
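The core computation in most bias audits is a selection-rate comparison. Below is a minimal sketch of the four-fifths (80%) rule from the EEOC's Uniform Guidelines (29 CFR 1607.4(D)), using hypothetical applicant counts — not data from the Mobley case or any real audit:

```python
# Hypothetical applicant and selection counts by demographic group.
# Illustrative numbers only -- not drawn from any real audit or case.
counts = {
    "group_a": {"applied": 400, "selected": 80},  # 80/400 = 20% selection rate
    "group_b": {"applied": 300, "selected": 36},  # 36/300 = 12% selection rate
}

# Selection rate per group
rates = {g: c["selected"] / c["applied"] for g, c in counts.items()}

# Benchmark is the highest group selection rate
benchmark = max(rates.values())

# Four-fifths rule: an impact ratio below 0.8 is treated as evidence
# of adverse impact under the EEOC's Uniform Guidelines
for group, rate in rates.items():
    ratio = rate / benchmark
    flag = "POSSIBLE ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2%}, impact_ratio={ratio:.2f} -> {flag}")
```

NYC Local Law 144's required "impact ratio" uses the same basic comparison, with the most-selected (or highest-scoring) category as the benchmark. A ratio below 0.8 is not automatically a violation, but it is the threshold that triggers scrutiny and should prompt a closer review of the tool.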
What This Means for AI Vendors
For AI vendors in the HR technology space, this case represents a fundamental shift in potential liability. If upheld, the agent theory means that vendors who build, operate, or maintain AI hiring tools can face direct lawsuits from the applicants those tools screen — even though the vendor has no direct employment relationship with those applicants.
This changes the vendor relationship in several ways. Vendors can no longer assume that contractual indemnification clauses with employer-clients fully insulate them from liability. They may need to proactively conduct and publish disparate impact analyses. They face pressure to provide audit trails and explainability for algorithmic decisions. And they may need to redesign their business models to account for the possibility that they are, legally speaking, agents of every employer that uses their platform.
Other Active AI Employment Discrimination Cases
Mobley is not the only case in this space. The EEOC's settlement with iTutorGroup in September 2023 was the first federal AI hiring discrimination case to reach resolution. In EEOC v. iTutorGroup, Inc., Civil Action No. 1:22-cv-02565 (E.D.N.Y.), the EEOC alleged that iTutorGroup programmed its tutor application software to automatically reject female applicants aged 55 or older and male applicants aged 60 or older, rejecting more than 200 qualified applicants. The company paid $365,000 to settle the suit.
In March 2025, the ACLU filed complaints against Intuit and HireVue related to alleged AI-driven hiring discrimination. These complaints remain in earlier stages. Together with Mobley, these cases represent a growing body of litigation testing whether existing civil rights frameworks can adequately address algorithmic bias in employment.
NYC Local Law 144: The Comptroller's Audit
On December 2, 2025, the New York State Comptroller released an audit of NYC's enforcement of Local Law 144 of 2021, the first municipal law in the United States requiring bias audits of automated employment decision tools (AEDTs). The audit (Report No. 2024-N-6) covered the period from July 2023 through June 2025.
The findings were stark. The NYC Department of Consumer and Worker Protection (DCWP) received only two AEDT-related complaints during the entire audit period. DCWP surveyed the websites and bias audits of 32 companies and found just a single instance of non-compliance. However, the Comptroller's office reviewed the same 32 companies and identified at least 17 instances of potential non-compliance — a dramatically higher number that suggests DCWP's review did not use the formal procedures created by the NYC Office of Technology and Innovation (OTI).
The audit concluded that DCWP's complaint-based enforcement model is fundamentally ineffective. The complaint intake process through the 311 system was not properly routing AEDT complaints to DCWP. No additional stakeholder education or outreach had been performed since the law took effect. And DCWP did not consult with OTI's technical experts when making determinations about AEDT use, despite lacking internal technical expertise.
The EEOC's full 21-page amicus brief is publicly available on the [EEOC's website](https://www.eeoc.gov/sites/default/files/2024-04/Mobley%20v%20Workday%20NDCal%20am-brf%2004-24%20sjw.pdf). The three-theory analysis above is drawn directly from the brief text. The July 12, 2024 ruling description is sourced from the May 16, 2025 [GovInfo order](https://www.govinfo.gov/content/pkg/USCOURTS-cand-3_23-cv-00770/pdf/USCOURTS-cand-3_23-cv-00770-0.pdf) referencing prior rulings.
Agent Liability in Employment Discrimination Law
When you apply for a job through a company's website, the software screening your resume might not be built or operated by that company at all. It might be built by a third-party vendor like Workday. So when that software discriminates — rejecting applicants based on race, age, or disability — the question becomes: who is legally responsible?
Federal anti-discrimination laws like Title VII, the ADEA, and the ADA define "employer" broadly to include any "agent" of the employer. Historically, this language was understood to apply to individual managers or HR staff who carried out discriminatory decisions on behalf of the company. A hiring manager who refused to interview older applicants was an agent of the employer and could create liability for the company.
The Mobley case asks whether a software vendor can be that agent. The argument is straightforward: when an employer subscribes to Workday's platform and delegates the screening of job applicants to Workday's AI system, Workday is performing a function that the employer has authorized and controls. The employer sets the criteria. The vendor's system applies those criteria (and potentially adds its own biases from training data). The applicant is rejected. The functional relationship looks a lot like an agency relationship.
The court in Mobley did not definitively rule that Workday is an agent — it ruled that the theory was plausible enough to survive a motion to dismiss. But even this preliminary endorsement has enormous implications. It means that AI vendors cannot simply disclaim responsibility for outcomes on the grounds that they are merely selling a tool. If the tool is performing an employer function, the vendor may be performing an employer function too. The EEOC's amicus brief explicitly supported this reading of the law, giving it the weight of the federal enforcement agency's formal legal position.
References
- [1] EEOC Amicus Brief Listing — Mobley v. Workday, Inc.
- [2] EEOC Amicus Brief (Full PDF)
- [3] GovInfo — Court Order (May 16, 2025, conditional certification ruling)
- [4] EEOC — iTutorGroup Settlement Press Release
- [5] NY State Comptroller — LL144 Enforcement Audit (Dec. 2, 2025)
- [6] NYC DCWP — Automated Employment Decision Tools (LL144)
Disclaimer: This article is for informational purposes only and does not constitute legal advice, legal representation, or an attorney-client relationship. Laws and regulations change frequently. You should consult a licensed attorney to verify that the information in this article is current, complete, and applicable to your specific situation before relying on it. AI Compliance Documents is not a law firm and does not practice law.













