
The EEOC Quietly Deleted Its AI Guidance. Here's What Every Employer Using AI in Hiring Needs to Know.
Two-Sentence Summary
The EEOC has removed all of its AI-specific employment guidance pages — including the Algorithmic Fairness Initiative it launched in 2022 — leaving employers with no unified federal standard for evaluating AI in hiring. Three states (Texas, Colorado, Illinois) have filled the gap with their own laws, but they use different legal standards, and no two of them agree on what discrimination actually requires.
We went to the EEOC's AI guidance page this week during routine research. It returned a 404. We tried the Algorithmic Fairness Initiative page. Also a 404. We tried the adverse impact guidance on AI and employment selection. 404. We tried eeoc.gov/ai. 404.
Every piece of AI-specific employment guidance the EEOC published is gone. We verified this on March 21, 2026. The only thing still standing is one narrowly scoped document about AI and disability law. The rest has been deleted.
This is not a small administrative change. If your company uses AI in hiring, performance reviews, or any other employment decision — and if you built any part of your compliance program on the assumption that there was a federal framework governing this — you need to understand what just happened.
What We Found
The EEOC's dedicated AI page at eeoc.gov/artificial-intelligence now returns a 404 error. The Algorithmic Fairness Initiative page, which the agency launched in 2022 to formally study AI's impact on employment decisions, returns a 404. The technical guidance document titled "Select Issues: Assessing Adverse Impact in Software, Algorithms, and AI Used in Employment Selection" — the closest thing to a federal technical standard for evaluating AI hiring tools — returns a 404.
One page survives: "Artificial Intelligence and the ADA", which addresses how AI intersects with disability discrimination under the Americans with Disabilities Act. That page links to 2022 guidance on the use of AI in job applicant assessment under the ADA.
That is the entirety of what remains. One document. One protected class. No adverse impact framework. No guidance on race, sex, age, national origin, or any other class protected under Title VII or the ADEA.
The federal AI employment compliance floor has been removed.
What Was There Before
The EEOC's Algorithmic Fairness Initiative was launched in 2022 with explicit recognition that AI tools were reshaping how employers make decisions and that existing civil rights law applied to those tools.
The initiative produced real, usable guidance. The adverse impact document explained how employers and vendors should evaluate whether an AI hiring tool was producing discriminatory outcomes — not just in intent, but in effect. It explained how to calculate selection rates across demographic groups, what statistical thresholds signal a compliance problem, and how employers should assess their exposure when they rely on third-party AI products.
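The selection-rate math that guidance described can be sketched with the four-fifths (80%) rule of thumb from the Uniform Guidelines on Employee Selection Procedures: a group whose selection rate falls below 80% of the highest group's rate is flagged as possible adverse impact. The applicant numbers below are invented purely for illustration.

```python
def selection_rates(data):
    """Selection rate (hires / applicants) for each demographic group."""
    return {g: hired / applied for g, (applied, hired) in data.items()}

def impact_ratios(data):
    """Each group's rate divided by the highest group's rate.

    Under the four-fifths rule of thumb, a ratio below 0.8 signals
    possible adverse impact and warrants closer review.
    """
    rates = selection_rates(data)
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Illustrative numbers: (applicants, hires) per group.
data = {
    "group_a": (200, 60),   # 30% selection rate
    "group_b": (150, 30),   # 20% selection rate
}

for group, ratio in impact_ratios(data).items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Here group_b's impact ratio is 0.67, below the 0.8 threshold, so the tool's outcomes for that group would merit investigation. The four-fifths rule is a screening heuristic, not a legal conclusion; a defensible audit pairs it with statistical significance testing and documentation of the underlying data.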
This was the federal anchor for employer AI compliance programs. Many employers used it as the starting point for vendor due diligence. Many compliance policies cited it. Many legal teams organized their AI governance frameworks around its technical methodology.
That framework is now gone. It did not sunset. It did not expire. It was deleted.
The surviving ADA guidance is worth reading if AI intersects with disability-related accommodations in your hiring process — it covers that narrow but important topic. But it does not address the broader question of whether an AI tool discriminates on the basis of race, sex, or age. For that, there is now no federal guidance at all.
If your business uses AI in hiring and you want to understand what these tools mean for your existing state-law exposure, our post on why your hiring software probably counts as AI under state law is a good place to start.
Why This Matters Now
The timing could not be worse for employers trying to build a coherent compliance posture. Three state AI employment laws took effect between January and June 2026 — Texas TRAIGA (HB 149), Colorado SB 24-205 (C.R.S. § 6-1-1701 et seq.), and Illinois HB 3773 (Illinois Human Rights Act, as amended).
Each of these laws imposes obligations on employers who use AI in employment decisions. Each is real, enforceable law. And each defines the problem — and the solution — differently.
This is the central issue. Before these laws existed, an employer could look to the EEOC's adverse impact framework as a single federal standard and build a compliance program around it. Run your AI tools through an adverse impact analysis, document the results, fix the problems. That logic held across jurisdictions because federal civil rights law applied everywhere.
That logic still holds for the ADA. But for everything else, the federal anchor is gone.
The Standards Gap
The three state laws that have moved into the federal vacuum do not agree on what discrimination means in the context of AI.
Illinois treats AI hiring discrimination as a civil rights violation under the Illinois Human Rights Act (775 ILCS 5/2-102). The standard is disparate impact — it is a violation if an employer uses AI "that has the effect of" discriminating on the basis of a protected class. Intent is irrelevant. Outcome is what the statute looks at. This is closest to what the EEOC's removed guidance contemplated. For the full breakdown of what Illinois requires, see our Illinois HB3773 compliance guide.
Texas takes a fundamentally different position. Under TRAIGA Sec. 552.056 (HB 149), AI discrimination in employment requires proof of discriminatory intent. Disparate impact — the fact that an AI tool produces outcomes that disproportionately harm a protected class — is explicitly insufficient to establish a violation on its own. This is a meaningfully lower liability exposure for employers, but it is also a completely different legal theory from the one the EEOC's adverse impact guidance was built around. If you operate in Texas, your compliance strategy looks different from what it does in Illinois. See our Texas TRAIGA compliance guide for the full picture.
Colorado uses a third approach — a "reasonable care" standard for deployers of high-risk AI systems (C.R.S. § 6-1-1701 et seq.). Deployers must use reasonable care to protect consumers from algorithmic discrimination in consequential decisions, including employment. This is closer to a negligence framework: you can be liable not because you intended to discriminate, and not purely because of a discriminatory outcome, but because you failed to take reasonable precautions. The EEOC AI hiring compliance documents and Colorado SB 24-205 compliance package both address what "reasonable care" documentation looks like in practice.
Three states. Three legal theories. No federal baseline to unify them.
An employer using AI hiring tools in all three states now needs to simultaneously satisfy an effects test (Illinois), avoid evidence of intent (Texas), and demonstrate reasonable care (Colorado). These standards are not necessarily incompatible, but they require different documentation, different audit approaches, and different vendor conversations.
The EEOC's adverse impact framework would have provided at least a portable technical methodology — something that said: here is how you evaluate whether an AI tool discriminates, here is what the math looks like, here is what a defensible audit produces. That methodology is gone from the federal level. The state laws reference their own standards but do not replace the technical depth of the EEOC's resource.
What Employers Should Do
The absence of federal guidance is not a reason to stop building a compliance program. It is a reason to build one that is portable across the state standards that now fill the gap.
Document everything about every AI tool you use in employment decisions. What the tool does, what data it uses, what outputs it produces, and how those outputs influence actual decisions. This documentation is the foundation under every state standard — it is what "reasonable care" looks like in Colorado, it is what Illinois requires before a violation can be assessed, and it is what any employer would need to defend against a discrimination claim under any standard.
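One way to make that documentation concrete is a structured inventory record per tool. The sketch below is a minimal illustration — the field names and the example tool are ours, not a schema required by any of these statutes.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
from typing import Optional

@dataclass
class AIToolRecord:
    """One inventory entry per AI tool used in employment decisions.

    Fields mirror the documentation points above: what the tool does,
    what data it uses, what it outputs, and how those outputs influence
    actual decisions. Illustrative only, not a statutory schema.
    """
    tool_name: str
    vendor: str
    purpose: str                       # what the tool does
    input_data: list = field(default_factory=list)   # what data it uses
    outputs: str = ""                  # what it produces
    decision_role: str = ""            # how outputs influence decisions
    states_deployed: list = field(default_factory=list)
    last_bias_audit: Optional[date] = None

# Hypothetical example entry.
record = AIToolRecord(
    tool_name="ResumeScreener",
    vendor="ExampleVendor Inc.",
    purpose="Ranks applicants for recruiter review",
    input_data=["resume text", "application answers"],
    outputs="0-100 fit score per applicant",
    decision_role="Advisory; recruiters see scores but make final calls",
    states_deployed=["IL", "CO", "TX"],
    last_bias_audit=date(2026, 1, 15),
)
print(asdict(record))
```

However you store it — spreadsheet, database, or governance platform — the point is that each field is answerable for every tool, and the record is dated so you can show what you knew and when.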
Conduct vendor due diligence in writing. Ask every AI vendor what bias testing they perform, what demographic data they use, what their impact ratio results show, and what they will provide if a regulator asks. Get the answers in writing. Our AI bias audit template includes a vendor assessment framework built around the disclosure questions that state laws are most likely to surface.
Know which state laws apply to you. This is not a question of where your company is headquartered. Illinois's law applies if you have employees in Illinois. Texas's law applies to covered deployments in Texas. Colorado's law applies to consequential decisions affecting Colorado consumers. If you operate in multiple states, you have multiple obligations. Our post on building an AI governance framework that works across state laws walks through how to structure that.
Use the NIST AI Risk Management Framework as a portable technical anchor. The NIST AI RMF is not law — it is a voluntary framework published by the National Institute of Standards and Technology. But it is technically rigorous, it is portable across jurisdictions, and it is the closest thing to a neutral federal technical standard that still exists. Several state laws reference NIST standards as a relevant consideration. Building your AI governance program around the NIST RMF means you have a defensible methodology to point to regardless of which state's standard is being applied.
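The NIST AI RMF organizes activities under four core functions — Govern, Map, Measure, Manage. One way to operationalize it is to file your hiring-compliance tasks under those functions; the tasks below are our illustrative examples, not language from the framework itself.

```python
# Core functions are from the NIST AI RMF 1.0; the mapped tasks are
# illustrative examples of hiring-compliance work, not NIST text.
NIST_AI_RMF_PLAN = {
    "Govern":  ["Assign a named owner for each AI hiring tool",
                "Adopt a written policy on AI use in employment decisions"],
    "Map":     ["Inventory tools and the decisions they influence",
                "Identify which states' laws apply to each deployment"],
    "Measure": ["Run periodic selection-rate analyses by group",
                "Record audit results and the thresholds used"],
    "Manage":  ["Remediate or retire flagged tools",
                "Review vendor bias-testing disclosures annually"],
}

for function, tasks in NIST_AI_RMF_PLAN.items():
    print(function)
    for task in tasks:
        print(f"  - {task}")
```

The value of organizing the work this way is that the structure itself is jurisdiction-neutral: the same plan produces the documentation Illinois, Texas, and Colorado each ask about, just emphasized differently.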
Get state-specific compliance documents in place now. Illinois requires employer notice to employees and applicants that AI is being used in employment decisions (775 ILCS 5/2-102). Colorado requires impact assessments for high-risk systems (C.R.S. § 6-1-1701). Texas requires documentation demonstrating that any AI use was not intentionally discriminatory. These are distinct documents with distinct purposes. Our EEOC AI hiring compliance package and Colorado SB 24-205 documentation kit are built around what each statute specifically requires.
The federal guidance vacuum is real. What fills it is the combination of your documentation, your audit records, your vendor agreements, and your state-law compliance materials. That combination is what protects you when there is no unified federal standard to point to.
Sources — Verified against primary sources during research on March 21, 2026:
- EEOC — Artificial Intelligence and the ADA — the only surviving EEOC AI guidance; all other AI pages verified as returning 404.
- Texas HB 149 (TRAIGA) Enrolled Text — Sec. 552.056 intent standard verified.
- Colorado SB 24-205 — C.R.S. § 6-1-1701 et seq. reasonable care standard verified.
- Illinois HB 3773 / Public Act 103-0804 — Illinois Human Rights Act amendment, disparate impact standard verified.
Disclaimer: This article is for informational purposes only and does not constitute legal advice, legal representation, or an attorney-client relationship. Laws and regulations change frequently. You should consult a licensed attorney to verify that the information in this article is current, complete, and applicable to your specific situation before relying on it. AI Compliance Documents is not a law firm and does not practice law.