
AI Compliance Penalties by State: What Happens If You Ignore the Law
Two-Sentence Summary
State and federal governments have enacted specific, escalating financial penalties for businesses that use AI in hiring, lending, insurance, and consumer decisions without proper documentation, bias audits, and consumer notices. This article breaks down the exact penalty amounts, per-violation structures, and enforcement mechanisms in New York City, Illinois, Colorado, Texas, California, and under federal civil rights law — so you can see precisely what's at stake if you don't comply.
One of the most reliable ways to build a compliance program that actually gets done is to understand exactly what happens if you don't. Abstract regulatory obligations are easy to defer. Concrete penalty amounts attached to real enforcement mechanisms are a different conversation.
This article does one thing: it tells you what the penalties are. Not estimates, not ranges derived from legal theory — the actual figures in the actual statutes, with citations you can verify. The law in this area is moving fast, and several states have enacted penalty structures in the last two years that most businesses outside of compliance and legal teams don't know about.
Every number in this article is drawn from enacted statute text. Where the law structures penalties in tiers, we've explained the tiers. Where the law creates per-day accrual of violations, we've noted that, because it's the part that turns a compliance oversight into a six-figure liability.
New York City — Local Law 144 (Automated Employment Decision Tools)
The law: NYC Administrative Code § 20-872, enacted as Local Law 144 of 2021. Enforcement began July 5, 2023.
Who it covers: Employers and employment agencies that use automated employment decision tools (AEDTs) for employment decisions affecting candidates or employees who work in New York City. An AEDT is any computational process using machine learning, statistical modeling, data analytics, or AI that produces a score, classification, or recommendation used to substantially assist or replace discretionary decision making in hiring.
Penalty structure:
- First violation: up to $500
- Each subsequent violation: up to $1,500
Per day and per use: The key detail is that a "violation" is not just owning a non-compliant tool. Using a non-audited tool to make a decision is a violation. Using it without providing the required ten-business-day advance notice to a candidate is a violation. Each use, each day, each individual affected by the tool's output creates a potential separate violation event. Enforcement is by the NYC Department of Consumer and Worker Protection (DCWP).
What triggers a violation: Failing to conduct an annual bias audit of the tool before use. Failing to publicly post the bias audit summary on the employer's website. Failing to provide candidates with notice at least ten business days before using the tool. Failing to provide current employees with similar notice.
The practical exposure: An employer using a non-audited AI hiring tool for six months while processing 500 applications per month is potentially looking at thousands of violations at up to $1,500 each, since each use and each missed notice can count separately. DCWP has authority to investigate complaints filed by candidates or employees and to initiate investigations on its own.
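As a rough illustration of how per-use accrual compounds, here is a back-of-envelope sketch in Python. The function name is ours, and the assumption that every use of a non-audited tool counts as exactly one violation is a simplification — actual counting is up to DCWP:

```python
def ll144_exposure(uses: int, first_fine: int = 500, repeat_fine: int = 1500) -> int:
    """Worst-case LL144 exposure if each use of a non-audited AEDT is one
    violation: up to $500 for the first, up to $1,500 for each subsequent."""
    if uses <= 0:
        return 0
    return first_fine + (uses - 1) * repeat_fine

# 500 applications/month for 6 months = 3,000 potential per-use violations.
print(ll144_exposure(500 * 6))  # 500 + 2,999 * 1,500 = 4499000
```

Even at the statutory maximums this is an upper bound, not a prediction — but it shows why per-use accrual, not the headline fine amount, drives the risk.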
Where to read it: NYC Admin. Code § 20-871, § 20-872; NYC DCWP Guidance
Illinois — Human Rights Act (HB3773 AI Provisions)
The law: 775 ILCS 5/2-102(L), added by Public Act 103-0804 (HB3773), effective January 1, 2026. Penalties are in 775 ILCS 5/8A-104.
Who it covers: Any employer with employees in Illinois that uses artificial intelligence in employment decisions — recruitment, hiring, promotion, renewal, training, discharge, discipline, or terms and conditions of employment.
Penalty structure (per violation, per aggrieved party):
- First violation: up to $16,000
- Second violation within five years: up to $42,500
- Two or more prior violations within seven years: up to $70,000
What "per aggrieved party" means: This is the number that changes the math. The penalties above are not per case — they're per individual person affected. If your AI hiring tool creates a discriminatory effect on 200 applicants and the Illinois Department of Human Rights (IDHR) treats each as a separate aggrieved party, the per-violation amount applies to each one. At the $16,000 first-violation rate, that is $3.2 million in potential penalties. At the $70,000 escalated rate, that is $14 million.
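The per-aggrieved-party arithmetic above can be sketched directly. This is illustrative only — `idhr_exposure` is our hypothetical helper, and whether IDHR would treat each applicant as a separate aggrieved party is fact-dependent:

```python
def idhr_exposure(aggrieved_parties: int, per_party_penalty: int) -> int:
    """Aggregate civil penalty if each affected person is counted as a
    separate aggrieved party at the given per-violation rate."""
    return aggrieved_parties * per_party_penalty

print(idhr_exposure(200, 16_000))  # first-violation rate: 3200000
print(idhr_exposure(200, 70_000))  # escalated rate: 14000000
```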
What triggers a violation: Using AI in an employment decision in a way that "has the effect of" discriminating against protected classes — including race, color, religion, sex, national origin, ancestry, age, disability, marital status, military status, sexual orientation, or pregnancy. Also failing to provide notice to employees that AI is being used in employment decisions (the exact notice format is being finalized by IDHR as of this writing).
Additional remedies: Beyond civil penalties, the Illinois Human Rights Commission can order: cease and desist, hiring or reinstatement with back pay, actual damages to complainants, attorney fees and expert witness fees, and any other relief necessary to make the complainant whole.
Enforcement: By IDHR through its charge-filing process. Any employee or applicant can file a charge. IDHR can also initiate investigations on its own.
Where to read it: 775 ILCS 5/2-102(L) (the AI provision); 775 ILCS 5/8A-104 (penalty schedule); IDHR Legislative Update
Illinois — Artificial Intelligence Video Interview Act
The law: 820 ILCS 42, effective January 1, 2020. Illinois's first AI employment law, predating HB3773 by six years.
Who it covers: Any employer that uses AI to analyze video interviews of job applicants.
What it requires: Employers must notify applicants that AI may be used to analyze the video interview. Employers must explain how the AI works and what characteristics it evaluates. Employers must obtain the applicant's consent before using AI analysis. Employers may not share video interviews except with those whose expertise is necessary to evaluate the candidate.
Penalty structure:
- Civil penalties of up to $500 per day for violations of the consent requirement
The interaction with HB3773: The Video Interview Act and HB3773's AI employment provisions are separate. Complying with one doesn't satisfy the other. If you use AI video interview tools in Illinois, you have obligations under both statutes. Both are actively enforceable today.
Where to read it: 820 ILCS 42
Colorado — SB 24-205 (Consumer Protections for Artificial Intelligence)
The law: C.R.S. § 6-1-1701 et seq., enacted May 17, 2024, effective June 30, 2026 (extended from February 1, 2026 by SB25B-004).
Who it covers: Developers and deployers of high-risk AI systems used to make or substantially factor into consequential decisions affecting Colorado consumers in employment, lending, insurance, housing, healthcare, education, or legal services.
Penalty structure: Colorado's SB 24-205 does not create its own standalone penalty schedule. Instead, violations are treated as deceptive trade practices under the Colorado Consumer Protection Act (C.R.S. § 6-1-105 et seq.).
The Colorado Consumer Protection Act provides:
- Civil penalties of up to $20,000 per violation for knowing violations
- In some circumstances, treble damages for actual harm caused
- Attorney fees and costs recoverable by the Attorney General
- Injunctive relief
Who enforces it: The Colorado Attorney General has exclusive enforcement authority — the law creates no private right of action, so enforcement runs entirely through AG action. The AG also has rulemaking authority for the law's implementation.
Why the enforcement is significant: The AG has broad investigative powers, can seek injunctive relief that forces a company to stop using a non-compliant AI system (which can be immediately operationally disruptive), and can seek civil penalties on a per-violation basis. A finding that multiple consumers were affected by an algorithm that failed to meet the law's requirements — no impact assessment, no consumer notice, no right to appeal — can generate substantial aggregate penalties even if the per-violation amount is modest.
Effective date note: The compliance deadline is June 30, 2026. That is roughly three and a half months from the date of this article.
Where to read it: Colorado SB24-205 at the General Assembly; Colorado Consumer Protection Act — C.R.S. § 6-1-105
Texas — HB 149 / TRAIGA (Texas Responsible AI Governance Act)
The law: Texas HB 149, commonly called TRAIGA (Texas Responsible AI Governance Act). Enacted in 2025. This is Texas's comprehensive AI regulation.
Who it covers: Developers and deployers of high-risk AI systems as defined in the statute, operating in Texas or affecting Texas residents in consequential decisions.
Penalty structure: Texas creates a tiered penalty structure with explicit escalation for refusal to cure and for continuing violations.
Curable violations:
- Civil penalty between $10,000 and $12,000 per violation
Uncurable violations (no cure available):
- Civil penalty between $80,000 and $200,000 per violation
Continuing violations:
- Additional penalty between $2,000 and $40,000 per day for each day a violation continues after notice and expiration of any cure period
What "curable" means: A curable violation is one the Texas Attorney General determines can be corrected. If the AG sends notice and the developer or deployer corrects the violation within the statutory cure period, the base penalty is at the lower tier. If the violation cannot be cured — meaning the harm already occurred and cannot be remediated — the higher uncurable-violation penalty applies. The per-day continuing violation penalty is designed to prevent companies from receiving notice and then dragging their feet on correction.
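The continuing-violation accrual can be sketched as follows. This is illustrative — `traiga_continuing_exposure` is our hypothetical helper, and actual penalties depend on AG discretion and the facts of the violation:

```python
def traiga_continuing_exposure(base_min: int, base_max: int, days: int) -> tuple[int, int]:
    """(min, max) exposure for one violation: the base per-violation penalty
    plus $2,000-$40,000 per day the violation continues after notice and
    expiration of any cure period."""
    return base_min + 2_000 * days, base_max + 40_000 * days

# An uncurable violation ($80,000-$200,000) left uncorrected for 30 days:
print(traiga_continuing_exposure(80_000, 200_000, 30))  # (140000, 1400000)
```

The asymmetry is the point: the per-day accrual quickly dwarfs the base penalty, which is what makes prompt correction after notice the economically rational path.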
What triggers violations: Deploying a high-risk AI system without completing the required impact assessment. Using a high-risk AI system in ways that produce discriminatory outcomes without adequate safeguards. Failing to provide required consumer disclosures. Failing to maintain the required documentation.
Enforcement: By the Texas Attorney General. The AG has authority to investigate, issue civil investigative demands, and bring civil enforcement actions.
Where to read it: Texas HB 149 (available through the Texas Legislature Online)
California — CCPA Automated Decision-Making Technology Rules (ADMT)
The law: The California Consumer Privacy Act (CCPA) and the California Privacy Protection Agency's Automated Decision-Making Technology (ADMT) regulations, which extend CCPA protections to AI systems used in consequential decisions about California residents.
Who it covers: Businesses that collect California residents' personal information and use automated decision-making technology in consequential decisions.
Penalty structure: Violations of CCPA, including the ADMT regulations, are subject to:
- $2,500 per unintentional violation
- $7,500 per intentional violation
- Private right of action for data breach situations with statutory damages of $100 to $750 per consumer per incident, or actual damages, whichever is greater
Who enforces it: The California Privacy Protection Agency (CPPA) has primary enforcement authority; the California Attorney General retains enforcement authority as well.
What's unique about California: California's ADMT rules add a right of access (consumers can request information about AI decisions affecting them), a right to opt out of certain ADM uses, and a right to appeal automated decisions. Non-compliance with these rights creates separate violation events. Given California's population and the number of California-based businesses, this framework creates substantial cumulative exposure.
Where to read it: California Civil Code § 1798.100 et seq. (CCPA); California Privacy Protection Agency Regulations
Federal — EEOC and Title VII
The law: Title VII of the Civil Rights Act of 1964, 42 U.S.C. § 2000e-2; the Age Discrimination in Employment Act (ADEA), 29 U.S.C. § 621; the Americans with Disabilities Act (ADA), 42 U.S.C. § 12101.
These are not AI-specific statutes — they're the foundational federal anti-discrimination laws. But the EEOC has been explicit that they apply to AI hiring tools, and that employers bear responsibility for discriminatory outcomes of AI systems regardless of whether the employer built the tool or bought it from a vendor. (EEOC AI Initiative)
Penalty exposure: Federal civil rights cases can produce:
- Compensatory and punitive damages up to $300,000 per plaintiff (for employers with more than 500 employees under Title VII)
- Back pay and front pay
- Reinstatement
- Attorney fees and costs
- Injunctive relief
For class actions involving widespread discriminatory impact — which is exactly the scenario that AI bias creates — the per-plaintiff caps multiply across all affected individuals, and the litigation cost itself becomes a significant driver.
Where to read it: 42 U.S.C. § 2000e-2; EEOC Guidance on AI
Putting It Together: The Multi-State Exposure Problem
If you're a company with employees in Illinois and New York City, using AI for hiring, and you have customers in California and Colorado — which many mid-size technology and professional services companies do — here's what your penalty exposure looks like for the same AI hiring tool:
- NYC LL144: Daily violations for each use without a bias audit and each failure to provide notice; per-use fines up to $1,500
- Illinois HB3773: Per-aggrieved-party penalties up to $70,000 for each person discriminated against
- Colorado SB 24-205: Up to $20,000 per knowing violation under the Colorado CPA
- California ADMT/CCPA: Up to $7,500 per intentional violation
- Federal (EEOC/Title VII): Compensatory and punitive damages up to $300,000 per plaintiff in individual cases; much more in class actions
These don't stack automatically — you'd need an enforcement action or lawsuit in each jurisdiction for each to apply. But they can apply simultaneously. A large employer with a discriminatory AI hiring tool could face IDHR charges from Illinois employees, DCWP violations for NYC candidates, and a Title VII class action covering all US employees in the same fact pattern. The different legal frameworks are analyzing the same underlying conduct through different lenses.
What Good Documentation Does for You
In every jurisdiction described in this article, documented compliance effort is a meaningful defense. Not a perfect defense — if your tool is actively discriminating, documentation of good-faith effort won't eliminate liability. But it does three things.
First, it reduces the probability that a problem exists. The process of building documentation — inventorying your tools, reviewing outcomes, assessing risks — surfaces problems before they become enforcement events.
Second, it reduces penalties when things do go wrong. Every framework described here treats knowing violations differently from negligent or inadvertent ones. A company that has documented its impact assessments, bias audit results, and notice procedures, and that discovers a problem and acts on it, is in a meaningfully different legal posture than a company that has none of that.
Third, it shortens the investigation. When DCWP, IDHR, or the Colorado AG investigates a complaint, the first thing they ask for is documentation. A company that can produce its bias audit results, its notice records, and its risk management procedures within a few days of a complaint demonstrates that compliance was taken seriously. A company that produces nothing credible invites deeper scrutiny and more aggressive enforcement.
The penalties in this article are real. The enforcement mechanisms are functional. But the compliance programs that prevent penalties are more accessible than many businesses realize. The documentation required by these laws is not primarily a technical challenge — it's a governance and process challenge. And for most companies, the work of building that documentation is well within reach.
If you're operating in multiple states, we've built state-specific compliance documents for each of the major frameworks covered here — including Illinois HB3773, Colorado SB 24-205, and California's ADMT rules — so you don't have to piece together requirements from raw statute text.
Sources — Every penalty figure in this article was verified against enacted statute text at these URLs:
- NYC Administrative Code § 20-870 through 20-872 (Local Law 144 of 2021) — AEDT bias audit mandate and penalty structure; $500/$1,500 per-violation fines.
- NYC Department of Consumer and Worker Protection — Automated Employment Decision Tools — DCWP enforcement guidance and FAQ.
- Illinois Human Rights Act § 2-102(L) — 775 ILCS 5/2-102(L) — HB3773 AI employment provision.
- Illinois Human Rights Act § 8A-104 — 775 ILCS 5/8A-104 — Penalty schedule: $16K/$42.5K/$70K per violation per aggrieved party.
- Illinois Artificial Intelligence Video Interview Act — 820 ILCS 42 — $500/day civil penalty for consent violations.
- Colorado SB24-205 at the Colorado General Assembly — High-risk AI system requirements; deceptive trade practice enforcement.
- Colorado Consumer Protection Act — C.R.S. § 6-1-105 — $20,000 per knowing violation penalty.
- Texas Legislature Online — HB 149 (TRAIGA) — $10K/$12K curable; $80K–$200K uncurable; $2K–$40K per day continuing violation.
- California Civil Code § 1798.100 et seq. (CCPA) — $2,500/$7,500 per violation; $100–$750 per consumer private right of action.
- California Privacy Protection Agency Regulations (ADMT) — Automated decision-making technology rules.
- EEOC AI Initiative and Algorithmic Fairness — Federal civil rights application to AI employment tools.
- 42 U.S.C. § 2000e-2 — Title VII of the Civil Rights Act — Federal anti-discrimination statute underlying EEOC enforcement.
Disclaimer: This article is for general informational purposes only and does not constitute legal advice. Penalty amounts are statutory maximums; actual enforcement outcomes depend on facts, enforcement discretion, and applicable mitigating or aggravating circumstances. Consult a qualified attorney for guidance specific to your situation and jurisdiction.