AI Compliance Penalties by State: What Happens If You Ignore the Law
Tags: penalties · enforcement · state AI laws · compliance · fines


AI Compliance Documents Team · 18 min read

Two-Sentence Summary

State and federal governments have enacted specific, escalating financial penalties for businesses that use AI in hiring, lending, insurance, and consumer decisions without proper documentation, bias audits, and consumer notices. This article breaks down the exact penalty amounts, per-violation structures, and enforcement mechanisms in New York City, Illinois, Colorado, Texas, California, and under federal civil rights law — so you can see precisely what's at stake if you don't comply.

One of the most reliable ways to build a compliance program that actually gets done is to understand exactly what happens if you don't. Abstract regulatory obligations are easy to defer. Concrete penalty amounts attached to real enforcement mechanisms are a different conversation.

This article does one thing: it tells you what the penalties are. Not estimates, not ranges derived from legal theory — the actual figures in the actual statutes, with citations you can verify. The law in this area is moving fast, and several states have enacted penalty structures in the last two years that most businesses outside of compliance and legal teams don't know about.

Every number in this article is drawn from enacted statute text. Where the law structures penalties in tiers, we've explained the tiers. Where the law creates per-day accrual of violations, we've noted that, because it's the part that turns a compliance oversight into a six-figure liability.

New York City — Local Law 144 (Automated Employment Decision Tools)

The law: NYC Administrative Code § 20-872, enacted as Local Law 144 of 2021. Enforcement began July 5, 2023.

Who it covers: Employers and employment agencies that use automated employment decision tools (AEDTs) for employment decisions affecting candidates or employees who work in New York City. An AEDT is any computational process using machine learning, statistical modeling, data analytics, or AI that produces a score, classification, or recommendation used to substantially assist or replace discretionary decision making in hiring.

Penalty structure:

  • First violation: up to $500
  • Each subsequent violation: up to $1,500

Per day and per use: The key detail is that a "violation" is not just owning a non-compliant tool. Using a non-audited tool to make a decision is a violation. Using it without providing the required ten-business-day advance notice to a candidate is a violation. Each use, each day, each individual affected by the tool's output creates a potential separate violation event. Enforcement is by the NYC Department of Consumer and Worker Protection (DCWP).

What triggers a violation: Failing to conduct an annual bias audit of the tool before use. Failing to publicly post the bias audit summary on the employer's website. Failing to provide candidates with notice at least ten business days before using the tool. Failing to provide current employees with similar notice.

The practical exposure: An employer using a non-audited AI hiring tool for six months while processing 500 applications per month has made roughly 3,000 separate uses of the tool, each a potential violation at up to $1,500. DCWP has authority to investigate complaints filed by candidates or employees and to initiate investigations on its own.
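The accrual math above can be sketched in a few lines. This is illustrative only: the function name is ours, it assumes (aggressively, but consistent with the per-use framing) that each use of a non-audited tool on each candidate counts as a separate violation, and DCWP retains discretion over actual penalty amounts.

```python
# Statutory maximums from NYC Admin. Code § 20-872: up to $500 for the first
# violation, up to $1,500 for each subsequent violation.
FIRST_VIOLATION_MAX = 500
SUBSEQUENT_VIOLATION_MAX = 1_500

def nyc_ll144_max_exposure(total_uses: int) -> int:
    """Worst-case penalty if every use of a non-audited AEDT counts separately."""
    if total_uses <= 0:
        return 0
    return FIRST_VIOLATION_MAX + (total_uses - 1) * SUBSEQUENT_VIOLATION_MAX

# 500 applications/month for 6 months = 3,000 separate uses
print(nyc_ll144_max_exposure(500 * 6))  # 500 + 2,999 * 1,500 = 4,499,000
```

Even at the modest-sounding $1,500 cap, per-use accrual over six months produces seven-figure worst-case exposure.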

Where to read it: NYC Admin. Code § 20-871, § 20-872; NYC DCWP Guidance


Illinois — Human Rights Act (HB3773 AI Provisions)

The law: 775 ILCS 5/2-102(L), added by Public Act 103-0804 (HB3773), effective January 1, 2026. Penalties are in 775 ILCS 5/8A-104.

Who it covers: Any employer with employees in Illinois that uses artificial intelligence in employment decisions — recruitment, hiring, promotion, renewal, training, discharge, discipline, or terms and conditions of employment.

Penalty structure (per violation, per aggrieved party):

  • First violation: up to $16,000
  • Second violation within five years: up to $42,500
  • Two or more prior violations within seven years: up to $70,000

What "per aggrieved party" means: This is the number that changes the math. The penalties above are not per case — they're per individual person affected. If your AI hiring tool creates a discriminatory effect on 200 applicants and the Illinois Department of Human Rights (IDHR) treats each as a separate aggrieved party, the per-violation amount applies to each one. At the $16,000 first-violation rate, that is $3.2 million in potential penalties. At the $70,000 escalated rate, that is $14 million.
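The tier selection and per-party multiplication can be sketched as follows. This is an illustrative simplification: the function name is ours, and in practice the Illinois Human Rights Commission applies the tiers through fact-finding and discretion, not mechanically.

```python
# Escalation tiers as described above (775 ILCS 5/8A-104), per aggrieved party.
def ihra_max_per_party(prior_in_5yr: int, prior_in_7yr: int) -> int:
    """Statutory maximum civil penalty per aggrieved party."""
    if prior_in_7yr >= 2:
        return 70_000   # two or more prior violations within seven years
    if prior_in_5yr >= 1:
        return 42_500   # second violation within five years
    return 16_000       # first violation

affected_applicants = 200

# No prior history: first-violation tier applies to each person.
print(affected_applicants * ihra_max_per_party(0, 0))  # 3,200,000

# Two or more priors within seven years: escalated tier, per person.
print(affected_applicants * ihra_max_per_party(2, 2))  # 14,000,000
```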

What triggers a violation: Using AI in an employment decision in a way that "has the effect of" discriminating against protected classes — including race, color, religion, sex, national origin, ancestry, age, disability, marital status, military status, sexual orientation, or pregnancy. Also failing to provide notice to employees that AI is being used in employment decisions (the exact notice format is being finalized by IDHR as of this writing).

Additional remedies: Beyond civil penalties, the Illinois Human Rights Commission can order: cease and desist, hiring or reinstatement with back pay, actual damages to complainants, attorney fees and expert witness fees, and any other relief necessary to make the complainant whole.

Enforcement: By IDHR through its charge-filing process. Any employee or applicant can file a charge. IDHR can also initiate investigations on its own.

Where to read it: 775 ILCS 5/2-102(L) (the AI provision); 775 ILCS 5/8A-104 (penalty schedule); IDHR Legislative Update


Illinois — Artificial Intelligence Video Interview Act

The law: 820 ILCS 42, effective January 1, 2020. Illinois's first AI employment law, taking effect six years before HB3773's AI provisions.

Who it covers: Any employer that uses AI to analyze video interviews of job applicants.

What it requires: Employers must notify applicants that AI may be used to analyze the video interview. Employers must explain how the AI works and what characteristics it evaluates. Employers must obtain the applicant's consent before using AI analysis. Employers may not share video interviews except with those whose expertise is necessary to evaluate the candidate.

Penalty structure:

  • Civil penalties of up to $500 per day for violations of the consent requirement

The interaction with HB3773: The Video Interview Act and HB3773's AI employment provisions are separate. Complying with one doesn't satisfy the other. If you use AI video interview tools in Illinois, you have obligations under both statutes. Both are actively enforceable today.

Where to read it: 820 ILCS 42


Colorado — SB 24-205 (Consumer Protections for Artificial Intelligence)

The law: C.R.S. § 6-1-1701 et seq., enacted May 17, 2024, effective June 30, 2026 (extended from February 1, 2026 by SB25B-004).

Who it covers: Developers and deployers of high-risk AI systems used to make or substantially factor into consequential decisions affecting Colorado consumers in employment, lending, insurance, housing, healthcare, education, or legal services.

Penalty structure: Colorado's SB 24-205 does not create its own standalone penalty schedule. Instead, violations are treated as deceptive trade practices under the Colorado Consumer Protection Act (C.R.S. § 6-1-105 et seq.).

The Colorado Consumer Protection Act provides:

  • Civil penalties of up to $20,000 per violation for knowing violations
  • In some circumstances, treble damages for actual harm caused
  • Attorney fees and costs recoverable by the Attorney General
  • Injunctive relief

Who enforces it: The Colorado Attorney General has exclusive enforcement authority. Private rights of action are limited — the primary enforcement mechanism is AG action. The AG also has rulemaking authority for the law's implementation.

Why the enforcement is significant: The AG has broad investigative powers, can seek injunctive relief that forces a company to stop using a non-compliant AI system (which can be immediately operationally disruptive), and can seek civil penalties on a per-violation basis. A finding that multiple consumers were affected by an algorithm that failed to meet the law's requirements — no impact assessment, no consumer notice, no right to appeal — can generate substantial aggregate penalties even if the per-violation amount is modest.

Effective date note: The compliance deadline is June 30, 2026. That is roughly three and a half months from the date of this article.

Where to read it: Colorado SB24-205 at the General Assembly; Colorado Consumer Protection Act — C.R.S. § 6-1-105


Texas — HB 149 / TRAIGA (Texas Responsible AI Governance Act)

The law: Texas HB 149, commonly called TRAIGA (Texas Responsible AI Governance Act), enacted in 2025. This is Texas's comprehensive AI regulation.

Who it covers: Developers and deployers of high-risk AI systems as defined in the statute, operating in Texas or affecting Texas residents in consequential decisions.

Penalty structure: Texas creates a tiered penalty structure with explicit escalation for refusal to cure and for continuing violations.

Curable violations:

  • First violation where the developer or deployer takes appropriate corrective action within the cure period: civil penalty between $2,000 and $10,000 depending on violation type

Uncurable violations (no cure available):

  • Civil penalty between $80,000 and $200,000 per violation

Continuing violations:

  • Additional penalty between $2,000 and $40,000 per day for each day a violation continues after notice and expiration of any cure period

What "curable" means: A curable violation is one the Texas Attorney General determines can be corrected. If the AG sends notice and the developer or deployer corrects the violation within the statutory cure period, the base penalty is at the lower tier. If the violation cannot be cured — meaning the harm already occurred and cannot be remediated — the higher uncurable-violation penalty applies. The per-day continuing violation penalty is designed to prevent companies from receiving notice and then dragging their feet on correction.
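A rough model of worst-case exposure under this tiered structure follows. It is illustrative: the function name is ours, only the upper bound of each statutory range is modeled, and the actual figure within a range is set in the enforcement action.

```python
# Upper bounds of the TRAIGA ranges described above.
CURABLE_MAX = 10_000             # corrected within cure period: $2,000-$10,000
UNCURABLE_MAX = 200_000          # no cure available: $80,000-$200,000
CONTINUING_PER_DAY_MAX = 40_000  # $2,000-$40,000 per day past the cure period

def traiga_max_exposure(curable: bool, days_uncured_after_notice: int = 0) -> int:
    """Upper-bound civil penalty for a single TRAIGA violation."""
    base = CURABLE_MAX if curable else UNCURABLE_MAX
    return base + days_uncured_after_notice * CONTINUING_PER_DAY_MAX

# A curable violation promptly corrected:
print(traiga_max_exposure(curable=True))                             # 10,000

# The same violation left uncorrected for 30 days past the cure period:
print(traiga_max_exposure(curable=True, days_uncured_after_notice=30))  # 1,210,000
```

The per-day term dominates quickly, which is exactly the foot-dragging deterrent the statute is designed to create.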

What triggers violations: Deploying a high-risk AI system without completing the required impact assessment. Using a high-risk AI system in ways that produce discriminatory outcomes without adequate safeguards. Failing to provide required consumer disclosures. Failing to maintain the required documentation.

Enforcement: By the Texas Attorney General. The AG has authority to investigate, issue civil investigative demands, and bring civil enforcement actions.

Where to read it: Texas HB 149 (available through the Texas Legislature Online)


California — CCPA Automated Decision-Making Technology Rules (ADMT)

The law: California's Consumer Privacy Act and the California Privacy Protection Agency's Automated Decision-Making Technology (ADMT) regulations, which expand CCPA protections to cover AI systems used in consequential decisions about California residents.

Who it covers: Businesses that collect California residents' personal information and use automated decision-making technology in consequential decisions.

Penalty structure: Violations of CCPA, including the ADMT regulations, are subject to:

  • $2,500 per unintentional violation
  • $7,500 per intentional violation
  • Private right of action for data breach situations with statutory damages of $100 to $750 per consumer per incident, or actual damages, whichever is greater

Who enforces it: The California Privacy Protection Agency (CPPA) has primary enforcement authority; the California Attorney General retains enforcement authority as well.

What's unique about California: California's ADMT rules add a right of access (consumers can request information about AI decisions affecting them), a right to opt out of certain ADM uses, and a right to appeal automated decisions. Non-compliance with these rights creates separate violation events. Given California's population and the number of California-based businesses, this framework creates substantial cumulative exposure.

Where to read it: California Civil Code § 1798.100 et seq. (CCPA); California Privacy Protection Agency Regulations


Federal — EEOC and Title VII

The law: Title VII of the Civil Rights Act of 1964, 42 U.S.C. § 2000e-2; the Age Discrimination in Employment Act (ADEA), 29 U.S.C. § 621; the Americans with Disabilities Act (ADA), 42 U.S.C. § 12101.

These are not AI-specific statutes — they're the foundational federal anti-discrimination laws. But the EEOC has been explicit that they apply to AI hiring tools, and that employers bear responsibility for discriminatory outcomes of AI systems regardless of whether the employer built the tool or bought it from a vendor. (EEOC AI Initiative)

Penalty exposure: Federal civil rights cases can produce:

  • Compensatory and punitive damages up to $300,000 per plaintiff (the Title VII cap for employers with more than 500 employees)
  • Back pay and front pay
  • Reinstatement
  • Attorney fees and costs
  • Injunctive relief

For class actions involving widespread discriminatory impact — which is exactly the scenario that AI bias creates — the per-plaintiff caps multiply across all affected individuals, and the litigation cost itself becomes a significant driver.

Where to read it: 42 U.S.C. § 2000e-2; EEOC Guidance on AI


Putting It Together: The Multi-State Exposure Problem

If you're a company with employees in Illinois and New York City, using AI for hiring, and you have customers in California and Colorado — which many mid-size technology and professional services companies do — here's what your penalty exposure looks like for the same AI hiring tool:

  • NYC LL144: Daily violations for each use without a bias audit and each failure to provide notice; per-use fines up to $1,500
  • Illinois HB3773: Per-aggrieved-party penalties up to $70,000 for each person discriminated against
  • Colorado SB 24-205: Up to $20,000 per knowing violation under the Colorado CPA
  • California ADMT/CCPA: Up to $7,500 per intentional violation
  • Federal (EEOC/Title VII): Compensatory and punitive damages up to $300,000 per plaintiff in individual cases; much more in class actions

These don't stack automatically — you'd need an enforcement action or lawsuit in each jurisdiction for each to apply. But they can apply simultaneously. A large employer with a discriminatory AI hiring tool could face IDHR charges from Illinois employees, DCWP violations for NYC candidates, and a Title VII class action covering all US employees in the same fact pattern. The different legal frameworks are analyzing the same underlying conduct through different lenses.

What Good Documentation Does for You

In every jurisdiction described in this article, documented compliance effort is a meaningful defense. Not a perfect defense — if your tool is actively discriminating, documentation of good-faith effort won't eliminate liability. But it does three things.

First, it reduces the probability that a problem exists. The process of building documentation — inventorying your tools, reviewing outcomes, assessing risks — surfaces problems before they become enforcement events.

Second, it reduces penalties when things do go wrong. Every framework described here treats knowing violations differently from negligent or inadvertent ones. A company that has documented its impact assessments, bias audit results, and notice procedures, and that discovers a problem and acts on it, is in a meaningfully different legal posture than a company that has none of that.

Third, it shortens the investigation. When DCWP, IDHR, or the Colorado AG investigates a complaint, the first thing they ask for is documentation. A company that can produce its bias audit results, its notice records, and its risk management procedures within a few days of a complaint demonstrates that compliance was taken seriously. A company that produces nothing credible invites deeper scrutiny and more aggressive enforcement.

The penalties in this article are real. The enforcement mechanisms are functional. But the compliance programs that prevent penalties are more accessible than many businesses realize. The documentation required by these laws is not primarily a technical challenge — it's a governance and process challenge. And for most companies, the work of building that documentation is well within reach.

If you're operating in multiple states, we've built state-specific compliance documents for each of the major frameworks covered here — including Illinois HB3773, Colorado SB 24-205, and California's ADMT rules — so you don't have to piece together requirements from raw statute text.


Sources: Every penalty figure in this article was verified against enacted statute text; see the "Where to read it" citations at the end of each section.

Disclaimer: This article is for general informational purposes only and does not constitute legal advice. Penalty amounts are statutory maximums; actual enforcement outcomes depend on facts, enforcement discretion, and applicable mitigating or aggravating circumstances. Consult a qualified attorney for guidance specific to your situation and jurisdiction.

What 'Per-Violation' Penalties Actually Mean
When you read that a law imposes a penalty of "$16,000 per violation," it sounds like a single fine. It's not. And the difference between a single fine and a per-violation penalty structure is often the difference between a manageable expense and a company-ending liability.

Think of it like parking tickets. If you park illegally once, you get one ticket — maybe $75. Annoying, but not catastrophic. But imagine your company operates a fleet of 200 delivery trucks, and every single one of them is parked illegally in the same lot every day for six months. You don't owe one $75 ticket. You owe one ticket per truck, per day. That's 200 trucks times 180 days times $75 — $2.7 million. The per-unit, per-day structure is what turns a small fine into a massive one.

AI penalty structures work the same way. Illinois HB3773 assesses penalties "per violation, per aggrieved party." If your AI hiring tool discriminates against 200 applicants, the $16,000 first-violation cap applies to each of those 200 people separately — that's $3.2 million. NYC Local Law 144 creates a separate violation for each use of a non-audited tool on each candidate, each day. Texas TRAIGA adds a per-day continuing penalty on top of the base fine for every day you fail to correct a violation after receiving notice.

This is why regulators structure penalties this way: they want the cost of non-compliance to scale with the harm. A company that discriminates against five people faces a proportionate penalty. A company whose automated system discriminates against five thousand people faces a penalty that reflects the scope of that harm. The "per violation" language is what makes these laws dangerous to ignore — and it's the detail most businesses miss when they glance at a penalty amount and think it sounds manageable.

Disclaimer: This article is for informational purposes only and does not constitute legal advice, legal representation, or an attorney-client relationship. Laws and regulations change frequently. You should consult a licensed attorney to verify that the information in this article is current, complete, and applicable to your specific situation before relying on it. AI Compliance Documents is not a law firm and does not practice law.
