
Colorado's AI Law Takes Effect in 91 Days. Here's What It Requires.
Two-Sentence Summary
Colorado's SB 24-205, as amended by SB 25B-004, imposes enforceable obligations on developers and deployers of high-risk artificial intelligence systems beginning June 30, 2026 — now just 91 days away. Businesses that use AI in consequential decisions about employment, housing, credit, health care, insurance, education, or legal services must have risk management policies, impact assessments, consumer notices, and disclosure frameworks in place by that date or face civil penalties of up to $20,000 per violation under the Colorado Consumer Protection Act.
What the Law Is and What Changed
The Colorado General Assembly passed SB 24-205, "Consumer Protections for Artificial Intelligence," during its 2024 regular session. Governor Polis signed it on May 17, 2024. The law creates Part 17 of Article 1 of Title 6 of the Colorado Revised Statutes (C.R.S. §§ 6-1-1701 through 6-1-1707), establishing the nation's first comprehensive state framework aimed at preventing algorithmic discrimination in high-risk AI systems.
The original law set February 1, 2026, as the date obligations would take effect. That changed during an August 2025 extraordinary session. SB 25B-004, "Increase Transparency for Algorithmic Systems," was introduced on August 21, 2025, passed both chambers, and was signed by the Governor on August 28, 2025.
SB 25B-004 made one structural change: it replaced every instance of "February 1, 2026" with "June 30, 2026" across three sections of the statute — C.R.S. § 6-1-1702 (developer duties), § 6-1-1703 (deployer duties), and § 6-1-1704 (consumer disclosure). No changes were made to the definitions, enforcement, affirmative defense, permitted activities, or rulemaking provisions.
Who the Law Covers
Under C.R.S. § 6-1-1701, a "developer" is a person doing business in Colorado that develops or intentionally and substantially modifies an AI system. A "deployer" is a person doing business in Colorado that deploys (uses) a high-risk AI system. A "consumer" is defined as a Colorado resident.
The law applies exclusively to "high-risk" AI systems: those that make, or are a substantial factor in making, a "consequential decision." The statute defines consequential decisions as those affecting education, employment, financial or lending services, essential government services, health-care services, housing, insurance, or legal services (§ 6-1-1701(2)).
No Legislative Rescue Is Coming
A search of the Colorado General Assembly bill database for "artificial intelligence" and "algorithmic" across the 2026 Regular Session reveals that no bill has been introduced to amend, revise, delay, or repeal Part 17.
The legislature has had four opportunities to substantively revise the AI law, and none has produced a revision to the law's core requirements:
- SB 25-318, "Artificial Intelligence Consumer Protections" — indefinitely postponed by a 5-2 vote in the Senate Committee on Business, Labor, & Technology on May 5, 2025
- HB 25B-1009, "Artificial Intelligence Systems" — indefinitely postponed during the August 2025 extraordinary session
- SB 25B-008, "Tech-Neutral Anti-Discrimination Clarification Act" — indefinitely postponed during the August 2025 extraordinary session
- 2026 Regular Session — zero amending bills introduced
The General Assembly is scheduled to adjourn sine die on May 13, 2026 — just 43 days from today. With no amending bill introduced and the session winding down, the probability of legislative relief before June 30 is diminishing.
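The two countdowns in this article can be cross-checked with a few lines of date arithmetic. (The "today" of March 31, 2026 is an assumption inferred from the countdowns themselves; it is not stated anywhere in the statute.)

```python
from datetime import date

# Publication date implied by the article's countdowns (an assumption).
today = date(2026, 3, 31)

sine_die = date(2026, 5, 13)    # scheduled adjournment of the General Assembly
effective = date(2026, 6, 30)   # effective date set by SB 25B-004

days_to_adjournment = (sine_die - today).days   # 43
days_to_effective = (effective - today).days    # 91
```

Both figures match the article: 43 days until adjournment, 91 days until the obligations take effect.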
The Penalty Structure
Violations of the AI law are routed through the Colorado Consumer Protection Act via C.R.S. § 6-1-1706(2) and § 6-1-105. Under C.R.S. § 6-1-112(1)(a), the attorney general or a district attorney may bring a civil action, and a person who violates any provision is subject to a civil penalty of not more than $20,000 for each violation. Where the violation involves a consumer who is 60 years of age or older, the maximum penalty increases to $50,000 per violation (C.R.S. § 6-1-112(1)(c)).
The statute specifies that a violation constitutes a separate violation with respect to each affected consumer. A single non-compliant AI system affecting 1,000 consumers therefore creates theoretical exposure of up to $20 million. There is no private right of action; enforcement rests exclusively with the attorney general under § 6-1-1706(1).
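To make the per-consumer math concrete, here is an illustrative calculation of the statutory maximums described above. The `max_exposure` helper is our own construction for this sketch, not anything defined in the statute, and it assumes every violation is penalized at the statutory maximum.

```python
# Statutory maximums under C.R.S. § 6-1-112 (each consumer = one violation).
PENALTY_STANDARD = 20_000   # max per violation, § 6-1-112(1)(a)
PENALTY_ELDER = 50_000      # max where the consumer is 60 or older, § 6-1-112(1)(c)

def max_exposure(consumers: int, elder_consumers: int = 0) -> int:
    """Theoretical maximum civil penalty for one non-compliant system."""
    standard = (consumers - elder_consumers) * PENALTY_STANDARD
    elder = elder_consumers * PENALTY_ELDER
    return standard + elder

max_exposure(1_000)        # 20,000,000 — the $20 million figure above
max_exposure(1_000, 100)   # 23,000,000 if 100 of those consumers are 60 or older
```

Actual penalties are discretionary ceilings, not automatic awards, but the ceiling is what a risk assessment has to budget against.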
The Affirmative Defense
The statute provides an affirmative defense under § 6-1-1706(3) for businesses that meet both conditions:
- Comply with a nationally or internationally recognized AI risk management framework designated by the act or the Attorney General
- Take measures to discover and correct violations of the act
The statute does not name the NIST AI Risk Management Framework specifically — it references "a nationally or internationally recognized risk management framework for artificial intelligence systems." The NIST AI RMF and ISO/IEC 42001 are the most prominent frameworks that fit this description.
Both conditions must be met. Framework compliance alone is not sufficient without the discovery-and-correction mechanism.
What Other States Are Doing
Colorado is not alone, but it is ahead of the curve:
- Texas: HB 149 (TRAIGA) — signed June 22, 2025, effective January 1, 2026. Covers all AI deployers and developers in Texas with no high-risk carveout.
- Illinois: Public Act 103-0804 — effective January 1, 2026. Amends the Illinois Human Rights Act to prohibit discriminatory AI in employment and require employee notification.
- California: CPPA ADMT regulations — risk assessments required January 1, 2026. Consumer ADMT notice and opt-out requirements begin January 1, 2027.
What Employers Should Do Now
Stop waiting for the legislature. No bill to amend Part 17 has been introduced in the 2026 session, and the General Assembly adjourns in 43 days. The legislative record shows a consistent inability to muster the votes to revise this law. Do not build your compliance plan around the assumption of legislative relief.
The statute is self-executing. The AG holds exclusive enforcement authority (§ 6-1-1706(1)) and is not required to complete rulemaking before bringing an enforcement action. The law's requirements take effect June 30, 2026, with or without implementing rules in place.
Prioritize the affirmative defense. Adopt the NIST AI RMF or ISO/IEC 42001 (or both), document your compliance, and build internal discovery-and-correction mechanisms. This is the closest thing the statute offers to a safe harbor.
Our Colorado SB 24-205 Compliance Package includes all 8 required documents — risk management policy, impact assessment, consumer notice, developer disclosure, and more. Built from the enacted statute text, not summaries. $449, instant download.
Sources
- Colorado SB 24-205 — Consumer Protections for Artificial Intelligence
- Colorado SB 25B-004 — Effective Date Extension
- Colorado SB 25-318 — Failed 2025 Amendment Attempt
- Colorado General Assembly — 2026 Session AI Bill Search
- Texas HB 149 (TRAIGA) — Enrolled Text
- Illinois Public Act 103-0804
- California CPPA — ADMT Regulation Timeline
- ISO/IEC 42001:2023 — Artificial Intelligence Management System
What Is an Affirmative Defense and Why Does It Matter Here?
Imagine you're playing a game and someone accuses you of breaking a rule. Normally, you'd just say 'I didn't do it' and try to prove you're innocent. But what if the game had a special rule that said: 'If you were following the official playbook, the other person has to prove you cheated — not the other way around.'
That's roughly what an affirmative defense does, with one twist: you are the one who must show you followed the playbook. It doesn't mean you can't get in trouble at all. It means that if you prove you followed a specific set of steps, you have a complete answer to the accusation.
In Colorado's AI law, the affirmative defense under C.R.S. § 6-1-1706(3) works like this: if your business (1) follows a nationally or internationally recognized AI risk management framework, AND (2) takes genuine steps to discover and correct violations in your AI systems, the law gives you a legal shield against an enforcement action. Separately, complying with the developer and deployer duties creates a rebuttable presumption that you used reasonable care, so anyone trying to take legal action against you starts at a disadvantage: they have to overcome the assumption that you did the right thing.
As noted above, the statute does not name any framework; it refers to 'a nationally or internationally recognized risk management framework for artificial intelligence systems,' with the NIST AI RMF and ISO/IEC 42001 the most prominent candidates. And both conditions must be met: framework compliance alone is not enough without the discovery-and-correction mechanism.
The practical takeaway: building your compliance program around a recognized framework isn't just good practice — it's a legal strategy that gives you real protection under C.R.S. § 6-1-1706(3) if something goes wrong.
Disclaimer: This article is for informational purposes only and does not constitute legal advice, legal representation, or an attorney-client relationship. Laws and regulations change frequently. You should consult a licensed attorney to verify that the information in this article is current, complete, and applicable to your specific situation before relying on it. AI Compliance Documents is not a law firm and does not practice law.