Colorado's AI Law Takes Effect in 91 Days. Here's What It Requires.
Colorado · SB 24-205 · AI compliance deadline · June 2026 · impact assessment · algorithmic discrimination

AI Compliance Documents Team · 10 min read

Two-Sentence Summary

Colorado's SB 24-205, as amended by SB 25B-004, imposes enforceable obligations on developers and deployers of high-risk artificial intelligence systems beginning June 30, 2026 — now just 91 days away. Businesses that use AI in consequential decisions about employment, housing, credit, health care, insurance, education, or legal services must have risk management policies, impact assessments, consumer notices, and disclosure frameworks in place by that date or face civil penalties of up to $20,000 per violation under the Colorado Consumer Protection Act.

What the Law Is and What Changed

The Colorado General Assembly passed SB 24-205, "Consumer Protections for Artificial Intelligence," during its 2024 regular session. Governor Polis signed it on May 17, 2024. The law creates Part 17 of Article 1 of Title 6 of the Colorado Revised Statutes (C.R.S. §§ 6-1-1701 through 6-1-1707), establishing what is now the nation's first comprehensive state framework aimed at preventing algorithmic discrimination in high-risk AI systems.

The original law set February 1, 2026, as the date obligations would take effect. That changed during an August 2025 extraordinary session. SB 25B-004, "Increase Transparency for Algorithmic Systems," was introduced on August 21, 2025, passed both chambers, and was signed by the Governor on August 28, 2025.

SB 25B-004 made one structural change: it replaced every instance of "February 1, 2026" with "June 30, 2026" across three sections of the statute — C.R.S. § 6-1-1702 (developer duties), § 6-1-1703 (deployer duties), and § 6-1-1704 (consumer disclosure). No changes were made to the definitions, enforcement, affirmative defense, permitted activities, or rulemaking provisions.

Who the Law Covers

Under C.R.S. § 6-1-1701, a "developer" is a person doing business in Colorado that develops or intentionally and substantially modifies an AI system. A "deployer" is a person doing business in Colorado that deploys (uses) a high-risk AI system. A "consumer" is defined as a Colorado resident.

The law applies exclusively to "high-risk" AI systems: those that make, or are a substantial factor in making, a "consequential decision." The statute defines consequential decisions as those affecting education, employment, financial or lending services, essential government services, health-care services, housing, insurance, or legal services (§ 6-1-1701(2)).

No Legislative Rescue Is Coming

A search of the Colorado General Assembly bill database for "artificial intelligence" and "algorithmic" across the 2026 Regular Session reveals that no bill has been introduced to amend, revise, delay, or repeal Part 17.

The legislature has had four opportunities to revise the AI law substantively, and none produced a change to its core requirements:

  • SB 25-318, "Artificial Intelligence Consumer Protections" — indefinitely postponed by a 5-2 vote in the Senate Committee on Business, Labor, & Technology on May 5, 2025
  • HB 25B-1009, "Artificial Intelligence Systems" — indefinitely postponed during the August 2025 extraordinary session
  • SB 25B-008, "Tech-Neutral Anti-Discrimination Clarification Act" — indefinitely postponed during the August 2025 extraordinary session
  • 2026 Regular Session — zero amending bills introduced

The General Assembly is scheduled to adjourn sine die on May 13, 2026 — just 43 days from today. With no amending bill introduced and the session winding down, the probability of legislative relief before June 30 is diminishing.
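As a sanity check, the article's two countdown figures are mutually consistent: counting back 91 days from June 30, 2026 and 43 days from May 13, 2026 both land on March 31, 2026. A minimal Python sketch (the publication date is an assumption inferred from those counts, not stated in the article):

```python
from datetime import date

# Assumed publication date, inferred from the article's "91 days" figure.
today = date(2026, 3, 31)

deadline = date(2026, 6, 30)   # SB 25B-004 compliance date
sine_die = date(2026, 5, 13)   # scheduled adjournment of the General Assembly

print((deadline - today).days)  # 91 -- days until obligations take effect
print((sine_die - today).days)  # 43 -- days of legislative session remaining
```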

The Penalty Structure

Violations of the AI law are routed through the Colorado Consumer Protection Act via C.R.S. § 6-1-1706(2) and § 6-1-105. Under C.R.S. § 6-1-112(1)(a), the attorney general or a district attorney may bring a civil action, and any person who violates any provision is subject to a civil penalty of not more than $20,000 for each violation. Where the violation involves a consumer who is 60 years of age or older, the maximum penalty increases to $50,000 per violation (C.R.S. § 6-1-112(1)(c)).

The statute specifies that a violation constitutes a separate violation with respect to each consumer. A single non-compliant AI system affecting 1,000 consumers creates theoretical exposure of up to $20 million. There is no private right of action — enforcement is AG-exclusive under § 6-1-1706(1).
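The per-consumer exposure math above can be made concrete. This is an illustrative sketch only (a court sets actual penalties, up to the statutory maximums; the function and its parameters are ours, not the statute's):

```python
# Statutory maximums under C.R.S. § 6-1-112(1)(a) and (1)(c).
PER_VIOLATION = 20_000        # each violation, per affected consumer
PER_VIOLATION_ELDER = 50_000  # consumer aged 60 or older

def max_exposure(consumers: int, elder_consumers: int = 0) -> int:
    """Theoretical maximum civil penalty, treating each affected
    consumer as a separate violation."""
    regular = consumers - elder_consumers
    return regular * PER_VIOLATION + elder_consumers * PER_VIOLATION_ELDER

print(max_exposure(1_000))  # 20000000 -- the article's $20 million example
```

If 100 of those 1,000 consumers were 60 or older, the theoretical ceiling rises to $23 million under the same arithmetic.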

The Affirmative Defense

The statute provides an affirmative defense under § 6-1-1706(3) for businesses that meet both conditions:

  1. Comply with a nationally or internationally recognized AI risk management framework designated by the act or the Attorney General
  2. Take measures to discover and correct violations of the act

The statute does not name the NIST AI Risk Management Framework specifically — it references "a nationally or internationally recognized risk management framework for artificial intelligence systems." The NIST AI RMF and ISO/IEC 42001 are the most prominent frameworks that fit this description.

Both conditions must be met. Framework compliance alone is not sufficient without the discovery-and-correction mechanism.

What Other States Are Doing

Colorado is not alone, but it is ahead of the curve:

  • Texas: HB 149 (TRAIGA) — signed June 22, 2025, effective January 1, 2026. Covers all AI deployers and developers in Texas with no high-risk carveout.
  • Illinois: Public Act 103-0804 — effective January 1, 2026. Amends the Illinois Human Rights Act to prohibit discriminatory AI in employment and require employee notification.
  • California: CPPA ADMT regulations — risk assessments required January 1, 2026. Consumer ADMT notice and opt-out requirements begin January 1, 2027.

What Employers Should Do Now

Stop waiting for the legislature. No bill to amend Part 17 has been introduced in the 2026 session. The General Assembly adjourns in 43 days. The legislative record shows a consistent inability to muster the votes to revise this law. Do not build your compliance plan around the assumption of legislative relief.

The statute is self-executing. The AG holds exclusive enforcement authority (§ 6-1-1706(1)) and is not required to complete rulemaking before bringing an enforcement action. The law's requirements take effect June 30, 2026, with or without implementing rules in place.

Prioritize the affirmative defense. Adopt the NIST AI RMF or ISO/IEC 42001 (or both), document your compliance, and build internal discovery-and-correction mechanisms. This is the closest thing the statute offers to a safe harbor.

Our Colorado SB 24-205 Compliance Package includes all 8 required documents — risk management policy, impact assessment, consumer notice, developer disclosure, and more. Built from the enacted statute text, not summaries. $449, instant download.


What Is an Affirmative Defense and Why Does It Matter Here?

Imagine you're playing a game and someone accuses you of breaking a rule. Normally, you'd just say 'I didn't do it' and try to prove you're innocent. But what if the game had a special rule that said: 'If you were following the official playbook, the other person has to prove you cheated — not the other way around.'

That's basically what an affirmative defense is. It doesn't mean you can't get in trouble at all. It means that if you followed a specific set of steps, the burden flips — and the person accusing you has a much harder job proving their case.

In Colorado's AI law, the affirmative defense works like this: if your business (1) follows a nationally or internationally recognized AI risk management framework, AND (2) takes genuine steps to find and fix any problems with your AI systems, the law gives you a legal shield. You're not automatically off the hook, but you get a 'rebuttable presumption' that you used reasonable care. That means anyone trying to take legal action against you starts at a disadvantage because they have to overcome the assumption that you did the right thing.

The statute does not name the NIST AI Risk Management Framework specifically — it says 'a nationally or internationally recognized risk management framework for artificial intelligence systems.' NIST AI RMF and ISO/IEC 42001 are the most prominent frameworks that fit this description. But the affirmative defense requires BOTH conditions — framework compliance alone is not enough without the discovery-and-correction mechanism.

The practical takeaway: building your compliance program around a recognized framework isn't just good practice — it's a legal strategy that gives you real protection under C.R.S. § 6-1-1706(3) if something goes wrong.


Disclaimer: This article is for informational purposes only and does not constitute legal advice, legal representation, or an attorney-client relationship. Laws and regulations change frequently. You should consult a licensed attorney to verify that the information in this article is current, complete, and applicable to your specific situation before relying on it. AI Compliance Documents is not a law firm and does not practice law.

More from the blog

Mobley v. Workday · AI hiring

Workday AI Hiring Lawsuit Could Reshape Employer Liability

A federal court is testing whether AI vendors — not just employers — can be sued for discriminatory hiring outcomes. The certified class could include hundreds of millions of applicants.

13 min read
Oregon · CPA

Oregon Consumer Privacy Act: What Your Business Needs to Know About AI Profiling Requirements

Oregon's privacy law has been in effect since July 2024, requires data protection assessments for AI profiling, and flatly prohibits processing personal data of consumers under 16 for targeted advertising or data sales — a protection not found in most other state laws. The 30-day cure period effectively expired for most businesses on January 1, 2026 (Oregon Laws 2025, c.417).

14 min read
AI impact assessment · risk assessment

What Is an AI Impact Assessment? The Document Every State Law Now Requires

Colorado, California, and Illinois all require some version of an AI impact assessment — but they don't call it the same thing or require the same format. Here's what every version has in common, and what each state specifically demands.

16 min read
Healthcare · HIPAA

You're HIPAA-Compliant. That's Not Enough Anymore.

HIPAA protects patient records. It has nothing to say about whether the AI making decisions about those patients is fair. New rules are filling that gap — and they apply to you even if your HIPAA program is airtight.

14 min read
HIPAA · healthcare

AI and HIPAA: What Healthcare Businesses Must Do Now

If an AI tool touches patient data at your healthcare organization, HIPAA applies — and most vendor contracts aren't written to cover it. Here's what you need before you deploy.

22 min read
bias audit · NYC Local Law 144

What Is an AI Bias Audit and Does Your Business Need One?

New York City requires an annual test of any AI hiring tool to check whether it's filtering out one group of people more than others. If you hire in NYC, this isn't optional — here's what the audit actually involves.

21 min read

Get your compliance documentation done

Stop reading, start complying. Our packages generate the documents you need based on the actual statutes.

Browse Compliance Packages