Colorado's AI Law (SB 24-205): What It Requires and When It Takes Effect
Colorado · SB 24-205 · AI regulation · algorithmic discrimination · high-risk AI


AI Compliance Documents Team · 15 min read

Two-Sentence Summary

Colorado passed a law that creates specific rules for AI systems used to make important decisions about people's lives — like hiring, lending, insurance, housing, and healthcare. The law requires both the companies that build AI tools and the companies that use them to document risks, notify consumers, offer appeals, and report any discrimination to the Attorney General, with compliance required by June 30, 2026.

Colorado passed a law in 2024 that sets specific rules for how artificial intelligence can be used when the decisions it helps make affect important areas of people's lives — things like whether someone gets a loan, gets hired, gets insured, or gets access to housing, healthcare, or education.

The law is SB 24-205, officially titled the Consumer Protections for Artificial Intelligence Act. Governor Jared Polis signed it on May 17, 2024. It was originally set to take effect on February 1, 2026, but the legislature delayed that date during a special session in August 2025. The new effective date is June 30, 2026. (SB24-205) (SB25B-004)

If you develop or use AI systems that play a role in those kinds of decisions, this law applies to you. And it's structured differently from most other state AI and privacy laws, so even if you're familiar with what California or Virginia or Illinois have done, Colorado's approach is worth understanding on its own terms.

Why the Effective Date Changed

This is important context, because a lot of what's been written about this law references the original February 2026 date.

In August 2025, Colorado held a special legislative session to revisit SB 24-205. Lawmakers had been working on substantive revisions to the law, but they weren't able to reach agreement on the changes. Instead of rushing through amendments that hadn't been fully worked out, they passed SB 25B-004, which extends the implementation deadline to June 30, 2026, to allow more time for review. Governor Polis signed it on August 28, 2025.

What this means for you: you have until June 30, 2026 to comply. That's real time to prepare, but it's not infinite — especially because the law's requirements involve building internal programs, not just filling out a form.

What Makes This Law Different: Developers and Deployers

Most state AI laws focus on the business using the AI — the company that deploys it in front of consumers. Colorado does that too, but it also puts obligations on the companies that build and sell AI systems. The law uses two terms: "developer" and "deployer." (SB24-205)

A developer is the person or company that creates or substantially modifies a high-risk AI system. If you build AI tools and sell or license them to other businesses, you're a developer under this law.

A deployer is the person or company that uses a high-risk AI system. If you bought or licensed an AI tool and you're using it to help make decisions about consumers, you're a deployer.

This distinction matters because the law assigns different obligations to each role. If your business both builds AI tools and uses them to make consumer-facing decisions, you could be both.

What's a "High-Risk" AI System?

The law doesn't apply to every AI tool. It applies to "high-risk artificial intelligence systems" — which are AI systems that make, or are a substantial factor in making, what the law calls a "consequential decision." (SB24-205)

A consequential decision is a decision that materially affects a consumer's access to, or the cost or terms of: education or educational opportunities, employment or employment opportunities, financial or lending services, government services, healthcare services, housing, insurance, or legal services.

If an AI system is involved in any of those types of decisions — even if a human makes the final call — it's likely a high-risk system under this law.

Some examples of what that looks like in practice: an AI tool that screens job applicants and ranks them for a recruiter. A lending algorithm that evaluates creditworthiness. An insurance pricing model that uses consumer data to set rates. A healthcare platform that uses AI to prioritize or triage patients. A housing application screening tool that evaluates prospective tenants.

If the AI system is just a chatbot that helps customers navigate your website, or a tool that generates marketing copy, it probably doesn't qualify as high-risk, because it isn't a substantial factor in any consequential decision. The law targets the high-stakes uses, not every use.
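
To make that scoping exercise concrete, here's a minimal sketch in Python of how a business might inventory its systems against the statute's consequential-decision categories. The `AISystem` structure and the category labels are our own illustration, not terms defined by the law:

```python
from dataclasses import dataclass, field

# The eight consequential-decision categories listed in SB 24-205.
CONSEQUENTIAL_CATEGORIES = {
    "education", "employment", "financial_or_lending",
    "government_services", "healthcare", "housing",
    "insurance", "legal_services",
}

@dataclass
class AISystem:
    name: str
    # Decision areas this system makes, or is a substantial factor in making.
    decision_areas: set[str] = field(default_factory=set)

def is_high_risk(system: AISystem) -> bool:
    """Flag a system if any decision area is a consequential category."""
    return bool(system.decision_areas & CONSEQUENTIAL_CATEGORIES)

# Example: a resume screener is in scope; a marketing-copy generator is not.
screener = AISystem("resume-screener", {"employment"})
copywriter = AISystem("ad-copy-generator", {"marketing"})
assert is_high_risk(screener) and not is_high_risk(copywriter)
```

The point isn't the code itself; it's that scoping should be an explicit, reviewable exercise rather than an informal judgment call.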

What Developers Have to Do

If you're a developer of a high-risk AI system, you have several obligations under the law. (SB24-205)

You need to provide deployers with a disclosure statement that includes specific information about your system — what it does, how it works, what its known limitations are, and what data it was trained on. You also need to provide the documentation a deployer would need to complete their own impact assessment.

You need to make a publicly available statement that summarizes the types of high-risk systems you develop or make available, and how you manage the risks of algorithmic discrimination in each of those systems.

And if you discover — or receive a credible report — that your high-risk system has caused or is reasonably likely to have caused algorithmic discrimination, you need to disclose that to the Attorney General and to all known deployers of the system within 90 days.
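That 90-day window is a hard deadline, so it's worth computing rather than eyeballing. A minimal sketch, assuming you log the date each credible report is discovered (the `DiscriminationIncident` record is our own hypothetical, not statutory language):

```python
from dataclasses import dataclass
from datetime import date, timedelta

# SB 24-205 requires disclosure to the Attorney General and all known
# deployers within 90 days of discovering algorithmic discrimination.
DISCLOSURE_WINDOW = timedelta(days=90)

@dataclass
class DiscriminationIncident:
    system_name: str
    discovered_on: date

    @property
    def disclosure_deadline(self) -> date:
        return self.discovered_on + DISCLOSURE_WINDOW

incident = DiscriminationIncident("resume-screener", date(2026, 7, 15))
print(incident.disclosure_deadline)  # 2026-10-13
```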

The law creates a rebuttable presumption that you used reasonable care if you complied with all of these requirements. That's a legal term that essentially means: if you did everything the law asks, the burden shifts to the other side to prove you didn't act reasonably. It's a meaningful protection for developers who follow the rules.

What Deployers Have to Do

If you're a deployer — meaning you're using a high-risk AI system in your business — your obligations are more extensive. (SB24-205)

Risk management policy and program. You need to implement a formal risk management policy for your high-risk AI systems. This isn't a one-time document — it's a program, meaning it needs to be maintained and updated over time.

Impact assessment. You need to complete an impact assessment for each high-risk system you deploy. The impact assessment evaluates the potential risks of algorithmic discrimination and the steps you're taking to mitigate them.

Annual review. Each year, you need to review each deployed high-risk system to confirm it isn't causing algorithmic discrimination. This means ongoing monitoring, not just an assessment at deployment time.

Consumer notice. If a high-risk system makes or substantially factors into a consequential decision about a consumer, you need to tell that consumer. The notice needs to be given before or at the time the decision is made. Our consumer notice kit includes the notice templates and delivery documentation the law requires for deployers.

Right to correct. Consumers must have the opportunity to correct any incorrect personal data that the AI system used in making a consequential decision about them.

Right to appeal. Consumers must have the opportunity to appeal an adverse consequential decision, with human review if that's technically feasible.

Public statement. Like developers, deployers need to publish a publicly available statement summarizing their high-risk systems, how they manage algorithmic discrimination risks, and the nature and extent of the data they collect and use.

Disclosure to the AG. If you discover that a high-risk system has caused algorithmic discrimination, you must report it to the Attorney General within 90 days.

That's a substantial list. The law is asking deployers to build a compliance infrastructure, not just fill out a form. The risk management program, the impact assessments, the annual reviews, the consumer notice and appeal processes — these are ongoing obligations that need to be part of how you operate.
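
To make the "program, not a form" point concrete, here's a minimal sketch of the kind of per-system record a deployer might maintain to track these obligations. The field names are our own illustration; the statute and any future AG rules govern what the actual documentation must contain:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class DeployerRecord:
    system_name: str
    impact_assessment_completed: date | None = None  # per-system assessment
    last_annual_review: date | None = None           # reviewed at least yearly
    consumer_notice_in_place: bool = False           # notice before/at decision
    appeal_process_in_place: bool = False            # adverse-decision appeals

    def review_overdue(self, today: date) -> bool:
        """True if there's no review on record or the last one is stale."""
        if self.last_annual_review is None:
            return True
        return today - self.last_annual_review > timedelta(days=365)

    def open_gaps(self, today: date) -> list[str]:
        """List the obligations that still need attention for this system."""
        gaps = []
        if self.impact_assessment_completed is None:
            gaps.append("impact assessment")
        if self.review_overdue(today):
            gaps.append("annual review")
        if not self.consumer_notice_in_place:
            gaps.append("consumer notice")
        if not self.appeal_process_in_place:
            gaps.append("appeal process")
        return gaps

record = DeployerRecord("tenant-screening-tool",
                        impact_assessment_completed=date(2026, 5, 1))
print(record.open_gaps(date(2026, 7, 1)))
# ['annual review', 'consumer notice', 'appeal process']
```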

The Affirmative Defense

Here's something about Colorado's law that's genuinely helpful: it provides an affirmative defense for developers, deployers, and other parties. (SB24-205)

If you comply with a nationally or internationally recognized risk management framework for artificial intelligence — and you take measures to discover and correct violations — you have a legal defense against claims under this law. The most prominent framework that fits this description is the NIST AI Risk Management Framework, which is voluntary but widely referenced as a best practice. If you build your compliance program around an established framework like NIST AI RMF, you're in a stronger legal position. Our NIST AI RMF implementation package turns the 100-page framework into the specific policies and documentation that satisfy Colorado's affirmative defense standard.

This is the legislature's way of rewarding businesses that take a structured, good-faith approach to AI risk management. It doesn't make you immune from enforcement, but it gives you meaningful legal protection.

AI Interaction Disclosure

Separate from the high-risk system requirements, the law has a broader provision that applies to anyone doing business in Colorado: if you deploy or make available an AI system that is intended to interact with consumers, you must disclose to each consumer that they are interacting with an AI system. (SB24-205)

This one is straightforward. If you have a chatbot, a virtual assistant, or any other AI-powered interface that consumers interact with, you need to tell them it's AI. This requirement isn't limited to high-risk systems — it applies to all consumer-facing AI interactions.
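
As an illustration only (the law mandates the disclosure, not any particular implementation), a chatbot can meet this by opening every consumer session with a plain-language statement:

```python
# Hypothetical disclosure text; the statute requires disclosing that the
# consumer is interacting with an AI system, not this exact wording.
AI_DISCLOSURE = "You're chatting with an automated AI assistant, not a human."

def start_session(greeting: str) -> list[str]:
    """Make the AI disclosure the first thing every consumer sees."""
    return [AI_DISCLOSURE, greeting]

print("\n".join(start_session("How can I help you today?")))
```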

Exemptions for Insurance and Banking

Two industries have specific exemptions built into the law. (SB24-205)

Insurers, fraternal benefit societies, and developers of AI systems used by insurers are considered in full compliance if they're already subject to Colorado's existing laws governing insurers' use of external consumer data, algorithms, and predictive models — plus any rules adopted by the Commissioner of Insurance.

Banks, credit unions (state-chartered and federal), and their affiliates and subsidiaries are in full compliance if they're subject to examination by a state or federal prudential regulator under published guidance or regulations that apply to high-risk AI systems, provided that guidance meets criteria specified in the law.

If you're in insurance or banking, you may already be covered by existing regulatory frameworks. But it's worth confirming specifically, because the exemptions have conditions attached.

How Enforcement Works

The Colorado Attorney General has exclusive authority to enforce this law and to make rules for its implementation. A violation of SB 24-205 is treated as a deceptive trade practice under the Colorado Consumer Protection Act. (SB24-205)

The Colorado Consumer Protection Act provides for injunctive relief, civil penalties, and the recovery of costs and attorney fees. The penalties can be significant, particularly for repeat violations or violations that affect a large number of consumers.

The AG also has rulemaking authority, which means additional detailed requirements could be coming. As of now, the statute itself is the primary source of obligations, but it's worth watching for AG rulemaking activity as the June 2026 effective date approaches.

What "Algorithmic Discrimination" Means

The core concern of this law is algorithmic discrimination. That's defined as a condition where an AI system contributes to differential treatment or impact that disfavors a person or class of people based on protected characteristics. (SB24-205)

Protected characteristics under Colorado law include race, color, ethnicity, sex, religion, national origin, age, disability, veteran status, sexual orientation, gender identity, and other categories recognized under state or federal anti-discrimination law.

The law isn't saying businesses can't use AI. It's saying that if you use AI in high-stakes decisions, you have a responsibility to make sure it isn't discriminating against people. And the way you demonstrate that responsibility is through documented policies, assessments, monitoring, and transparency.

That's a fair ask. AI systems can produce discriminatory outcomes even when no one intended them to — often because of biases in training data, flawed assumptions in model design, or gaps in testing. The law is designed to make sure businesses catch and address those problems, rather than discovering them after the harm is done.
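
The statute doesn't prescribe a testing method, but one common screening heuristic for differential impact is the "four-fifths rule" borrowed from US employment-selection guidance: compare each group's favorable-outcome rate against the highest group's rate and flag ratios below 0.8. A minimal sketch, purely illustrative and not a substitute for the legal standard:

```python
def adverse_impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (favorable_decisions, total_decisions).
    Returns each group's selection rate divided by the highest group's rate.
    Ratios below 0.8 are a conventional red flag (the "four-fifths rule")."""
    rates = {g: fav / total for g, (fav, total) in outcomes.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Example: group B's ratio is 0.625, below 0.8, so investigate further.
print(adverse_impact_ratios({"A": (80, 100), "B": (50, 100)}))
```

A ratio below 0.8 doesn't prove discrimination, and a ratio above it doesn't rule it out; it's a signal to dig into the data and the model.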

Where to Start

If your business develops or uses high-risk AI systems as defined by this law, here's a practical starting point.

Identify which of your AI systems qualify as high-risk. Map each system to the "consequential decision" categories in the law. If a system is involved in decisions about employment, lending, insurance, housing, healthcare, education, or legal services, it's in scope.

If you're a deployer, start building your risk management program. This is the foundation everything else rests on. It should include your policies for evaluating AI systems before deployment, your process for completing impact assessments, and your plan for annual reviews.

If you're a developer, prepare the documentation that deployers will need. The law requires you to give deployers the information and tools they need to complete their own impact assessments. Start thinking about what that documentation looks like for your systems.

Look into established risk management frameworks. The NIST AI Risk Management Framework is a good starting point, and using a recognized framework gives you the benefit of the law's affirmative defense.

Plan your consumer notice and appeal processes. These need to be in place by June 30, 2026. They don't need to be complicated, but they do need to exist and they need to work.
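
For the appeal side, here's a minimal sketch of an intake record, with all names our own invention (the statute requires the opportunity to appeal, with human review where technically feasible, but no particular data model):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AppealRequest:
    consumer_id: str
    system_name: str
    decision_date: date
    received_on: date
    human_review_feasible: bool = True  # human review required where feasible
    reviewer: str | None = None         # assigned human reviewer, if any
    corrected_data: bool = False        # consumer exercised right to correct

    def needs_human_reviewer(self) -> bool:
        return self.human_review_feasible and self.reviewer is None
```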

The effective date is June 30, 2026. That's roughly three and a half months from now. If you haven't started, this is the time.


Sources: the enacted statute texts of SB24-205 and SB25B-004, published by the Colorado General Assembly.

What Is an Affirmative Defense?
Imagine you're playing a game at recess and someone accuses you of breaking a rule. Normally, you'd just say 'No I didn't' and try to prove you're innocent. But what if the game had a special rule: if you were wearing the official jersey and following the referee's playbook, the other team has to prove you cheated, not the other way around. That's basically what an affirmative defense is. It doesn't mean you can't get in trouble at all. It means that if you followed a specific set of steps, the accusing side has a much harder job proving its case.

Colorado's AI law actually has two related protections, and it helps to keep them straight. First, if a developer or deployer complies with the law's core duties, they get a 'rebuttable presumption' that they used reasonable care. That means anyone taking legal action starts at a disadvantage, because they have to overcome the assumption that you did the right thing. Second, and separately, if your business follows a recognized AI risk management framework, like the one published by the National Institute of Standards and Technology (NIST), and you also take genuine steps to find and fix any problems with your AI systems, the law gives you an affirmative defense: a legal shield against claims. You're not automatically off the hook, but you're in a far stronger position.

This is a big deal because most AI laws just tell you what you can't do, and then punish you if you mess up. Colorado's law actually tells you what you can do to protect yourself. It's like the difference between a teacher who only marks wrong answers and one who says 'follow this study guide, and you'll get extra credit on the test.' It rewards businesses that take AI safety seriously rather than just punishing the ones that don't.

The practical takeaway is this: if you're a business covered by Colorado's law, building your compliance program around an established framework like NIST's AI Risk Management Framework isn't just good practice. It's a legal strategy that gives you real protection if something goes wrong.

Disclaimer: This article is for informational purposes only and does not constitute legal advice, legal representation, or an attorney-client relationship. Laws and regulations change frequently. You should consult a licensed attorney to verify that the information in this article is current, complete, and applicable to your specific situation before relying on it. AI Compliance Documents is not a law firm and does not practice law.


Get your compliance documentation done

Stop reading, start complying. Our packages generate the documents you need based on the actual statutes.

Browse Compliance Packages