
Colorado's AI Law (SB 24-205): What It Requires and When It Takes Effect
Two-Sentence Summary
Colorado passed a law that creates specific rules for AI systems used to make important decisions about people's lives — like hiring, lending, insurance, housing, and healthcare. The law requires both the companies that build AI tools and the companies that use them to document risks, notify consumers, offer appeals, and report any discrimination to the Attorney General, with compliance required by June 30, 2026.
Colorado passed a law in 2024 that sets specific rules for how artificial intelligence can be used when the decisions it helps make affect important areas of people's lives — things like whether someone gets a loan, gets hired, gets insured, or gets access to housing, healthcare, or education.
The law is SB 24-205, officially titled the Consumer Protections for Artificial Intelligence Act. Governor Jared Polis signed it on May 17, 2024. It was originally set to take effect on February 1, 2026, but the legislature delayed that date during a special session in August 2025. The new effective date is June 30, 2026. (SB24-205) (SB25B-004)
If you develop or use AI systems that play a role in those kinds of decisions, this law applies to you. And it's structured differently from most other state AI and privacy laws, so even if you're familiar with what California or Virginia or Illinois have done, Colorado's approach is worth understanding on its own terms.
Why did Colorado SB 24-205's effective date change from February 1, 2026 to June 30, 2026?
During an August 2025 special session, Colorado lawmakers were unable to reach agreement on substantive revisions to SB 24-205. Rather than rush through unresolved amendments, they passed SB 25B-004, signed by Governor Polis on August 28, 2025, which extended the compliance deadline to June 30, 2026. No substantive requirements were modified — only the effective date changed across three sections of the statute.
This is important context, because a lot of what's been written about this law references the original February 2026 date.
In August 2025, Colorado held a special legislative session to revisit SB 24-205. Lawmakers had been working on substantive revisions to the law, but they weren't able to reach agreement on the changes. Instead of rushing through amendments that hadn't been fully worked out, they passed SB 25B-004, which extends the implementation deadline to June 30, 2026 to allow more time for review. Governor Polis signed it on August 28, 2025.
What this means for you: you have until June 30, 2026 to comply. That's real time to prepare, but it's not infinite — especially because the law's requirements involve building internal programs, not just filling out a form.
What makes Colorado SB 24-205 different from other state AI laws in how it handles developers and deployers?
Most state AI laws only regulate businesses that use AI. Colorado SB 24-205 separately obligates both the companies that build high-risk AI systems (developers) and the companies that deploy them in consumer-facing decisions (deployers). A business that builds and uses its own AI system is both simultaneously, with distinct compliance obligations for each role.
Most state AI laws focus on the business using the AI — the company that deploys it in front of consumers. Colorado does that too, but it also puts obligations on the companies that build and sell AI systems. The law uses two terms: "developer" and "deployer." (SB24-205)
A developer is the person or company that creates, or intentionally and substantially modifies, a high-risk AI system. If you build AI tools and sell or license them to other businesses, you're a developer under this law.
A deployer is the person or company that uses a high-risk AI system. If you bought or licensed an AI tool and you're using it to help make decisions about consumers, you're a deployer.
This distinction matters because the law assigns different obligations to each role. If your business both builds AI tools and uses them to make consumer-facing decisions, you could be both.
What counts as a "high-risk" AI system under Colorado SB 24-205?
Under SB 24-205, a high-risk AI system is one that makes or is a substantial factor in a "consequential decision" — any decision materially affecting a consumer's access to or the cost of education, employment, financial or lending services, government services, healthcare, housing, insurance, or legal services. A system that ranks job applicants, prices insurance, evaluates creditworthiness, or triages patients is almost certainly high-risk. A marketing chatbot generally is not.
The law doesn't apply to every AI tool. It applies to "high-risk artificial intelligence systems" — which are AI systems that make, or are a substantial factor in making, what the law calls a "consequential decision." (SB24-205)
A consequential decision is a decision that materially affects a consumer's access to, or the cost or terms of: education or educational opportunities, employment or employment opportunities, financial or lending services, government services, healthcare services, housing, insurance, or legal services.
If an AI system is involved in any of those types of decisions — even if a human makes the final call — it's likely a high-risk system under this law.
Some examples of what that looks like in practice: an AI tool that screens job applicants and ranks them for a recruiter. A lending algorithm that evaluates creditworthiness. An insurance pricing model that uses consumer data to set rates. A healthcare platform that uses AI to prioritize or triage patients. A housing application screening tool that evaluates prospective tenants.
If the AI system is just a chatbot that helps customers navigate your website, or a tool that generates marketing copy, those probably don't qualify as high-risk — because they aren't substantially factoring into consequential decisions. The law targets the high-stakes uses, not every use.
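If you want to turn that screening question into something you can run against an AI inventory, here is a minimal Python sketch. The category names paraphrase the statute's consequential-decision areas; the example systems and the substantial_factor flag are hypothetical illustrations, not statutory terms.

# Minimal sketch: a system is in scope if it makes, or is a substantial
# factor in, a decision touching a consequential-decision category.
# Category names paraphrase SB 24-205; everything else is hypothetical.
CONSEQUENTIAL_CATEGORIES = {
    "education", "employment", "financial_or_lending", "government_services",
    "healthcare", "housing", "insurance", "legal_services",
}

def is_high_risk(decision_categories: set[str], substantial_factor: bool) -> bool:
    return substantial_factor and bool(decision_categories & CONSEQUENTIAL_CATEGORIES)

print(is_high_risk({"employment"}, substantial_factor=True))    # True: resume screener
print(is_high_risk({"marketing"}, substantial_factor=False))    # False: copy generator

The point of the sketch is the decision rule, not the code: two conditions, both required. The system must touch a statutory category and play a substantial role in the decision.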
What does Colorado SB 24-205 require AI developers to do before deployers can use their systems?
Developers under SB 24-205 must provide each deployer with a disclosure statement covering what the system does, how it works, its known limitations, and the documentation needed for the deployer's own impact assessment. Developers must also publish a public statement on algorithmic discrimination risk management and disclose to the Attorney General and all known deployers within 90 days if the system has caused or is reasonably likely to have caused algorithmic discrimination.
If you're a developer of a high-risk AI system, you have several obligations under the law. (SB24-205)
You need to provide deployers with a disclosure statement that includes specific information about your system — what it does, how it works, what its known limitations are, and what data it was trained on. You also need to provide the documentation a deployer would need to complete their own impact assessment.
You need to make a publicly available statement that summarizes the types of high-risk systems you develop or make available, and how you manage the risks of algorithmic discrimination in each of those systems.
And if you discover — or receive a credible report — that your high-risk system has caused or is reasonably likely to have caused algorithmic discrimination, you need to disclose that to the Attorney General and to all known deployers of the system within 90 days.
The law creates a rebuttable presumption that you used reasonable care if you complied with all of these requirements. That's a legal term that essentially means: if you did everything the law asks, the burden shifts to the other side to prove you didn't act reasonably. It's a meaningful protection for developers who follow the rules.
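One way to operationalize the disclosure-statement duty is a completeness check that runs before anything ships to a deployer. The sketch below is our own framing; the field names paraphrase the statute's disclosure topics and are not statutory terms.

from dataclasses import dataclass, fields

@dataclass
class DeveloperDisclosure:
    # Field names are ours; they paraphrase the required disclosure topics.
    intended_use: str = ""
    how_it_works: str = ""
    known_limitations: str = ""
    training_data_summary: str = ""
    impact_assessment_docs: str = ""   # what the deployer needs for its own assessment

def missing_fields(d: DeveloperDisclosure) -> list[str]:
    # Names of disclosure topics still left blank.
    return [f.name for f in fields(d) if not getattr(d, f.name).strip()]

pkg = DeveloperDisclosure(intended_use="Resume ranking for recruiters")  # hypothetical
print(missing_fields(pkg))   # the four topics still owed to the deployer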
What does Colorado SB 24-205 require businesses that deploy high-risk AI systems to do?
Deployers under SB 24-205 must implement a formal risk management program, complete a documented impact assessment for each high-risk system before deployment and annually thereafter, and notify consumers before AI makes or substantially factors into a consequential decision about them. They must also provide rights to correct inaccurate data and to appeal adverse decisions with human review, publish a public statement on their AI risk management approach, and disclose discovered algorithmic discrimination to the Attorney General within 90 days.
If you're a deployer — meaning you're using a high-risk AI system in your business — your obligations are more extensive. (SB24-205)
Risk management policy and program. You need to implement a formal risk management policy for your high-risk AI systems. This isn't a one-time document — it's a program, meaning it needs to be maintained and updated over time.
Impact assessment. You need to complete an impact assessment for each high-risk system you deploy. The impact assessment evaluates the potential risks of algorithmic discrimination and the steps you're taking to mitigate them.
Annual review. Each year, you need to review each deployed high-risk system to confirm it isn't causing algorithmic discrimination. This means ongoing monitoring, not just an assessment at deployment time.
Consumer notice. If a high-risk system makes or substantially factors into a consequential decision about a consumer, you need to tell that consumer. The notice needs to be given before or at the time the decision is made. Our consumer notice kit includes the notice templates and delivery documentation the law requires for deployers.
Right to correct. Consumers must have the opportunity to correct any incorrect personal data that the AI system used in making a consequential decision about them.
Right to appeal. Consumers must have the opportunity to appeal an adverse consequential decision, with human review if that's technically feasible.
Public statement. Like developers, deployers need to publish a publicly available statement summarizing their high-risk systems, how they manage algorithmic discrimination risks, and the nature and extent of the data they collect and use.
Disclosure to the AG. If you discover that a high-risk system has caused algorithmic discrimination, you must report it to the Attorney General within 90 days.
That's a substantial list. The law is asking deployers to build a compliance infrastructure, not just fill out a form. The risk management program, the impact assessments, the annual reviews, the consumer notice and appeal processes — these are ongoing obligations that need to be part of how you operate.
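Because the notice obligation has a timing component (before or at the time of the decision), it is easy to violate by accident in an automated pipeline. Here is a minimal sketch of one way to enforce the ordering in code; the function names and the in-memory log are hypothetical placeholders for whatever your real system uses.

from datetime import datetime, timezone

notice_log: dict[str, datetime] = {}   # consumer_id -> when notice was delivered

def record_notice(consumer_id: str) -> None:
    notice_log[consumer_id] = datetime.now(timezone.utc)

def finalize_decision(consumer_id: str, decision: str) -> str:
    # Refuse to finalize a consequential decision until notice is on record.
    if consumer_id not in notice_log:
        raise RuntimeError(f"no consumer notice recorded for {consumer_id}")
    return f"{consumer_id}: {decision} (notice sent {notice_log[consumer_id]:%Y-%m-%d})"

record_notice("applicant-123")   # hypothetical consumer
print(finalize_decision("applicant-123", "approved"))

The design choice worth copying is the hard stop: the pipeline cannot produce a decision for a consumer with no notice on record, so the timing rule is enforced structurally rather than by policy.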
What is the affirmative defense in Colorado SB 24-205 and how does it protect businesses?
SB 24-205 provides an affirmative defense for developers and deployers who (1) comply with a nationally or internationally recognized AI risk management framework and (2) discover and cure violations through feedback, adversarial testing, or internal review. A business that proves both defeats the claim. This defense is distinct from the law's rebuttable presumption of reasonable care, which is earned by meeting the statute's core requirements. The statute names the NIST AI Risk Management Framework as a qualifying framework.
Here's something about Colorado's law that's genuinely helpful: it provides an affirmative defense for developers, deployers, and other parties. (SB24-205)
If you comply with a nationally or internationally recognized risk management framework for artificial intelligence — and you take measures to discover and correct violations — you have a legal defense against claims under this law. The statute specifically names the NIST AI Risk Management Framework, which is voluntary but widely referenced as a best practice, as one qualifying framework. If you build your compliance program around an established framework like the NIST AI RMF, you're in a stronger legal position. Our NIST AI RMF implementation package turns the 100-page framework into the specific policies and documentation that satisfy Colorado's affirmative defense standard.
This is the legislature's way of rewarding businesses that take a structured, good-faith approach to AI risk management. It doesn't make you immune from enforcement, but it gives you meaningful legal protection.
Does Colorado SB 24-205 require businesses to disclose when consumers are interacting with an AI system?
Yes — SB 24-205 contains a separate provision requiring any business operating in Colorado to disclose to consumers that they are interacting with an AI system, whenever an AI system is deployed or made available to interact with them. This requirement is not limited to high-risk systems: it applies to consumer-facing AI interactions generally, including chatbots and virtual assistants, unless it would be obvious to a reasonable person that they are interacting with an AI.
Separate from the high-risk system requirements, the law has a broader provision that applies to anyone doing business in Colorado: if you deploy or make available an AI system that is intended to interact with consumers, you must disclose to each consumer that they are interacting with an AI system. (SB24-205)
This one is straightforward. If you have a chatbot, a virtual assistant, or any other AI-powered interface that consumers interact with, you need to tell them it's AI. The requirement isn't limited to high-risk systems. The one carve-out: disclosure isn't required when it would be obvious to a reasonable person that they're talking to an AI.
Are insurance companies and banks exempt from Colorado SB 24-205's AI requirements?
SB 24-205 includes conditional exemptions for two industries. Insurers already subject to Colorado's laws governing the use of external consumer data, algorithms, and predictive models are deemed in full compliance, and so are banks, credit unions, and their affiliates that are subject to examination by a state or federal prudential regulator under qualifying published guidance on high-risk AI systems. Both exemptions have conditions attached, so confirm that they apply to your situation before relying on them.
Two industries have specific exemptions built into the law. (SB24-205)
Insurers, fraternal benefit societies, and developers of AI systems used by insurers are considered in full compliance if they're already subject to Colorado's existing laws governing insurers' use of external consumer data, algorithms, and predictive models — plus any rules adopted by the Commissioner of Insurance.
Banks, credit unions (state-chartered and federal), and their affiliates and subsidiaries are in full compliance if they're subject to examination by a state or federal prudential regulator under published guidance or regulations that apply to high-risk AI systems, provided that guidance meets criteria specified in the law.
If you're in insurance or banking, you may already be covered by existing regulatory frameworks. But it's worth confirming specifically, because the exemptions have conditions attached.
How is Colorado SB 24-205 enforced, and what are the penalties for violations?
The Colorado Attorney General has exclusive enforcement authority under SB 24-205 and can bring civil actions without completing rulemaking first. Violations are treated as deceptive trade practices under the Colorado Consumer Protection Act, exposing businesses to civil penalties up to $20,000 per violation (up to $50,000 when the consumer is age 60 or older), injunctive relief, and recovery of costs and attorney fees. Each violation is assessed separately per consumer affected.
The Colorado Attorney General has exclusive authority to enforce this law and to make rules for its implementation. A violation of SB 24-205 is treated as a deceptive trade practice under the Colorado Consumer Protection Act. (SB24-205)
The Colorado Consumer Protection Act provides for injunctive relief, civil penalties, and the recovery of costs and attorney fees. The penalties can be significant, particularly for repeat violations or violations that affect a large number of consumers.
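To see why per-violation penalties compound, here is back-of-the-envelope math in Python, assuming, as noted above, that each affected consumer can count as a separate violation. The consumer counts are hypothetical.

PENALTY_CAP = 20_000           # per violation under the CCPA
PENALTY_CAP_ELDERLY = 50_000   # when the consumer is age 60 or older

affected = 1_000               # hypothetical consumers affected
elderly = 50                   # hypothetical subset age 60+

exposure = (affected - elderly) * PENALTY_CAP + elderly * PENALTY_CAP_ELDERLY
print(f"maximum exposure: ${exposure:,}")   # maximum exposure: $21,500,000

These are statutory maximums, not guaranteed awards, but the arithmetic shows how one flawed system applied to a modest consumer base reaches eight figures.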
The AG also has rulemaking authority, which means additional detailed requirements could be coming. As of now, the statute itself is the primary source of obligations, but it's worth watching for AG rulemaking activity as the June 2026 effective date approaches.
What does "algorithmic discrimination" mean under Colorado SB 24-205?
Under SB 24-205, algorithmic discrimination is a condition where an AI system contributes to differential treatment or impact that disfavors a person or class based on protected characteristics — including race, color, ethnicity, sex, religion, national origin, age, disability, veteran status, sexual orientation, and gender identity. The law targets unintended discriminatory outcomes from biased training data or flawed model design, not just intentional discrimination.
The core concern of this law is algorithmic discrimination. That's defined as a condition where an AI system contributes to differential treatment or impact that disfavors a person or class of people based on protected characteristics. (SB24-205)
Protected characteristics under Colorado law include race, color, ethnicity, sex, religion, national origin, age, disability, veteran status, sexual orientation, gender identity, and other categories recognized under state or federal anti-discrimination law.
The law isn't saying businesses can't use AI. It's saying that if you use AI in high-stakes decisions, you have a responsibility to make sure it isn't discriminating against people. And the way you demonstrate that responsibility is through documented policies, assessments, monitoring, and transparency.
That's a fair ask. AI systems can produce discriminatory outcomes even when no one intended them to — often because of biases in training data, flawed assumptions in model design, or gaps in testing. The law is designed to make sure businesses catch and address those problems, rather than discovering them after the harm is done.
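The statute does not prescribe a statistical test for algorithmic discrimination, but a common first screen, borrowed from employment-law practice, is the four-fifths rule: compare each group's favorable-outcome rate to the most-favored group's rate and flag ratios below 0.8 for review. A sketch with made-up numbers:

# Four-fifths rule screen: a common heuristic, not a statutory test.
outcomes = {
    "group_a": {"selected": 120, "total": 400},   # 30% selection rate
    "group_b": {"selected": 60,  "total": 300},   # 20% selection rate
}

rates = {g: v["selected"] / v["total"] for g, v in outcomes.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.0%}  impact_ratio={ratio:.2f}  -> {flag}")

A flagged ratio is a starting point for investigation, not proof of discrimination, and it is the kind of evidence an annual review under this law would document either way.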
Where should businesses start to comply with Colorado SB 24-205 before the June 30, 2026 deadline?
Start by mapping each AI system your business builds or uses to the consequential decision categories in SB 24-205 to identify which are high-risk. Deployers should then build a risk management program as the foundation for impact assessments and annual reviews. Developers should prepare the disclosure package deployer-customers will need. All parties should align with a recognized risk management framework to qualify for the affirmative defense before June 30.
If your business develops or uses high-risk AI systems as defined by this law, here's a practical starting point.
Identify which of your AI systems qualify as high-risk. Map each system to the "consequential decision" categories in the law. If a system is involved in decisions about employment, lending, insurance, housing, healthcare, education, or legal services, it's in scope.
If you're a deployer, start building your risk management program. This is the foundation everything else rests on. It should include your policies for evaluating AI systems before deployment, your process for completing impact assessments, and your plan for annual reviews.
If you're a developer, prepare the documentation that deployers will need. The law requires you to give deployers the information and tools they need to complete their own impact assessments. Start thinking about what that documentation looks like for your systems.
Look into established risk management frameworks. The NIST AI Risk Management Framework is a good starting point, and using a recognized framework gives you the benefit of the law's affirmative defense.
Plan your consumer notice and appeal processes. These need to be in place by June 30, 2026. They don't need to be complicated, but they do need to exist and they need to work.
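If it helps to track those steps per system, here is a minimal status record you could adapt. The deadline is the statutory date; the field names mirror the obligations above, and everything else is a placeholder.

from dataclasses import dataclass
from datetime import date

DEADLINE = date(2026, 6, 30)   # SB 24-205 effective date

@dataclass
class SystemCompliance:
    name: str
    high_risk: bool
    impact_assessment_done: bool = False
    notice_process_live: bool = False
    appeal_process_live: bool = False

    def open_items(self) -> list[str]:
        if not self.high_risk:
            return []
        todo = {
            "impact assessment": self.impact_assessment_done,
            "consumer notice process": self.notice_process_live,
            "appeal process": self.appeal_process_live,
        }
        return [item for item, done in todo.items() if not done]

screener = SystemCompliance("resume screener", high_risk=True)   # hypothetical
print(f"{(DEADLINE - date.today()).days} days left; open: {screener.open_items()}")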
The effective date is June 30, 2026. That's roughly three and a half months from now. If you haven't started, this is the time.
Sources — Every fact in this article was verified against the enacted statute text at these .gov URLs:
- Colorado General Assembly — SB24-205 Consumer Protections for Artificial Intelligence — Full bill summary including developer/deployer obligations, definitions, affirmative defense, exemptions, and enforcement. Signed May 17, 2024.
- Colorado General Assembly — SB25B-004 Increase Transparency for Algorithmic Systems — Extends effective date to June 30, 2026. Passed during 2025 Extraordinary Session. Signed August 28, 2025.
What Is an Affirmative Defense?
Imagine you're playing a game at recess and someone accuses you of breaking a rule. Normally, you'd have to argue about what happened. But what if the game had a special rule that said: 'If you can show you were wearing the official jersey and following the referee's playbook, you win the argument, even if the other kid still insists you cheated.' That's basically what an affirmative defense is. The burden is on you to prove you followed those specific steps, but if you do, the accusation fails.
In Colorado's AI law, the affirmative defense works like this: if your business follows a recognized AI risk management framework — like the one published by the National Institute of Standards and Technology (NIST) — and you genuinely discover and fix problems with your AI systems, the law gives you a legal shield in an enforcement action. You carry the burden of proving you did those things, but if you prove them, the claim against you fails. That's separate from the law's 'rebuttable presumption' of reasonable care, which you earn by meeting the statute's core requirements and which forces the other side to overcome the assumption that you acted responsibly.
This is a big deal because most AI laws just tell you what you can't do — and then punish you if you mess up. Colorado's law actually tells you what you can do to protect yourself. It's like the difference between a teacher who only marks wrong answers versus one who says 'follow this study guide, and you'll get extra credit on the test.' It rewards businesses that take AI safety seriously rather than just punishing the ones that don't.
The practical takeaway is this: if you're a business covered by Colorado's law, building your compliance program around an established framework like NIST's AI Risk Management Framework isn't just good practice — it's a legal strategy that gives you real protection if something goes wrong.
Disclaimer: This article is for informational purposes only and does not constitute legal advice, legal representation, or an attorney-client relationship. Laws and regulations change frequently. You should consult a licensed attorney to verify that the information in this article is current, complete, and applicable to your specific situation before relying on it. AI Compliance Documents is not a law firm and does not practice law.