
The NIST AI Risk Management Framework: What It Is and Why Colorado Made It a Legal Shield
Two-Sentence Summary
The NIST AI Risk Management Framework is a voluntary 48-page document published by the US government in January 2023 that organizes AI governance into four functions: Govern, Map, Measure, and Manage. Colorado's AI law gives businesses that follow a framework like this one an affirmative defense: if something goes wrong with your AI system, you can defend against enforcement by proving you complied with the framework and discovered and cured the violation.
You've probably heard someone say "just follow NIST" when the topic of AI compliance comes up. That advice is everywhere right now — in legal memos, vendor pitches, state legislative hearings, procurement checklists. But most of the people saying it can't tell you what the framework actually contains, what following it looks like in practice, or why a Colorado law made it a statutory legal defense.
This article is about what the NIST AI RMF actually is, what its four functions require you to do, and why it now matters beyond voluntary good practice. If you're trying to understand whether your AI governance program is defensible — legally and operationally — this is the foundation.
What the AI RMF Is
The NIST AI Risk Management Framework — published as NIST AI 100-1 — is a 48-page document released by the National Institute of Standards and Technology in January 2023. It is voluntary. No federal law requires you to follow it. No penalty attaches to ignoring it.
What it is, specifically, is the US government's attempt to create a shared, consensus-driven structure for managing AI risks before the regulatory landscape fragmented into dozens of incompatible state and agency frameworks. NIST built it through 18 months of public engagement with more than 240 contributing organizations — businesses, researchers, civil society groups, and government agencies. The goal was a framework any organization could use regardless of industry, size, or which AI systems they deploy.
The framework distinguishes itself from compliance checklists by being explicitly outcome-oriented. It doesn't tell you which specific controls to implement. It describes the categories of activity that constitute responsible AI risk management, and then lets organizations determine the right implementation for their context. That flexibility is a feature, not a gap.
The framework has two components: the Core and the Profiles. The Core is the structured set of functions, categories, and subcategories that describe what AI risk management involves. Profiles are how organizations apply the Core to their specific context — mapping current state against desired state, identifying gaps, and building a roadmap. For most businesses, understanding the Core is the essential first step.
The Four Functions
The AI RMF Core organizes everything into four functions. (AI RMF Core, Section 5) These are not sequential steps — they are ongoing, interdependent activities that a mature AI governance program runs in parallel. But for understanding what the framework asks, walking through each function in order is the right starting point.
Govern
Govern is the organizational foundation. It covers the policies, processes, roles, culture, and oversight structures that determine how your organization thinks about AI risk at the institutional level — before any specific AI system is in scope.
In practice, Govern asks: Does your organization have a documented AI risk policy? Is there someone accountable for AI risk at the leadership level? Do employees who work with AI systems understand your organization's expectations for responsible AI? Is AI risk integrated into enterprise risk management, or handled as a sidecar?
Govern is where most organizations have the most work to do, and where skipping ahead causes problems everywhere else. You cannot Map risks systematically if no one owns the process. You cannot Manage what you haven't Measured. Govern is the room where the rest of the work gets its authority.
Map
Map is where you identify and contextualize the risks associated with your specific AI systems. Before you can measure a risk, you have to know it exists, understand the context it operates in, and have a way to talk about it across your organization.
Map asks: What AI systems do you have? What decisions do they influence? Who is affected? What are the potential harms — to individuals, to groups, to your organization? What data does each system use, and what biases might that data carry? What is the intended use, and what are the foreseeable misuses?
The Map function is also where the framework asks organizations to consider societal and third-party risks, not just internal operational risks. An AI system that performs well by internal metrics can still harm people outside your organization. Map is where that possibility gets surfaced before deployment, not discovered after.
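The framework does not prescribe an inventory format. As a minimal sketch, assuming you track systems in code rather than a spreadsheet, here is one illustrative shape for a Map record in Python. Every field name is ours, not NIST's; the framework describes what to capture, not a schema.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One Map-function inventory entry. All fields are illustrative."""
    system_name: str
    intended_use: str
    decisions_influenced: list[str]   # e.g., ["interview selection"]
    affected_parties: list[str]       # individuals and groups, inside and outside the org
    data_sources: list[str]           # training and input data
    known_bias_risks: list[str]       # what the data might carry
    foreseeable_misuses: list[str]    # beyond the intended use
    third_party_components: list[str] = field(default_factory=list)

resume_screener = AISystemRecord(
    system_name="resume-screener-v2",
    intended_use="Rank applicants for recruiter review",
    decisions_influenced=["interview selection"],
    affected_parties=["job applicants"],
    data_sources=["historical hiring decisions, 2015-2025"],
    known_bias_risks=["historical data may encode past discriminatory patterns"],
    foreseeable_misuses=["auto-rejecting candidates with no human review"],
)
```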
Measure
Measure is where you test, evaluate, and quantify the risks you identified in Map. The framework does not prescribe specific metrics — it establishes that measurement must happen through structured, repeatable processes, and that the results must be documented and communicated.
Measure asks: How are you evaluating your AI systems for bias, accuracy, robustness, and reliability? Are your evaluation methods appropriate for the risk level of each system? Are the people doing the evaluation sufficiently independent from the people who built the system? When you find a problem in testing, how does that result get escalated and acted on?
For many organizations, the Measure function reveals that informal processes ("we tested it and it seemed fine") don't satisfy the framework's expectations for rigor. Structured evaluation, documented results, and clear escalation paths are what Measure requires.
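To make "structured and repeatable" concrete, here is a minimal sketch of one possible fairness check: comparing selection rates across groups and flagging results below the four-fifths ratio familiar from US employment practice. The metric and the 0.8 threshold are illustrative choices, not NIST requirements, and they will not fit every system.

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group name -> (selected, total). Returns the rate per group."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def disparate_impact_ratio(outcomes: dict[str, tuple[int, int]]) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Measure expects results to be recorded and escalated, not just computed.
observed = {"group_a": (48, 100), "group_b": (30, 100)}
ratio = disparate_impact_ratio(observed)
if ratio < 0.8:  # the "four-fifths" heuristic, used here only as an example threshold
    print(f"Escalate: disparate impact ratio {ratio:.2f} is below 0.80")
```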
Manage
Manage is where you act on what you've measured. It covers the ongoing work of mitigating identified risks, monitoring deployed systems, responding to incidents, and updating your risk posture as systems and contexts change.
Manage asks: What happens when an AI system produces a harmful outcome? Do you have a process to detect it, respond to it, and correct it? Do you have a plan to decommission or modify a system that is no longer performing as intended? Are you monitoring for emergent risks — things that weren't visible at deployment but develop over time? Do you track AI-related incidents and feed that information back into future AI development and deployment decisions?
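As one illustration of the record-keeping Manage implies, here is a minimal sketch of an incident record with a built-in feedback field. The statuses and field names are our assumptions, not framework terms; what the framework requires is the loop, not this particular shape.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

class IncidentStatus(Enum):
    DETECTED = "detected"
    MITIGATING = "mitigating"
    RESOLVED = "resolved"

@dataclass
class AIIncident:
    """Illustrative incident record for the Manage function."""
    system_name: str
    description: str
    detected_at: datetime
    status: IncidentStatus
    corrective_action: str | None = None
    # The feedback loop: what this incident teaches the next Map/Measure cycle.
    lessons_for_future_deployments: str | None = None

incident = AIIncident(
    system_name="resume-screener-v2",
    description="Monitoring showed score drift for one applicant group",
    detected_at=datetime.now(timezone.utc),
    status=IncidentStatus.DETECTED,
)
```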
Manage is also where the framework intersects most directly with legal obligations like Colorado's. The requirement to take measures to discover and correct violations — which is the other half of Colorado's affirmative defense standard — maps directly to what Manage describes.
Why This Matters Legally Now
The NIST AI RMF was designed as a voluntary tool. But state legislatures, writing AI laws that needed a compliance benchmark, looked at the framework and found exactly what they needed: a credible, consensus-driven, publicly available standard with a structure comprehensive enough to serve as a legal benchmark.
Colorado SB 24-205 is the clearest example. The law provides an affirmative defense at C.R.S. section 6-1-1706(3) for developers and deployers of high-risk AI systems who discover and cure violations and otherwise comply with a nationally or internationally recognized AI risk management framework. The NIST AI RMF is the most prominent framework that fits that description. If you implement the framework and something goes wrong, you have a defense you can actually prove: as with any affirmative defense, the burden of establishing it falls on you, and the documentation the framework generates is what carries that burden.
Colorado's effective date is June 30, 2026. (SB25B-004) That's the compliance deadline for deployers of high-risk AI systems in the state. If your AI systems make or substantially factor into decisions about employment, lending, insurance, housing, healthcare, education, or legal services in Colorado, that date matters to you. Our Colorado SB 24-205 compliance package includes the impact assessment and risk management documentation the law requires for deployers.
Texas HB 149 takes a similar approach and references NIST's Generative AI Profile — a companion document to the AI RMF that applies the framework's structure specifically to large language models and other generative AI systems. The pattern is consistent: legislatures are writing the NIST framework into their enforcement structure, not as a mandate, but as the standard that earns you legal protection.
What this means in practice: following the NIST AI RMF is no longer just a governance best practice. For any business operating AI systems in covered categories in Colorado, it is the most direct path to the statutory affirmative defense the legislature explicitly provided.
AI RMF vs. ISO/IEC 42001
If you've read our post on ISO 42001, you may be wondering how the two frameworks relate and which one you should focus on. The short answer is that they are complementary, not competing, and the most defensible compliance programs use both.
ISO/IEC 42001 is a management system standard — it defines the organizational infrastructure (policies, processes, documentation, internal audit, management review) that must exist to govern AI responsibly. The NIST AI RMF is a risk management framework — it defines what risk identification, evaluation, and management activity looks like across the AI system lifecycle.
ISO 42001 asks: does your organization have the governance infrastructure to manage AI risk? The NIST AI RMF asks: are you actually identifying, measuring, and managing the specific risks your AI systems pose?
Both questions need answers. An organization with mature NIST AI RMF implementation but no management system infrastructure is doing good risk work without the organizational foundation to sustain it. An organization with a robust ISO 42001 management system but shallow NIST AI RMF implementation has the structure without the substance.
NIST has published crosswalk documents that explicitly map the AI RMF to ISO 42001 and other international standards. (NIST AIRC Crosswalk Documents) Microsoft and other large organizations have published their own crosswalk analyses showing how AI RMF implementation satisfies ISO 42001 requirements in practice. The two frameworks were designed in parallel and are intentionally consistent. Organizations that implement one are building the foundation for the other.
For Colorado's affirmative defense specifically, the NIST AI RMF is the more direct reference — it is the framework Colorado's legislative history points to most explicitly. For enterprise procurement and third-party verification, ISO 42001 certification provides the external evidence that documented AI governance is real. Our AI Governance Framework package is designed to satisfy both frameworks through a single documentation structure.
What to Do With This
If your business uses AI systems in consequential decisions — employment, lending, housing, healthcare, insurance, education, legal services — here is the practical sequence.
Start with Govern. Document your organization's AI risk policy. Assign clear ownership for AI risk management at the leadership level. Establish a process for reviewing AI systems before deployment. These are Govern activities, and they are the prerequisite for everything else. Our NIST AI RMF implementation package includes the policy templates, roles and responsibilities documentation, and governance structure required to satisfy the Govern function.
Map your AI systems. Create an inventory of every AI system your organization uses in covered decision categories. For each one, document what decisions it influences, who is affected, what data it uses, and what the known risk surface is. This is the Map function, and it is also the foundation of the impact assessment that Colorado's law requires deployers to complete.
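As a rough illustration of how that inventory connects to Colorado scoping, here is a sketch that flags systems touching the covered categories this article lists. The keyword matching is deliberately naive, and the statutory definitions, not these strings, control.

```python
# Consequential-decision categories this article lists for SB 24-205,
# paraphrased as keywords. Not statutory text.
COVERED_CATEGORIES = {
    "employment", "lending", "insurance", "housing",
    "healthcare", "education", "legal services",
}

def needs_colorado_review(decisions_influenced: list[str]) -> bool:
    """Naive first-pass filter over a Map inventory entry's decisions.

    A True result means "have a human look at SB 24-205 scoping",
    nothing more. This is not a compliance determination.
    """
    return any(
        category in decision.lower()
        for decision in decisions_influenced
        for category in COVERED_CATEGORIES
    )

print(needs_colorado_review(["interview selection (employment)"]))  # True
```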
Build your Measure and Manage processes. Define how you will evaluate each system for bias, accuracy, and reliability. Define what happens when something goes wrong. Document both. These processes need to exist and be followed — not just described in a policy document — for the affirmative defense to be credible.
Understand your Colorado obligations specifically. If you deploy high-risk AI systems as defined by SB 24-205, you have documentation, notice, and assessment obligations that go beyond the NIST framework itself. The law requires impact assessments, consumer notices, annual reviews, and disclosure to the Attorney General if you discover algorithmic discrimination. Our Colorado SB 24-205 package covers those specific requirements.
The NIST AI RMF is not a compliance destination. It is the structure that makes compliance legible — to regulators, to enterprise procurement teams, and to courts. Building your program around it is how you turn good intentions about AI responsibility into documented, defensible practice.
Sources — Every factual claim about the framework and applicable laws in this article was verified against these primary sources:
- NIST AI 100-1 — Artificial Intelligence Risk Management Framework 1.0 — The framework document itself. Published January 2023.
- NIST AIRC — AI RMF Core, Section 5 — Functions, categories, and subcategories of the Core.
- NIST AIRC — Crosswalk Documents — Mappings from the AI RMF to ISO 42001 and international standards.
- Colorado General Assembly — SB24-205 — Affirmative defense provision at C.R.S. section 6-1-1706(3).
- Colorado General Assembly — SB25B-004 — Effective date extension to June 30, 2026.
Disclaimer: This article is for informational purposes only and does not constitute legal advice, legal representation, or an attorney-client relationship. Laws and regulations change frequently. You should consult a licensed attorney to verify that the information in this article is current, complete, and applicable to your specific situation before relying on it. AI Compliance Documents is not a law firm and does not practice law.