
You're HIPAA-Compliant. That's Not Enough Anymore.
Two-Sentence Summary
HIPAA governs how patient data is protected; it does not govern whether your AI system discriminates, how you document algorithmic bias risk, or whether patients have the right to appeal a decision an AI helped make. New state laws and federal agency rules are filling that gap, and healthcare organizations that stop at HIPAA are already behind.
You're HIPAA-compliant. Your Business Associate Agreements are signed. Your risk analysis is current. Your breach notification procedures are documented. That's real work, and it matters.
It's also not the full picture anymore.
HIPAA was designed to protect health information — to ensure that data about patients is kept confidential, transmitted securely, and disclosed only in permitted ways. It does that well. What it was not designed to do is regulate whether your AI system makes biased clinical recommendations, whether patients have the right to appeal a coverage decision an algorithm helped make, or whether you've documented the discrimination risk your AI tools carry. Those obligations are coming from somewhere else — from state legislatures, from CMS, from HHS, and from the FDA — and healthcare organizations that are reading their HIPAA checklist and calling it done are building a compliance gap they may not see until it becomes a problem.
What Changed
HIPAA hasn't changed. What changed is everything around it.
In the last two years, federal agencies and state legislatures have layered AI-specific requirements on top of HIPAA that address territory the original statute never contemplated. These aren't variations of HIPAA. They're different laws with different structures, different obligations, and different enforcement mechanisms. Meeting HIPAA's requirements for an AI system does not mean you meet these new ones.
Healthcare is one of the domains most explicitly targeted by new AI regulation — and for a straightforward reason. The FDA has authorized 1,430 AI-enabled medical devices as of March 2026. AI is no longer an edge case in healthcare; it's embedded in radiology, cardiology, prior authorization, clinical documentation, patient triage, and care management at scale. Regulators have noticed.
What the New State Laws Require
Colorado's SB 24-205 is the most comprehensive state AI law currently on the books, and healthcare is explicitly one of its covered domains. (SB24-205) The law applies to "high-risk artificial intelligence systems" — systems that make or substantially factor into "consequential decisions." The statute defines healthcare services decisions as consequential. That definition is broad enough to cover clinical decision support tools, patient triage algorithms, prior authorization AI, and care management systems.
If your organization deploys AI in any of those contexts in Colorado, SB 24-205 requires you to do the following, and HIPAA requires none of it. (A sketch of how a deployer might track these obligations appears after the list.)
Impact assessment. Before deploying a high-risk AI system, and annually thereafter, deployers must complete a documented impact assessment evaluating the risk of algorithmic discrimination. The assessment must examine what data the system uses, what decisions it influences, what protected classes could be affected, and what mitigation steps are in place. (SB24-205) Our Colorado SB 24-205 compliance package includes impact assessment templates built around the statute's requirements.
Consumer notification. Before or at the time a high-risk AI system makes or substantially factors into a consequential decision about a patient, you must notify that patient. (SB24-205) This is a process and documentation requirement that has to be built into the workflow, not just disclosed in a privacy notice.
Right to correct and right to appeal. Patients must have the opportunity to correct any inaccurate personal data the AI system relied on. If the AI contributes to an adverse decision, patients must have a meaningful path to appeal — with human review, if technically feasible. (SB24-205)
Algorithmic discrimination prevention. The law's central obligation is to take reasonable care to protect consumers from algorithmic discrimination — differential treatment or impact based on race, color, ethnicity, sex, religion, national origin, age, disability, and other protected characteristics. (SB24-205) Documenting that you've evaluated this risk, and what you've done about it, is a core compliance obligation.
Disclosure to the Attorney General. If you discover that a deployed AI system has caused or is reasonably likely to have caused algorithmic discrimination, you must report it to the Colorado AG within 90 days. (SB24-205)
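None of this dictates particular tooling, but a deployer does need some structured way to track these items for each high-risk system. The sketch below is one illustrative way to shape that record in Python; the field names and layout are our assumptions, not the statute's language, and the statutory text controls what an assessment must actually cover.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ImpactAssessmentRecord:
    """One deployer-side record per high-risk AI system, per assessment cycle.

    Field names are illustrative assumptions, not SB 24-205's wording; the
    statute defines what an assessment must cover, not how you store it.
    """
    system_name: str                         # e.g., "prior-authorization scoring model v3"
    assessment_date: date
    consequential_decisions: list[str]       # decisions the system makes or substantially factors into
    data_categories_used: list[str]          # what data the system relies on
    protected_classes_considered: list[str]  # race, sex, age, disability, etc.
    discrimination_risks_identified: list[str]
    mitigation_steps: list[str]
    consumer_notice_in_place: bool           # notice before or at the time of the decision
    appeal_process_documented: bool          # human review where technically feasible
    next_annual_review: date                 # assessments recur at least annually

def reassessment_due(record: ImpactAssessmentRecord, today: date) -> bool:
    """Flag records whose annual reassessment date has passed."""
    return today >= record.next_annual_review
```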
The effective date for SB 24-205 is June 30, 2026. The date was originally February 1, 2026, but was extended to June 30 during a special legislative session in August 2025 when the legislature passed SB 25B-004. Colorado is first, but nine states enacted health-related AI legislation in 2024 alone. (NCSL) This is a trend, not an outlier.
What Federal Agencies Are Doing
State legislation is one front. Federal agencies are another — and in healthcare, several of them have moved simultaneously.
CMS and Medicare Advantage
The Centers for Medicare & Medicaid Services finalized the 2024 Medicare Advantage and Part D rule (CMS-4201-F), which directly addresses the use of algorithms in coverage decisions. (CMS-4201-F) The rule requires MA plans to base coverage decisions on the circumstances of the individual enrollee. An algorithm cannot serve as a blanket basis for a coverage denial — it must be applied in a way that accounts for individual patient circumstances.
This is a concrete operational requirement. If your MA plan or utilization management program uses AI to make or inform prior authorization decisions, the process must include individual-circumstances review. The documentation that demonstrates compliance with that requirement is not the same documentation that demonstrates HIPAA compliance.
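As a sketch of what that can mean in practice, here is a minimal, hypothetical guard written in Python. The names and structure are ours, not CMS's; the point it illustrates is that an algorithmic recommendation can support or queue a decision, but a denial is never finalized without a documented review of the individual enrollee's circumstances.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CoverageCase:
    enrollee_id: str
    algorithm_recommendation: str              # e.g., "approve" or "deny"
    individual_review_completed: bool = False  # clinician reviewed this enrollee's circumstances
    individual_review_decision: Optional[str] = None

def final_decision(case: CoverageCase) -> str:
    """Let the algorithm support a decision, but never let it be the sole
    basis for a denial: adverse outcomes wait on individual review."""
    if case.algorithm_recommendation == "approve":
        return "approved"
    if not case.individual_review_completed or case.individual_review_decision is None:
        return "pending individual review"  # never deny on the algorithm alone
    return case.individual_review_decision
```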
HHS OCR and Section 1557
HHS OCR proposed rulemaking under Section 1557 of the Affordable Care Act targeting algorithmic discrimination in healthcare. (HHS Testimony, Dec. 2023) Section 1557 prohibits discrimination in federally funded health programs on the basis of race, color, national origin, sex, age, and disability. The proposed rule would explicitly address clinical algorithms used in decision-making — tools that screen patients, stratify risk, or guide care recommendations.
The scope of this rule, if finalized, would reach clinical decision support software, risk stratification tools, and AI systems that influence care recommendations across any healthcare organization receiving federal funding. That is most of the healthcare sector.
ONC and Transparency for Clinical Decision Support
The Office of the National Coordinator for Health Information Technology (ONC) finalized the Health Data, Technology, and Interoperability (HTI-1) rule, which added transparency requirements for predictive decision support interventions — the category that covers AI-powered clinical tools. Those obligations fall most directly on developers of certified health IT, but healthcare organizations that rely on these tools now need to obtain and track documentation of how the tools work and how they were developed. These requirements exist alongside, and separately from, HIPAA's security and privacy documentation requirements.
FDA and AI-Enabled Medical Devices
The FDA's Predetermined Change Control Plan (PCCP) guidance governs how AI-enabled medical devices can update their algorithms after initial authorization. If your organization uses FDA-authorized AI medical devices — which is increasingly common in radiology, pathology, and cardiology — your compliance obligations include understanding the authorized scope of those devices and how changes to them are managed. HIPAA tells you how to protect the data those devices handle. FDA guidance governs the devices themselves.
The Gap
Here is the practical difference, stated plainly.
HIPAA requires you to conduct a security risk analysis for electronic protected health information. It requires you to sign BAAs with vendors who handle that information. It requires breach notification when PHI is improperly accessed or disclosed. If you've done all of that for your AI systems, you've done what HIPAA requires. The article on AI and HIPAA on this site covers exactly what that looks like.
What HIPAA does not require:
- A bias audit or impact assessment examining whether your AI system produces discriminatory outcomes
- Consumer notification before an AI-assisted consequential decision is made
- A documented process for patients to appeal AI-assisted decisions
- A public statement summarizing what high-risk AI systems you deploy and how you manage their risks
- Annual review of whether deployed AI systems are causing algorithmic discrimination
- Reporting to a state AG when algorithmic discrimination is discovered
Those are different obligations entirely. They come from different statutes, enforced by different agencies, with different penalty structures. Satisfying one set does not satisfy the other.
The healthcare organization that has a complete HIPAA compliance program but no impact assessment, no bias audit, and no consumer notification process for AI-assisted decisions is compliant with the 1996 law. It is not compliant with what the regulatory landscape looks like in 2026.
What Documents You Need
If you use AI in healthcare in Colorado, or in other states with similar laws, the compliance infrastructure you need goes beyond your existing HIPAA documentation.
For Colorado SB 24-205 specifically: You need impact assessments for each high-risk AI system you deploy, a risk management policy and program, consumer notification procedures, and a process for handling correction requests and appeals. Our Colorado SB 24-205 compliance package includes the templates and documentation structure for deployers who need to build this program before the June 30, 2026 effective date.
For healthcare AI broadly: The combination of HIPAA obligations and the newer layer of AI-specific requirements means healthcare organizations need a compliance program that covers both data protection and algorithmic risk management. Our Healthcare AI Compliance package builds on your existing HIPAA program with the additional documentation that bias audits, impact assessments, and algorithmic transparency requirements demand.
For bias audit documentation: If you need to evaluate and document the discrimination risk of a specific AI system — whether for Colorado, for a CMS audit, or for internal governance — our AI Bias Audit Template gives you a structured framework for conducting and documenting that evaluation.
If you're not sure which of these laws apply to your organization, the compliance self-assessment is a practical starting point.
Where to Start
The first step is separating two questions that healthcare organizations often conflate: "Are we HIPAA-compliant for this AI system?" and "Are we AI-compliant for this AI system?" The first question is about data protection. The second is about algorithmic risk, bias, transparency, and patient rights. Both matter. They require different documentation and different processes.
Audit your AI inventory against the new obligation set. For each AI system you deploy that influences clinical decisions, coverage decisions, or patient access to services: Do you have an impact assessment? Do you have a consumer notification process? Do you have a documented bias evaluation? Do you have an appeals process? If any answer is no, that's a gap — not a HIPAA gap, but a gap under the laws that have been built on top of HIPAA.
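If your inventory covers more than a handful of systems, even a trivial script can keep this audit honest. The sketch below uses made-up system and artifact names; the real work, deciding which systems are in scope and what counts as a complete artifact, remains a compliance judgment rather than a coding task.

```python
# Hypothetical inventory and artifact names; substitute your own systems and
# whatever documentation you actually have on file for each of them.
REQUIRED_ARTIFACTS = [
    "impact_assessment",
    "consumer_notification_process",
    "bias_evaluation",
    "appeals_process",
]

ai_inventory = {
    "radiology-triage-model": {"impact_assessment", "bias_evaluation"},
    "prior-auth-scoring": {"consumer_notification_process"},
}

def find_gaps(inventory: dict[str, set[str]]) -> dict[str, list[str]]:
    """For each deployed system, list the required artifacts that are missing."""
    return {
        system: [a for a in REQUIRED_ARTIFACTS if a not in artifacts]
        for system, artifacts in inventory.items()
    }

if __name__ == "__main__":
    for system, missing in find_gaps(ai_inventory).items():
        if missing:
            print(f"{system}: missing {', '.join(missing)}")
```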
If you're in Colorado or a state with active AI legislation, prioritize the state law timeline. Colorado's June 30, 2026 effective date is the most immediate hard deadline. Impact assessments and risk management programs take time to build. If you haven't started, that work begins now.
Don't wait for federal rules to finalize. The CMS MA rule is already in effect. The HHS OCR Section 1557 rulemaking is active. The trajectory is clear even where final rules are still pending. Organizations that start building the documentation infrastructure now will be in a stronger position when final rules land than those waiting to see the finished product before moving.
HIPAA is the floor. It is no longer the ceiling.
Sources — Regulatory and legislative citations in this article were verified against the following primary sources:
- Colorado General Assembly — SB24-205 Consumer Protections for Artificial Intelligence — Consequential decision definition, deployer obligations (impact assessment, consumer notice, right to appeal, AG disclosure), effective date.
- Colorado General Assembly — SB25B-004 Increase Transparency for Algorithmic Systems — Extends SB 24-205 effective date to June 30, 2026.
- FDA — AI/ML-Enabled Medical Devices — 1,430 authorized devices as of March 4, 2026.
- CMS — 2024 Medicare Advantage and Part D Final Rule (CMS-4201-F) — Individual circumstances requirement for coverage decisions.
- HHS — Testimony on Artificial Intelligence, December 2023 — Section 1557 rulemaking and algorithmic discrimination in clinical decision-making.
- NCSL — Artificial Intelligence 2024 Legislation — Nine states enacted health-related AI legislation in 2024.
- FDA — Artificial Intelligence as a Medical Device (SaMD) — PCCP guidance for post-market AI algorithm changes.
Disclaimer: This article is for informational purposes only and does not constitute legal advice, legal representation, or an attorney-client relationship. Laws and regulations change frequently. You should consult a licensed attorney to verify that the information in this article is current, complete, and applicable to your specific situation before relying on it. AI Compliance Documents is not a law firm and does not practice law.