AI and HIPAA: What Healthcare Businesses Must Do Now
Tags: HIPAA · healthcare · PHI · Security Rule · Privacy Rule

AI Compliance Documents Team · 22 min read

Two-Sentence Summary

HIPAA is the federal law that protects patient health information, and it applies fully to AI systems used in healthcare — from diagnostic tools to clinical documentation assistants. This article explains what HIPAA's Privacy Rule, Security Rule, and Breach Notification Rule require when healthcare organizations deploy AI, what documentation is mandatory, and what the penalties look like when organizations get it wrong.

The question isn't whether HIPAA applies to AI in healthcare. It does. The question is whether your business has done the work to implement HIPAA's requirements for the AI systems you've added to your operations — and whether the agreements, policies, and safeguards you built years ago for traditional software have been updated to cover the new tools.

HIPAA was enacted in 1996. Its Privacy Rule, Security Rule, and Breach Notification Rule were developed before machine learning existed as a commercial technology. But those rules are technology-neutral by design. They define what protections must exist for protected health information (PHI), not how those protections must be implemented. When an AI system processes PHI, the same rules apply — with the same penalties for non-compliance.

This article explains what each major HIPAA rule requires in the context of AI deployments, what specific documentation is mandatory, where the most common compliance gaps are when healthcare organizations add AI tools, and what the penalties look like. Every citation is to the actual regulatory text.

What HIPAA Covers and Why It Applies to AI

HIPAA applies to covered entities — healthcare providers, health plans, and healthcare clearinghouses — and to their business associates. (45 C.F.R. § 160.103)

A covered entity includes essentially every hospital, clinic, physician practice, pharmacy, health plan, and insurer operating in the US healthcare system. If you provide healthcare services or administer health insurance and you transmit health information electronically in connection with a standard transaction, you're almost certainly a covered entity. (45 C.F.R. § 160.102)

A business associate is any person or entity that creates, receives, maintains, or transmits PHI on behalf of a covered entity in the course of performing services for that covered entity. This is the definition that pulls AI vendors directly into HIPAA's scope. (45 C.F.R. § 160.103)

If you are a healthcare provider deploying an AI tool to help with clinical documentation, patient triage, prior authorization review, diagnostic imaging analysis, or any other function that involves patient records — and if that tool is operated by a third-party vendor — that vendor is your business associate. HIPAA requires a signed Business Associate Agreement (BAA) before any PHI flows to them.

If you are an AI company providing tools to healthcare organizations and those tools process PHI, you are a business associate. You must sign BAAs, and you must maintain your own HIPAA compliance program. HIPAA liability runs to business associates directly — not just to the covered entity that hired you.

Protected Health Information (PHI) is individually identifiable health information that is created, received, maintained, or transmitted by a covered entity or business associate. (45 C.F.R. § 160.103) The 18 identifiers that make information individually identifiable are listed in the Privacy Rule's de-identification standard at 45 C.F.R. § 164.514.

Any AI system that ingests, processes, outputs, or stores data containing any of those 18 identifiers in a healthcare context is processing PHI and is subject to HIPAA.

The HIPAA Privacy Rule and AI

The Privacy Rule, codified at 45 C.F.R. Part 164, Subpart E, establishes national standards for the use and disclosure of PHI. The core principle is that PHI may only be used or disclosed in ways explicitly permitted or required by the rule.

For AI systems, the Privacy Rule creates several important obligations.

Minimum Necessary Standard

When using or disclosing PHI, covered entities must make reasonable efforts to limit PHI to the minimum necessary to accomplish the intended purpose. (45 C.F.R. § 164.502(b))

Applied to AI: if you are training, fine-tuning, or deploying an AI model on patient data, the minimum necessary standard requires you to evaluate whether the AI actually needs the full scope of PHI you're feeding it. An AI model that helps predict patient readmission risk may need certain clinical data fields — but does it need patients' names, complete addresses, and full social security numbers? Probably not. If you are providing a de-identified dataset that retains only the data elements the model actually uses, and that dataset has been properly de-identified under the Privacy Rule's standards, HIPAA's restrictions on use and disclosure don't apply.

De-Identification

De-identified health information is not PHI and is not subject to HIPAA's use and disclosure restrictions. The Privacy Rule provides two methods for achieving de-identification: (45 C.F.R. § 164.514(a)–(b))

Safe Harbor method: Remove all 18 specified identifiers, and the covered entity has no actual knowledge that the remaining information could identify an individual.

Expert Determination method: A qualified statistician applies generally accepted principles and certifies that the risk of identification is very small.

For AI training and development purposes, de-identification is one of the most important tools available. If you can develop, train, or test an AI model on properly de-identified data, you avoid many of the consent, authorization, and use limitation issues that arise with PHI. The catch: de-identification done improperly doesn't create de-identified data — it creates a HIPAA violation. Re-identification risks are real, and healthcare AI companies that claim to use "de-identified" data without rigorous methodology are exposed.
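To make the rigor point concrete, here is a deliberately minimal pattern-based scrubber for a few of the Safe Harbor identifier categories. This is a sketch of the idea only: real Safe Harbor de-identification must remove all 18 identifier categories (and regex matching alone routinely misses identifiers embedded in free text), so a tool like this would be one layer in a validated pipeline, not a compliance method by itself.

```python
# Illustrative sketch: regex scrubbing of a few Safe Harbor identifier
# types (SSNs, phone numbers, emails, dates). NOT sufficient on its own
# for Safe Harbor de-identification under 45 C.F.R. § 164.514(b).
import re

PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"), "[DATE]"),
]

def scrub(text: str) -> str:
    """Replace matched identifier patterns with placeholder tokens."""
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

note = "Pt seen 3/14/2024, SSN 123-45-6789, call 555-867-5309."
print(scrub(note))  # Pt seen [DATE], SSN [SSN], call [PHONE].
```

Note the ordering: the SSN pattern runs before the phone pattern so that a nine-digit SSN is not mislabeled. Gaps like names, addresses, and medical record numbers are exactly why expert review matters.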

Authorizations and Treatment/Payment/Operations Exceptions

PHI may be used without individual authorization for treatment, payment, and healthcare operations. (45 C.F.R. § 164.506) This is the exception that enables clinical AI tools to function — a physician can use an AI diagnostic tool on a patient's data because that's treatment. A health plan can use AI for claims review because that's payment operations.

But the exception has limits. Using PHI to train AI systems that will be deployed for other entities, sold as products, or used for purposes beyond the specific treatment or operations context from which the data was drawn is generally not covered by the treatment/payment/operations exception. That typically requires either individual authorization or a research authorization with appropriate safeguards. (45 C.F.R. § 164.508; 45 C.F.R. § 164.512(i))

The FTC and HHS have both taken enforcement positions on this in recent years. Healthcare organizations using patient data to train AI products are in a different legal position than healthcare organizations using AI tools for direct patient care.

Notice of Privacy Practices

Covered entities must provide individuals with a Notice of Privacy Practices that describes how PHI is used and disclosed. (45 C.F.R. § 164.520) If your organization uses AI in ways that affect how PHI is used — and if those uses are not already adequately described in your current Notice — your Notice may need to be updated. This is a frequently overlooked step when new AI tools are deployed.

The HIPAA Security Rule and AI

The Security Rule, codified at 45 C.F.R. Part 164, Subpart C, requires covered entities and business associates to implement administrative, physical, and technical safeguards to protect the confidentiality, integrity, and availability of electronic PHI (ePHI). (45 C.F.R. § 164.306)

AI systems that process ePHI must be covered by Security Rule safeguards. Here's what that means in practice.

Administrative Safeguards

Administrative safeguards are the policies and procedures that manage how ePHI is handled. (45 C.F.R. § 164.308)

Risk analysis and risk management. Every covered entity and business associate must conduct an accurate and thorough assessment of the potential risks and vulnerabilities to ePHI, and implement security measures sufficient to reduce those risks to a reasonable and appropriate level. This is the foundational Security Rule requirement, and it's the one most often cited in HHS enforcement actions.

For AI systems, a Security Rule-compliant risk analysis must cover the new attack surfaces and vulnerabilities that AI introduces. These include: prompt injection attacks (adversarial inputs designed to make an AI system disclose, modify, or misuse PHI); model inversion attacks (extracting training data from a model's outputs); supply chain risks (vulnerabilities in the foundation model, the API, or third-party components); and AI-specific failure modes like hallucination that can affect the accuracy of PHI-dependent outputs.

Workforce training and management. Staff who interact with AI systems processing PHI must receive training on HIPAA requirements as they apply to those systems. If your clinical staff is now using an AI documentation tool, a diagnostic AI, or an AI patient communication platform, your workforce training needs to cover how to use those tools consistently with HIPAA.

Contingency planning. Covered entities must have contingency plans for ePHI when AI systems fail or become unavailable. If a physician practice is relying on an AI tool for clinical documentation and the tool goes down, what's the manual backup? The plan must exist in writing. (45 C.F.R. § 164.308(a)(7))

Technical Safeguards

Technical safeguards are the technology-based controls that protect ePHI. (45 C.F.R. § 164.312)

Access control. Systems containing ePHI must implement technical policies and procedures that allow only authorized persons or software programs to access ePHI. For AI systems, this means access controls on the AI platform itself, on the data pipelines feeding the AI, and on the outputs the AI produces.

Audit controls. Hardware, software, and procedural mechanisms must record and examine activity in information systems that contain ePHI. AI systems must generate audit logs that allow forensic examination of what data was accessed, by what process, at what time.
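The Security Rule requires audit mechanisms but does not prescribe a log format, so the structure below is an assumption, one plausible shape for an AI access-log entry that captures the forensic questions above: what data, by what process, at what time, and for what purpose.

```python
# Hypothetical audit-log entry for an AI system's access to ePHI.
# Field names are illustrative; 45 C.F.R. § 164.312(b) requires audit
# controls but leaves the log schema to the implementer.
import json
from datetime import datetime, timezone

def log_ai_phi_access(system_id, actor_id, patient_id, action, purpose):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system_id,      # which AI tool touched the data
        "actor": actor_id,        # clinician or service account
        "patient": patient_id,    # which record was accessed
        "action": action,         # e.g. read / summarize / generate
        "purpose": purpose,       # treatment, payment, or operations
    }
    # In production this would be written to an append-only,
    # tamper-evident store, not returned to the caller.
    return json.dumps(entry)

print(log_ai_phi_access("scribe-ai-v2", "dr_smith", "PT-1042",
                        "summarize", "treatment"))
```

Logging the purpose alongside the action is what lets a later reviewer check the access against the treatment/payment/operations framework discussed earlier.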

Integrity controls. Measures must be implemented to ensure ePHI is not improperly altered or destroyed. For AI outputs that feed into clinical workflows — diagnostic suggestions, medication recommendations, documentation — there must be controls to ensure the data in the record reflects accurate clinical information and that AI outputs haven't corrupted the record.

Transmission security. ePHI must be protected against unauthorized access when transmitted. This applies to ePHI transmitted to and from AI systems — API calls, model inputs, model outputs — which must be encrypted in transit.

Physical Safeguards

Physical safeguards govern physical access to the facilities and devices containing ePHI. (45 C.F.R. § 164.310) For AI systems hosted on-premises, physical safeguards apply directly to the hardware. For cloud-hosted AI systems, the physical safeguard obligations pass to the vendor through the Business Associate Agreement, but the covered entity must verify through the BAA that appropriate controls exist.

Business Associate Agreements for AI Vendors

A Business Associate Agreement is a contract required by HIPAA between a covered entity and any vendor that creates, receives, maintains, or transmits PHI on the covered entity's behalf. (45 C.F.R. § 164.504(e))

If you are a healthcare organization deploying an AI tool that processes patient data, and that tool is operated by a third-party vendor, you need a BAA with that vendor before any PHI flows to their systems. If PHI has already flowed to a vendor without a BAA in place, that is an ongoing HIPAA violation — not a historical one.

The BAA must include specific terms required by the regulation: what the business associate may and may not do with the PHI, what safeguards the business associate will implement, how the business associate will handle breaches, and other provisions. HHS provides model BAA language as guidance. (HHS — Business Associate Contracts)

Several specific AI vendor BAA issues deserve attention:

Model training on customer data. Some AI vendors use customer data to train or improve their models. If that data includes PHI, and if the vendor's use of that data for training is not covered by the purposes specified in the BAA, that use may be a HIPAA violation by the vendor — and potentially a breach that the vendor must report to you. BAAs with AI vendors should explicitly address whether the vendor may use PHI for model training and under what conditions.

Subcontractors. If an AI vendor uses subcontractors — cloud providers, data processing services, model fine-tuning partners — those subcontractors who touch PHI are also business associates and must sign BAAs with the covered entity or the upstream business associate. HIPAA's business associate framework flows through subcontracting chains.

Data retention and deletion. What happens to PHI when you stop using an AI vendor's service? The BAA must address return or destruction of PHI at the end of the business relationship. For AI vendors, this is complex — PHI used in training runs may persist in model weights in ways that can't be easily "deleted."

The Breach Notification Rule and AI

The Breach Notification Rule, codified at 45 C.F.R. Part 164, Subpart D, requires covered entities to notify affected individuals, HHS, and in some cases the media when a breach of unsecured PHI occurs. (45 C.F.R. § 164.400)

A breach is the acquisition, access, use, or disclosure of PHI in a manner not permitted by the Privacy Rule that compromises the security or privacy of the PHI. There is a presumption that an impermissible use or disclosure is a breach unless the covered entity can demonstrate a low probability that the PHI was compromised based on a risk assessment.

Timing requirements:

  • Notice to individuals must be provided within 60 days of discovery of the breach. (45 C.F.R. § 164.404)
  • Notice to HHS for small breaches (fewer than 500 individuals) must be submitted annually, within 60 days of the end of the calendar year. (45 C.F.R. § 164.408)
  • Notice to HHS for large breaches (500 or more individuals) must be submitted within 60 days of discovery. (45 C.F.R. § 164.408)
  • Notice to prominent media outlets is required for breaches affecting 500 or more individuals in a state or jurisdiction. (45 C.F.R. § 164.406)
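The timing rules above can be sketched as a small deadline calculator. The 60-day figures are outer limits, and the rule requires notice "without unreasonable delay," so these dates are the latest permissible, not targets; treat this as an illustration of the branching logic, not a compliance tool.

```python
# Illustrative calculator for the Breach Notification Rule deadlines
# described above (45 C.F.R. §§ 164.404, 164.406, 164.408).
from datetime import date, timedelta

def notification_deadlines(discovery: date, affected: int) -> dict:
    """Return the latest permissible notification dates for a breach
    discovered on `discovery` affecting `affected` individuals."""
    deadlines = {"individuals": discovery + timedelta(days=60)}
    if affected >= 500:
        # Large breach: HHS and media within 60 days of discovery.
        deadlines["hhs"] = discovery + timedelta(days=60)
        deadlines["media"] = discovery + timedelta(days=60)
    else:
        # Small breach: annual HHS log, within 60 days of year end.
        deadlines["hhs"] = date(discovery.year, 12, 31) + timedelta(days=60)
    return deadlines

d = notification_deadlines(date(2025, 3, 1), affected=1200)
print(d["individuals"])  # 2025-04-30
```

Note that the media-notice branch only fires at 500 or more individuals in a state or jurisdiction; the sketch simplifies by using the total affected count.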

AI-specific breach scenarios: AI systems create breach risks that traditional software doesn't. A prompt injection attack that causes an AI to output another patient's clinical data is a breach. An AI system that generates clinical summaries and incorrectly combines data from two patients' records before sharing the result with a physician is a potential breach. An AI vendor that exposes training data containing PHI through a model inversion attack is a breach at the vendor level: the vendor must notify you, and you must then notify affected individuals.

Your incident response procedures must address these AI-specific scenarios. A breach response plan built in 2019 for traditional EHR and database breaches may not map cleanly onto AI failure modes.

What Documentation HIPAA Requires

HIPAA is a documentation-heavy regulation. The Security Rule explicitly requires written policies and procedures, and it requires covered entities to maintain documentation for a minimum of six years from the date of creation or the date it was last in effect, whichever is later. (45 C.F.R. § 164.316)

For AI systems processing PHI, the documentation that must exist includes:

Risk analysis documentation. Written records of the risk analysis, including identified risks and vulnerabilities, probability and criticality assessments, and the security measures implemented to address them. The risk analysis must be updated when there are significant changes to the environment — and adding a new AI system qualifies as a significant change.

Risk management plan. Documentation of the security measures implemented in response to the risk analysis, and the procedures for regularly reviewing those measures.

Workforce training records. Documentation that required HIPAA training has been provided to all workforce members who have access to ePHI, including training on AI system use.

BAAs. Signed BAAs with all business associates, including AI vendors. These must be maintained for six years.

Policies and procedures. Written policies and procedures covering all Security Rule safeguards, Privacy Rule requirements, and Breach Notification Rule obligations. When those policies are revised or retired, the old versions must be retained for six years.

Incident log. Documentation of security incidents — including those involving AI systems — and the response taken.

Sanction policy. Documentation of consequences for workforce members who violate HIPAA policies.

HIPAA Penalties

HIPAA penalties are administered by the HHS Office for Civil Rights (OCR). The penalty structure has four tiers based on culpability. (45 C.F.R. § 160.404; HHS Civil Money Penalties)

| Tier | Description | Penalty Per Violation | Annual Maximum |
|------|-------------|-----------------------|----------------|
| Tier 1 | Did not know and with reasonable diligence would not have known | $100–$50,000 | $25,000 |
| Tier 2 | Reasonable cause, not willful neglect | $1,000–$50,000 | $100,000 |
| Tier 3 | Willful neglect, corrected within 30 days of discovery | $10,000–$50,000 | $250,000 |
| Tier 4 | Willful neglect, not corrected within 30 days | $50,000 | $1,500,000 |

These are per-violation amounts. The regulation defines a "violation" as a failure to comply with a provision of HIPAA. When a single act violates multiple provisions — say, failing to have a BAA, failing to conduct a risk analysis, and failing to implement encryption all at once — each violation can produce a separate penalty. The annual caps shown in the table apply per violation category per year.
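The per-category exposure math can be sketched directly from the table. This is an illustration of how the caps interact with violation counts, not a penalty predictor; OCR sets actual amounts within the ranges, and the figures are adjusted periodically for inflation.

```python
# Illustrative exposure calculation using the tier caps from the
# table above. Real penalties are set by OCR within these ranges.
TIERS = {
    1: {"min": 100,    "max": 50_000, "annual_cap": 25_000},
    2: {"min": 1_000,  "max": 50_000, "annual_cap": 100_000},
    3: {"min": 10_000, "max": 50_000, "annual_cap": 250_000},
    4: {"min": 50_000, "max": 50_000, "annual_cap": 1_500_000},
}

def max_annual_exposure(tier: int, violations: int) -> int:
    """Per-violation maximum times the count, capped at the tier's
    annual limit for that violation category."""
    t = TIERS[tier]
    return min(t["max"] * violations, t["annual_cap"])

# Five Tier 2 violations: 5 x $50,000 = $250,000, capped at $100,000.
print(max_annual_exposure(2, 5))  # 100000
```

Because the cap applies per violation category, an incident that implicates several distinct provisions (no BAA, no risk analysis, no encryption) can stack multiple capped amounts in a single year.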

In addition to civil money penalties, HIPAA allows for criminal prosecution through the Department of Justice for knowing violations. Criminal penalties range from $50,000 to $250,000 in fines and one to ten years in prison depending on culpability. (42 U.S.C. § 1320d-6)

Recent enforcement trends: OCR has actively investigated breaches resulting from inadequate risk analysis, failure to maintain BAAs, and failure to implement encryption. Settlements in major enforcement actions have ranged from hundreds of thousands to tens of millions of dollars. AI is a new enough domain that AI-specific HIPAA enforcement actions are still relatively rare — but OCR has made clear that HIPAA applies fully to AI, and the enforcement infrastructure is in place.

Where to Start

If your organization uses AI systems that process patient data, and you haven't recently reviewed your HIPAA compliance posture specifically in the context of those AI systems, these are the steps that matter most.

Step 1: Inventory your AI tools. List every AI system in your organization's technology stack that could interact with patient information. Include tools used by clinical staff, administrative staff, and billing. Ask vendors whether their products process or have access to PHI.

Step 2: Confirm BAAs are in place. For every AI vendor that handles PHI on your behalf, verify that a signed BAA exists, that it covers the current scope of the vendor's access, and that it includes appropriate provisions on model training, subcontractors, and data deletion.

Step 3: Update your risk analysis. Your Security Rule risk analysis must reflect current conditions. If you've added AI tools since your last analysis, the analysis is outdated. Update it to include AI-specific risks: prompt injection, model inversion, AI supply chain risks, and AI hallucination in clinical contexts.

Step 4: Review your policies and procedures. Ensure written policies cover AI use cases — who can use AI tools, what data they can input, how outputs should be reviewed, and what to do when an AI system behaves unexpectedly.

Step 5: Check your workforce training. If staff are using AI tools that access PHI, they need training on how to use them in HIPAA-compliant ways. That training needs to be documented.

Step 6: Verify your breach response plan covers AI scenarios. Walk through a hypothetical where an AI vendor experiences a breach. Does your incident response plan tell you what to do? Does it specify who's responsible, how you assess scope, and how you meet the 60-day notification requirement?

HIPAA compliance for AI is not a separate compliance program from your existing HIPAA program. It's an extension of it. The same rules apply. The documentation requirements are the same. The safeguard categories are the same. What's new is the obligation to evaluate whether your existing program covers AI-specific risks — and the honest answer, for most organizations, is that it needs updating.

Our Healthcare AI Compliance package includes the core documentation templates — risk analysis, BAA provisions for AI vendors, workforce training records, and incident response procedures — built specifically for healthcare organizations deploying AI systems.


Sources — Every regulatory citation in this article is drawn from enacted regulations in the Code of Federal Regulations (eCFR) and official HHS guidance.

Disclaimer: This article is for general informational purposes only and does not constitute legal advice. HIPAA compliance requirements depend on your specific business type, the nature of your AI systems, and the PHI involved. Consult qualified healthcare compliance counsel and your HIPAA privacy and security officers for guidance specific to your organization.

What Is a Business Associate Agreement, and Why AI Vendors Need One
Imagine you run a restaurant. You handle your customers' credit card numbers every day. One day, you hire a company to install a new point-of-sale system. To set it up, that company needs access to your transaction records — which include your customers' card numbers. You didn't hand those card numbers to a stranger on the street; you gave them to a vendor performing a specific service. But if that vendor gets hacked and your customers' card numbers leak, you're both on the hook — because you chose that vendor and let them handle sensitive information without making sure they'd protect it.

In HIPAA terms, a Business Associate Agreement (BAA) is the contract that formalizes this relationship. When a healthcare provider gives a vendor access to patient information — Protected Health Information, or PHI — the law requires a signed contract that spells out exactly what the vendor can and cannot do with that data, what security measures they'll maintain, and what happens if there's a breach. Without a signed BAA in place before any PHI flows, the healthcare provider is violating HIPAA — and it's an ongoing violation, not a one-time mistake.

AI vendors make this more complicated than traditional software vendors. A standard electronic health records company stores and displays patient data, and the BAA covers that storage and display. But an AI vendor might use patient data to train their model, improve their algorithms, or feed it into a system that serves other customers. If the BAA doesn't explicitly address whether the vendor can use PHI for model training — and under what conditions — that training could itself be a HIPAA violation by the vendor and a reportable breach.

This is why healthcare organizations deploying AI tools need to treat BAAs as living documents, not boilerplate contracts.
The traditional BAA template that worked for your EHR vendor five years ago almost certainly doesn't cover AI-specific scenarios like model training on patient data, re-identification risks from model outputs, or the question of what happens to PHI embedded in model weights when you terminate the vendor relationship. Updating your BAAs for AI is not optional — it's a core HIPAA requirement.

