Most healthcare practice administrators assume that being HIPAA-compliant means the practice is also AI-compliant. Eighteen months ago, that assumption was mostly defensible. In 2026, it is structurally wrong — and the gap between HIPAA compliance and AI-specific governance is the single most common finding in healthcare AI Risk Sprint engagements.
This post is for healthcare practice administrators, medical group COOs, healthcare IT directors, and compliance officers who have done the HIPAA work and want to understand what additional governance the 2026 environment now expects. It assumes baseline familiarity with HIPAA — Privacy Rule, Security Rule, breach notification — and focuses on the AI-specific layer that sits on top.
The audience is small to mid-sized practices and groups (50–500 employees). Larger health systems have different operational scale and typically already have CISO-led governance programs in flight; the conversation there is different.
Where HIPAA stops
HIPAA imposes specific obligations on covered entities and business associates with respect to PHI. The Privacy Rule governs uses and disclosures. The Security Rule mandates administrative, physical, and technical safeguards. The Breach Notification Rule requires notification when unsecured PHI is acquired, accessed, used, or disclosed without authorization.
What HIPAA does well: it sets the floor for handling PHI within established categories of activity. Provider documentation, payor coordination, internal administration, business associate relationships — all governed.
What HIPAA does less well in the AI context: the statute was enacted in 1996, and its implementing rules were written for a clinical-records-and-payor-communications world. It addresses AI use only by extension: AI is "software" in the Security Rule's vocabulary, AI vendors are "business associates" if they process PHI on behalf of a covered entity, and AI-generated PHI is "PHI" subject to the same rules as any other PHI. These extensions are correct but underspecified for the operational decisions that practices face in 2026.
The specific gaps surface in five places: BAA scope, Security Rule application, breach notification edge cases, payor BAA reviews, and Joint Commission survey patterns.
Gap 1 — BAA scope for AI vendors
A 2018-era BAA between a practice and a vendor (EHR, scheduling, billing) typically defines the vendor's permitted uses of PHI in language that predates generative AI. When the vendor adds AI features in 2024 or 2025 — which most have — the BAA may not address whether those AI features are within scope.
The specific questions that surface:
- Are the vendor's AI features processing PHI through the same infrastructure that processes non-AI PHI, or routing through a separate AI subprocessor?
- If a separate subprocessor (OpenAI, Anthropic, Google, internal proprietary model), is that subprocessor a business associate? Most major vendors have moved subprocessors onto downstream BAAs, but verification is required.
- Does the vendor's AI training program use the practice's PHI? Most vendor BAAs now explicitly prohibit this; older BAAs are silent.
- Does the vendor's AI feature retain raw audio, transcripts, or generated content beyond the immediate request? Retention periods for AI-generated artifacts are often longer than the practice would assume.
The practical answer is that most major healthcare-vendor AI features are operating under appropriate downstream BAAs and explicit no-training-on-PHI commitments. The verification problem is that practices have to ask, vendor by vendor, in writing, and retain the responses. The AI-specific BAA review is a 30-minute-per-vendor exercise that has not historically been part of the practice's annual BAA refresh cycle.
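One way to make that exercise durable is to capture each vendor's answers in a structured record that lives alongside the BAA file. A minimal sketch, assuming a Python representation; every field name here is illustrative, not a regulatory requirement:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class BaaAiScopeReview:
    """Written record of the AI-scope questions for one vendor BAA.

    All field names are illustrative; adapt to the practice's own
    document-retention conventions.
    """
    vendor: str
    review_date: date
    ai_in_baa_scope: bool                          # same infra, or separate AI path?
    subprocessors: list[str] = field(default_factory=list)
    subprocessor_baas_verified: bool = False       # downstream BAAs confirmed in writing
    no_training_on_phi_on_file: bool = False       # explicit written commitment retained
    ai_artifact_retention_days: int | None = None  # raw audio / transcripts / outputs
    evidence_files: list[str] = field(default_factory=list)  # vendor responses, as received

    def open_items(self) -> list[str]:
        """Questions still unanswered for this vendor at the next refresh."""
        items = []
        if not self.ai_in_baa_scope:
            items.append("AI features not confirmed within BAA scope")
        if self.subprocessors and not self.subprocessor_baas_verified:
            items.append("AI subprocessor BAAs not verified")
        if not self.no_training_on_phi_on_file:
            items.append("no-training-on-PHI commitment not on file")
        if self.ai_artifact_retention_days is None:
            items.append("retention of AI-generated artifacts unknown")
        return items
```

The point of something like `open_items()` is that the annual refresh becomes a scan for unanswered questions rather than a from-scratch review.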
Gap 2 — Security Rule application to AI tool use
The HIPAA Security Rule's administrative, physical, and technical safeguards apply to AI tools that process PHI, but the safeguards were drafted with traditional software architectures in mind. The application to generative AI raises specific questions:
Access controls (§ 164.312(a)(1)). Standard access control applies to who can use the AI tool, but the more subtle question is whether the tool's outputs are also under access control. An AI scribe that generates draft documentation accessible to a clinician's entire team raises questions about whether the draft is appropriately access-controlled before clinician finalization.
Audit controls (§ 164.312(b)). Standard audit logging applies to AI tool access and use, but practices often don't realize that the AI vendor's audit logs may be separate from the EHR's audit logs. Comprehensive incident response requires both.
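For practices that can export both trails, the reconciliation step is mechanical. A sketch, assuming both systems can export timestamped event records as ISO-8601-stamped dicts; the field names are hypothetical, not a vendor API:

```python
from datetime import datetime

def merge_audit_trails(ehr_events: list[dict], vendor_events: list[dict]) -> list[dict]:
    """Combine EHR and AI-vendor audit exports into one chronological trail.

    Assumes each event carries an ISO-8601 'timestamp' field, and tags each
    record with its source so an incident reviewer sees both systems
    side by side.
    """
    combined = (
        [{**e, "source": "ehr"} for e in ehr_events]
        + [{**e, "source": "ai_vendor"} for e in vendor_events]
    )
    return sorted(combined, key=lambda e: datetime.fromisoformat(e["timestamp"]))
```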
Integrity controls (§ 164.312(c)(1)). AI-generated documentation has integrity considerations that traditional dictation does not: hallucination, output truncation, model-specific systematic errors. The Security Rule does not specifically address these, but practices should have an internal policy for clinician review of AI-generated documentation before chart entry.
Transmission security (§ 164.312(e)(1)). AI tool API calls are transmissions of PHI. Standard transmission security applies, but practices should verify that AI vendor API endpoints are TLS-encrypted, that API credentials are appropriately managed, and that any browser-based AI tool access happens on managed endpoints (not personal devices).
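Endpoint verification is scriptable with nothing beyond the standard library. A sketch that confirms an endpoint negotiates TLS 1.2 or better; the hostname comes from the vendor's own security documentation, nothing here is vendor-specific:

```python
import socket
import ssl

def check_tls(hostname: str, port: int = 443) -> dict:
    """Report how an AI vendor API endpoint negotiates TLS."""
    ctx = ssl.create_default_context()             # verifies certificate chain and hostname
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse anything older than TLS 1.2
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            return {
                "host": hostname,
                "tls_version": tls.version(),                 # e.g. 'TLSv1.3'
                "cipher": tls.cipher()[0],
                "cert_expires": tls.getpeercert()["notAfter"],
            }
```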
OCR has not yet issued AI-specific Security Rule guidance, but the existing standards apply by extension. The 2026 expectation is that practices document how they have addressed each safeguard category in the AI context.
Gap 3 — Breach notification edge cases
The Breach Notification Rule defines a "breach" as the unauthorized acquisition, access, use, or disclosure of unsecured PHI. AI introduces several edge cases:
Employee paste of PHI into consumer AI. A clinician pastes patient information into a ChatGPT, Claude, or Gemini consumer account to draft a letter or query a clinical question. The PHI has been disclosed to the AI vendor without authorization. Under the standard interpretation this is a breach, but the practice may not discover it for weeks or months without telemetry. The 60-day notification clock starts on discovery, not on occurrence, and the rule treats a breach as discovered on the first day the practice knew of it or would have known by exercising reasonable diligence.
AI vendor breaches. If an AI subprocessor experiences a security incident affecting PHI, the breach notification flows up to the practice as the covered entity. The practice's notification obligations apply on the same 60-day clock from discovery. The 2024–25 OpenAI security incidents (token leaks, conversation history exposures) raised this question; most major AI vendors now have explicit breach notification provisions in their BAAs.
Inadvertent training data exposure. Less common but increasingly considered: if a vendor's AI model is fine-tuned on the practice's data and subsequently reveals patient-identifying information in another customer's output, that is a breach. This is the scenario that the no-training-on-PHI vendor commitments are designed to prevent — but verification of compliance is a practice's responsibility.
The practical implication: practices need to extend their existing breach assessment process to include AI-specific scenarios. The HIPAA Security Officer should review the AI tool inventory regularly and have a documented procedure for AI-related incident response.
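The discovery-versus-occurrence distinction is worth making concrete, because it is the whole reason telemetry matters. A minimal sketch of the deadline arithmetic; the dates are invented for illustration:

```python
from datetime import date, timedelta

# The Breach Notification Rule requires notification without unreasonable
# delay, and in no case later than 60 calendar days after discovery.
NOTIFICATION_WINDOW_DAYS = 60

def notification_deadline(discovered: date) -> date:
    """Latest permissible notification date for a confirmed breach.

    The clock runs from discovery, not occurrence, which is why the
    inadvertent-paste scenario is dangerous without telemetry: the
    occurrence can predate discovery by months.
    """
    return discovered + timedelta(days=NOTIFICATION_WINDOW_DAYS)

# Illustration only: a paste in March that surfaces in a June audit.
occurred = date(2026, 3, 4)     # does not start the clock
discovered = date(2026, 6, 15)  # starts the clock
print(notification_deadline(discovered))  # 2026-08-14
```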
Gap 4 — Payor BAA review questions
Payors are starting to include AI-specific questions in BAA review cycles, particularly Medicare Advantage plans and large commercial payors. Common patterns:
- "List all AI tools used in administrative or clinical workflows that process member information."
- "Describe your AI tool governance program, including policy, training, and vendor management."
- "Provide evidence of AI-related incident response procedures and any incidents in the past 24 months."
- "Confirm that all AI vendors processing member PHI are operating under executed BAAs."
Practices that cannot answer these questions in writing experience delayed BAA renewals, additional questionnaire iterations, and in some cases a pause on new member capitation arrangements pending documentation. The dollar impact is rarely large in isolation, but the operational disruption can be material, particularly for practices that depend on a single major payor for a substantial portion of their patient volume.
Payor reviews are predictable on an annual or biennial cycle. The right time to prepare the AI governance documentation is in the quarter before the payor review window opens, not in the 30 days before the deadline.
Gap 5 — Joint Commission survey patterns
For practices subject to Joint Commission accreditation, AI is increasingly appearing in survey scope. The Information Management chapter (IM) and Performance Improvement chapter (PI) standards both have AI touchpoints in their 2025–26 application.
IM.02.01.01 — managing information needs. Survey teams have begun asking about AI tool inventories and whether AI tools are managed under the practice's information governance program.
IM.02.02.01 — protecting privacy of health information. AI-related privacy controls are an explicit survey topic, particularly around employee training, AUP coverage, and breach response.
PI.03.01.01 — improving performance. AI-driven clinical decision support tools, when used, are increasingly part of the practice's performance improvement scope. Survey teams ask about how the AI's recommendations are reviewed, how disagreements between AI output and clinician judgment are documented, and how the AI tool's performance is monitored over time.
Joint Commission surveys are episodic — every three years for most practices — but the documentation expectation is that AI governance has been in place throughout the survey period. Building it the month before the survey is too late to demonstrate program maturity.
What healthcare-specific AI governance looks like
The five gaps above add up to a specific governance program shape for healthcare practices:
- AI tool inventory (a sketch of one possible record shape follows this list) that distinguishes PHI-processing tools from administrative tools and tracks each tool's BAA / DPA status and AI subprocessor chain
- AI Acceptable Use Policy with healthcare-specific provisions (clinical documentation AI, scribe tools, patient-facing AI, billing AI, scheduling AI, marketing AI) and explicit prohibition on consumer AI for any PHI context
- BAA refresh program that explicitly reviews AI scope for each vendor relationship on an annual cadence
- Security Officer-led AI incident response procedure that addresses both the inadvertent-paste scenario and vendor breach scenarios with documented playbooks
- Training program with role-segmented modules: clinical staff, administrative staff, billing staff, support staff — and a separate module for the Security Officer / Compliance Officer on AI-specific incident assessment
- Annual governance review documented for board and Joint Commission survey purposes
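To make the first item concrete, here is one possible shape for an inventory record; the fields and example entries are illustrative, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class AiToolRecord:
    """One row in the practice's AI tool inventory. Fields are illustrative."""
    name: str
    owner: str                    # accountable role, e.g. "Security Officer"
    processes_phi: bool
    baa_status: str               # "executed" | "pending" | "not required"
    subprocessors: list[str] = field(default_factory=list)
    last_reviewed: str = ""       # ISO date of the last annual review

# Hypothetical entries for illustration.
inventory = [
    AiToolRecord("EHR ambient scribe", "Security Officer", True,
                 "executed", ["speech-to-text subprocessor"], "2026-01-15"),
    AiToolRecord("Marketing copy assistant", "Practice Manager", False,
                 "not required", [], "2026-01-15"),
]

# Anything PHI-touching without an executed BAA goes to the top of the next review.
gaps = [t.name for t in inventory if t.processes_phi and t.baa_status != "executed"]
```

Keeping PHI exposure, BAA status, and the subprocessor chain together on one record is what lets the Security Officer answer the payor questions in Gap 4 from a single artifact.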
This is more than the practice's existing HIPAA program covers, but it is not a fundamentally different program. It is a specific layer that sits on top of HIPAA — one that uses the same Security Officer, the same training infrastructure, the same incident response process, and the same documentation cadence. The operational burden is roughly 20–40% above baseline HIPAA work, concentrated in the first 90 days of program standup.
What to do next
For healthcare practices that have not yet built the AI governance layer above HIPAA, the next 90 days are the right window. Payor BAA cycles, Joint Commission survey timing, and OCR enforcement priorities are all moving in the same direction.
The free AI Risk Assessment includes a healthcare-specific scoring pass — the routing logic recognizes regulated-industry signals and surfaces tailored recommendations. For practices that need the full documentation set, the AI Risk Sprint produces an inventory, AUP, BAA gap analysis, training plan, and roadmap with healthcare-specific framing built in. Two weeks, $5,500.
HIPAA compliance was hard-won over the past 25 years. AI-specific governance is the 2026 layer that sits on top. Building it now — before the next payor review, the next survey, or the next OCR conversation — is materially cheaper than building it after.