
AI for Doctors — HIPAA-Safe Tools and What to Avoid in 2026


Physicians in the United States spend an average of 15.6 hours per week on administrative tasks, including documentation, prior authorizations, and inbox management, according to a 2023 study published in Health Affairs. That is nearly two full working days every week not spent with patients. Artificial intelligence tools are now capable of handling a significant portion of that burden — but using the wrong tool with patient data can result in a HIPAA violation that carries civil and criminal penalties.

This guide explains which AI tools are safe to use with protected health information (PHI), which are not, and how to get real clinical value from AI without putting your practice or your patients at risk.


The AI Opportunity in Medicine

The current generation of AI tools can meaningfully assist physicians in four areas:

Ambient documentation. Tools like Nuance DAX Copilot, Suki AI, and Nabla Copilot listen to a patient encounter in real time and generate a structured clinical note automatically. Physicians who use these tools report saving 2 to 3 hours per day on documentation. Nuance DAX Copilot, for example, integrates directly into Epic and Cerner and produces a note draft within seconds of the encounter ending.

Clinical decision support. AI-powered platforms such as Isabel DDx, Diagnostic Robotics, and features embedded in Epic and Cerner can surface differential diagnoses, flag drug interactions, and highlight evidence-based protocols based on the patient's chart data.

Radiology and pathology AI. FDA-cleared tools from companies like Viz.ai (for stroke detection), Aidoc, and Paige.ai (for pathology) analyze imaging and flag findings for radiologist review. These tools do not replace the reading physician — they act as a second set of eyes that works at machine speed.

Prior authorization automation. Platforms like Cohere Health, Olive (now part of Waystar), and features within Epic use AI to predict authorization outcomes, pre-populate forms, and handle routine approvals without physician involvement. A 2022 AMA survey found that 93% of physicians reported that prior auth delays patient care; automation directly addresses this problem.


HIPAA and AI: What Makes a Tool Compliant

HIPAA's Privacy and Security Rules apply to any technology that stores, processes, or transmits PHI. Before using any AI tool with real patient data, you need three things in place:

1. A signed Business Associate Agreement (BAA). If an AI vendor will have access to PHI — even briefly, such as audio recorded during an encounter — they must sign a BAA. This is a legal contract under HIPAA that makes them responsible for protecting the data. If a vendor refuses to sign a BAA, you cannot legally use their product with PHI. Period.

2. Data encryption in transit and at rest. Any HIPAA-compliant AI tool must use AES-256 encryption for stored data and TLS 1.2 or higher for data in transit. Ask vendors for their SOC 2 Type II report if you want documented verification.

3. No training on your patient data. Some AI vendors use customer data to improve their models. Using PHI this way is not a permitted use under HIPAA absent patient authorization. A compliant vendor will state in writing — in the BAA or a data processing addendum — that your patient data is not used for model training.
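Before any pilot, those three requirements can serve as a simple go/no-go gate. A minimal sketch, with illustrative field names that are not part of any standard:

```python
from dataclasses import dataclass

@dataclass
class VendorAssessment:
    """Illustrative checklist for the three HIPAA prerequisites above."""
    baa_signed: bool                      # 1. Signed Business Associate Agreement
    encrypted_rest_and_transit: bool      # 2. AES-256 at rest, TLS 1.2+ in transit
    no_training_on_phi: bool              # 3. Written commitment: no model training on PHI

    def cleared_for_phi(self) -> bool:
        # Any single "no" is a hard stop for use with PHI.
        return all((self.baa_signed,
                    self.encrypted_rest_and_transit,
                    self.no_training_on_phi))

# Example: a vendor with strong encryption but no BAA is still not cleared.
vendor = VendorAssessment(baa_signed=False,
                          encrypted_rest_and_transit=True,
                          no_training_on_phi=True)
print(vendor.cleared_for_phi())  # False
```

The point of the all-or-nothing check is that the three requirements are conjunctive: missing any one of them rules the tool out for PHI.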


Tools with BAAs Available

These tools have formal HIPAA compliance programs and will sign BAAs for qualified healthcare customers:

  • Nuance DAX Copilot (Microsoft): Ambient clinical documentation, embedded in Epic/Cerner. BAA available. Enterprise pricing, typically $500–$700 per provider per month.
  • Microsoft Azure OpenAI Service: The enterprise API for GPT-4 and other OpenAI models, hosted on Azure. Microsoft signs a BAA for Azure customers under a covered entity agreement. This allows custom AI development on PHI with proper architecture. Pricing is consumption-based.
  • AWS HealthLake: Amazon's HIPAA-eligible service for storing, transforming, and querying health data in FHIR format. AWS signs a BAA for HealthLake. Useful for analytics and population health work, not a consumer-facing tool.
  • Google Cloud Healthcare API: Google's HIPAA-compliant API for working with FHIR, HL7v2, and DICOM data. Google signs a BAA for Google Cloud customers. Used for building custom healthcare AI applications.
  • Suki AI: Ambient documentation platform with BAA available. Integrates with Epic, Cerner, and Athenahealth. Pricing around $300–$500 per provider per month.
  • Nabla Copilot: Ambient documentation with BAA available. Supports 15+ languages. Pricing is subscription-based per clinician.
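Both AWS HealthLake and the Google Cloud Healthcare API work with FHIR resources, so it helps to know what one looks like. A minimal, entirely synthetic FHIR R4 Patient resource (test data only, no real PHI) can be sketched as:

```python
import json

# Synthetic FHIR R4 Patient resource -- placeholder values, no real PHI.
patient = {
    "resourceType": "Patient",
    "id": "example-001",
    "name": [{"use": "official", "family": "Doe", "given": ["Jane"]}],
    "gender": "female",
    "birthDate": "1958-04-12",  # synthetic date
}

# Services like HealthLake ingest resources as NDJSON bundles or via
# FHIR REST endpoints; either way the wire format is JSON like this.
payload = json.dumps(patient)
print(payload)
```

Field names here follow the published FHIR R4 Patient schema; a real ingestion pipeline would validate resources against that schema before upload.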

Tools That Are NOT HIPAA-Safe for PHI

The following tools, in their consumer and standard API forms, must not be used with PHI:

  • ChatGPT (free tier and Plus): OpenAI does not sign BAAs for consumer ChatGPT accounts. Data entered into ChatGPT may be used to improve OpenAI's models by default. Do not paste patient notes or include patient names, dates of birth, or any other identifying information.
  • Claude (free tier and Pro): Anthropic does not sign BAAs for standard Claude accounts. Same restriction applies. (Note: Claude is available via AWS Bedrock and Azure, where BAA coverage may apply — check with your legal team.)
  • Google Gemini (consumer): The consumer version of Gemini at gemini.google.com does not carry a BAA. Gemini within Google Workspace may have different terms depending on your enterprise agreement — verify before use.
  • ChatGPT API (without enterprise agreement): The standard OpenAI API does not include a BAA. The ChatGPT Enterprise plan does include a BAA and data privacy protections.

When you CAN use these tools: If you completely de-identify the information — removing all 18 HIPAA Safe Harbor identifiers, including names, all dates (except year) directly related to an individual, geographic subdivisions smaller than a state, phone numbers, and more — you can use consumer AI tools for tasks like drafting educational content, exploring research questions, or practicing clinical reasoning with hypothetical cases.
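As a last-line check before pasting de-identified text into a consumer tool, a few of the most machine-recognizable identifier formats can be caught with pattern matching. This sketch is illustrative only and is nowhere near sufficient for Safe Harbor de-identification; names, MRNs, and free-text geography still require human review:

```python
import re

# Illustrative patterns for a few of the 18 HIPAA identifiers.
# A regex pass like this is NOT a substitute for proper de-identification;
# treat it as a last-line sanity check, not a compliance control.
PATTERNS = [
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),   # phone numbers
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),           # SSNs
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),    # slash-style dates
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),   # email addresses
]

def flag_identifiers(text: str) -> str:
    """Redact a handful of obvious identifier formats from free text."""
    for pattern, label in PATTERNS:
        text = pattern.sub(label, text)
    return text

print(flag_identifiers("Seen on 03/14/2024, call 555-867-5309."))
# Seen on [DATE], call [PHONE].
```

Anything a pattern cannot see (a patient's name in a sentence, a rare diagnosis that is identifying on its own) is exactly why human review remains mandatory.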


Use Case 1: Ambient Documentation with Nuance DAX Copilot

Nuance DAX Copilot works as a mobile app or integrated widget inside your EHR. Here is a typical workflow:

Setup (one-time):

  1. Your organization signs the enterprise agreement and BAA with Microsoft/Nuance.
  2. IT configures the EHR integration (Epic or Cerner).
  3. You download the DAX app on your phone and log in with your organization credentials.

Per-encounter workflow:

  1. Open the DAX app before entering the exam room and select the encounter type (office visit, telehealth, procedure note, etc.).
  2. Press record. DAX listens to the ambient conversation — you do not need to dictate or narrate directly to the device.
  3. See the patient normally. The AI is processing in the background.
  4. After the encounter, a structured note draft appears in your EHR within 60–90 seconds.
  5. Review, edit if necessary, and sign.

Physicians using DAX report an average time savings of 7 minutes per note. Across 20 patient encounters per day, that is over 2 hours returned to direct patient care or personal time.


Use Case 2: Drafting Patient Education Materials with De-Identified Prompts

You can use ChatGPT, Claude, or Gemini to draft patient education content without any HIPAA risk — as long as you never include PHI. Here is an example workflow:

Safe prompt approach:

"Write a patient education handout for a 65-year-old patient with newly diagnosed type 2 diabetes. Include: what type 2 diabetes is, lifestyle changes (diet, exercise), medication overview (metformin), blood glucose monitoring basics, and when to call the doctor. Use a 6th-grade reading level. No jargon."

This prompt contains zero PHI. You are asking for general educational content, not clinical advice about a specific patient. Review the output, have a clinician verify clinical accuracy, and customize it for your practice's branding.
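One way to keep such prompts PHI-free by construction is a template that accepts only general clinical parameters and never interpolates patient-specific fields. A small sketch; the template wording is illustrative:

```python
def education_prompt(condition: str, topics: list[str],
                     reading_level: str = "6th-grade") -> str:
    """Build a patient-education prompt from general parameters only.
    Nothing patient-specific (names, dates, MRNs) is ever interpolated."""
    topic_list = ", ".join(topics)
    return (
        f"Write a patient education handout about {condition}. "
        f"Include: {topic_list}. "
        f"Use a {reading_level} reading level. No jargon."
    )

prompt = education_prompt(
    "newly diagnosed type 2 diabetes",
    ["what type 2 diabetes is", "diet and exercise changes",
     "metformin overview", "blood glucose monitoring basics"],
)
print(prompt)
```

A fixed template like this also standardizes output across your practice, which makes the clinician-review step faster.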


Use Case 3: Literature Review and Evidence Synthesis

For staying current with research, a practical workflow combines PubMed with a consumer AI:

  1. Search PubMed for your topic and identify 5–10 relevant abstracts.
  2. Copy the abstracts (no PHI involved — these are published papers).
  3. Paste into ChatGPT or Claude with a prompt like: "Summarize the key findings from these abstracts and identify any conflicting results or methodological limitations."
  4. Use the AI summary as a starting point, then read the full papers for any findings you plan to act on.

This workflow does not replace rigorous evidence review, but it can compress a 2-hour literature scan into 20 minutes. Always verify AI-generated summaries against the original papers before applying findings clinically.
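For step 4, it helps to keep each abstract's PubMed ID attached so every AI-summarized claim can be traced back to its source paper. A small sketch, assuming abstracts are pasted with the "PMID: 12345678" lines that PubMed includes in its abstract display (the IDs below are placeholders):

```python
import re

# PubMed's abstract view includes a line like "PMID: 12345678".
PMID_RE = re.compile(r"PMID:\s*(\d{7,8})")

def extract_pmids(pasted_text: str) -> list[str]:
    """Pull PubMed IDs out of pasted abstracts so each AI-generated
    summary point can be checked against an original paper."""
    return PMID_RE.findall(pasted_text)

# Placeholder abstracts with placeholder PMIDs.
abstracts = """
Metformin vs lifestyle intervention ... PMID: 12345678
GLP-1 receptor agonists in T2DM ... PMID: 87654321
"""
print(extract_pmids(abstracts))  # ['12345678', '87654321']
```

Keeping this ID list beside the AI summary makes the "read the full papers" step concrete: every finding you plan to act on maps to a specific PMID.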


What AI Cannot Do

Be clear-eyed about AI's limitations in clinical settings:

  • AI cannot diagnose. Clinical decision support tools can surface differentials, but the diagnosis belongs to the licensed clinician. Outside a handful of narrow, FDA-authorized autonomous tools (such as diabetic retinopathy screening), no AI tool is cleared to diagnose without physician oversight.
  • AI cannot replace clinical judgment. An AI note draft may miss the nuance you observed in a patient's affect, gait, or non-verbal communication. Always review AI-generated notes as a starting draft, not a finished product.
  • AI cannot access real-time patient data without EHR integration. A consumer chatbot has no access to your patient's chart, labs, or medication list. It is working from what you tell it, nothing more.
  • AI can hallucinate. Language models can generate plausible-sounding but incorrect information — including wrong drug dosages, non-existent studies, or incorrect diagnostic criteria. Never use AI output for clinical decisions without verification.

Red Flags: What to Watch For

When evaluating any AI tool for clinical use:

  • No BAA available. This is a hard stop. No exceptions for PHI.
  • Vague data use policies. If the vendor cannot clearly state whether your data is used for model training, assume it is.
  • No SOC 2 Type II certification. This is the baseline security audit for software handling sensitive data.
  • Consumer-grade product pitched for clinical use. A general-purpose chatbot is not a clinical tool, regardless of how the sales pitch frames it.

Healthcare AI Tools: Quick Reference

Tool | HIPAA Status | BAA Available | Primary Use Case | Approximate Cost
Nuance DAX Copilot | Compliant | Yes | Ambient documentation | $500–$700/provider/month
Suki AI | Compliant | Yes | Ambient documentation | $300–$500/provider/month
Nabla Copilot | Compliant | Yes | Ambient documentation | Contact for pricing
Microsoft Azure OpenAI | Compliant (with setup) | Yes (Azure) | Custom AI on PHI | Consumption-based
AWS HealthLake | Compliant | Yes (AWS) | Health data analytics | Consumption-based
Google Cloud Healthcare API | Compliant | Yes (GCP) | FHIR/DICOM data apps | Consumption-based
ChatGPT Enterprise | Compliant | Yes | General AI assistant | $30/user/month
ChatGPT (free/Plus) | Not compliant for PHI | No | De-identified tasks only | Free / $20/month
Claude (free/Pro) | Not compliant for PHI | No | De-identified tasks only | Free / $20/month
Gemini (consumer) | Not compliant for PHI | No | De-identified tasks only | Free

Getting Started

The lowest-risk entry point for most physicians is ambient documentation. If your health system uses Epic or Cerner, ask your IT or informatics team whether DAX Copilot or Suki is already contracted. Many health systems have enterprise agreements in place that individual physicians do not know about.

If your system does not have an ambient documentation tool, you can start with de-identified AI use immediately — drafting patient education materials, summarizing literature, or preparing for case conferences — using any consumer AI tool. Keep PHI out of the prompt, and you are within HIPAA boundaries.

AI will not replace the physician. But the physician who knows how to use AI safely and effectively will have a significant advantage over the one who does not.