Is ChatGPT HIPAA Compliant? Key Facts for Healthcare Pros

Learn whether ChatGPT meets HIPAA standards and what healthcare professionals must know about patient data privacy and security.

When it comes to patient data and privacy, the stakes of using AI tools are high. Many healthcare professionals are asking: is ChatGPT HIPAA compliant?

ChatGPT is not HIPAA compliant because OpenAI does not sign a Business Associate Agreement and retains user inputs, creating unacceptable risks for handling PHI.

This article explores what HIPAA compliance means, how ChatGPT handles sensitive data, and what healthcare organizations need to consider before using AI tools.

Understanding HIPAA and its core requirements

HIPAA, the Health Insurance Portability and Accountability Act, establishes national standards for protecting patient health information in the United States. Its rules are designed to safeguard patient trust and ensure that health data remains confidential and secure.

The Privacy Rule

The Privacy Rule protects all forms of Protected Health Information (PHI), which includes any information that can identify a patient and relates to their health or healthcare services. Patients have rights over their data, and healthcare providers must limit the use and disclosure of PHI to what is necessary for treatment, payment, and healthcare operations.

The Security Rule

Focusing on electronic PHI (ePHI), the Security Rule requires healthcare organizations to implement strong safeguards. These include administrative policies, physical protections, and technical controls such as encryption, access restrictions, and audit trails. These measures are essential to ensure that only authorized individuals can access sensitive information.

The Breach Notification Rule

When a data breach occurs, HIPAA requires transparency. The Breach Notification Rule mandates that affected individuals be notified within 60 days of the breach's discovery; for breaches affecting 500 or more people, the Department of Health and Human Services and, in some cases, the media must also be notified. This ensures accountability and a prompt response to security incidents.

How ChatGPT handles data today

ChatGPT, developed by OpenAI, transforms how people interact with information, but its data handling practices raise important questions for healthcare.

  • Prompt and response retention: ChatGPT stores user inputs and its replies. For most users, this data may be kept indefinitely unless manually deleted.

  • Enterprise controls: Enterprise customers can opt out of having their data used for model training, and retained data is typically deleted after 30 days.

  • Security measures: Data is protected during transmission and storage using encryption protocols like AES-256 and TLS 1.2. Access controls are also in place to limit who can view stored information.

  • No Business Associate Agreement (BAA): OpenAI does not sign Business Associate Agreements with healthcare organizations as of mid-2025. Without a BAA, using ChatGPT to process PHI is not compliant with HIPAA requirements.

HIPAA compliance is not optional; it is a legal obligation. ChatGPT’s current limitations create several challenges for healthcare organizations:

  • No BAA: Without a signed BAA, healthcare providers cannot use ChatGPT to process, transmit, or store PHI without violating HIPAA.

  • Potential for breaches: Even with encryption, storing sensitive data outside direct organizational control increases the risk of unauthorized access.

  • AI inaccuracies: ChatGPT can produce information that appears accurate but is actually incorrect, a phenomenon known as “hallucination.” In clinical settings, this can lead to serious errors.

  • Court-ordered data retention: Courts can require OpenAI to preserve stored prompts and responses and to disclose them in legal proceedings, complicating compliance further.

Weighing the risks and limitations

Using ChatGPT for tasks involving PHI introduces significant risks:

  • Privacy and security risks: Any PHI entered into ChatGPT could be exposed, whether intentionally or accidentally.

  • Data de-identification: Removing identifying details from data—known as de-identification—is essential before using AI tools. However, this process can reduce the usefulness of AI for personalized healthcare tasks.

  • Clinical judgment: Overreliance on AI can diminish clinicians’ critical thinking, especially when the AI provides incorrect or misleading information.

  • Bias and fairness: AI models may reflect or amplify existing biases in data, potentially worsening healthcare disparities instead of reducing them.

Best practices for safe AI use in healthcare

Healthcare organizations should proceed carefully when considering AI tools. The following practices help protect both patients and providers:

  • Never share PHI unless a BAA is in place.

  • De-identify or anonymize all data before entering it into AI tools. De-identification means removing names, dates, and other details that could identify a patient.

  • Use strong encryption and strict access controls at every stage.

  • Develop clear policies, train staff on AI risks, and audit regularly.

  • Stay informed about legal changes and updates to OpenAI’s policies.
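To make the de-identification step concrete, here is a minimal sketch in Python of pattern-based redaction run before any text leaves your systems. The patterns and the `deidentify` helper are illustrative assumptions, not a complete implementation of HIPAA's Safe Harbor method, which requires removing all 18 identifier categories; production systems should use a vetted de-identification tool.

```python
import re

# Hypothetical patterns covering a few common identifier formats.
# NOT exhaustive: Safe Harbor de-identification covers 18 identifier
# categories (names, dates, geographic data, account numbers, etc.).
PATTERNS = {
    "NAME": re.compile(r"\b(?:Mr|Mrs|Ms|Dr)\.\s+[A-Z][a-z]+\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE),
}

def deidentify(text: str) -> str:
    """Replace each matched identifier with a bracketed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Dr. Smith saw the patient on 03/14/2025. MRN: 4471023, phone 555-867-5309."
print(deidentify(note))
# -> [NAME] saw the patient on [DATE]. [MRN], phone [PHONE].
```

Regex redaction is a pre-filter, not a guarantee: free-text notes contain identifiers in forms no pattern list anticipates, which is why a signed BAA remains the only compliant basis for sending PHI to a vendor.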

Best Practice              | Description
Avoid PHI sharing          | Do not input PHI into ChatGPT without a BAA
De-identify data           | Remove names, dates, and other identifiers
Use strong encryption      | Ensure data is encrypted in transit and at rest
Train staff                | Educate teams on AI risks and compliance
Conduct audits             | Regularly review AI use and data security
Monitor legal developments | Stay informed on HIPAA rules and vendor updates

Conclusion

ChatGPT is not HIPAA compliant and should not be used to process PHI in healthcare settings. Until OpenAI offers a BAA and addresses current compliance gaps, healthcare professionals must prioritize patient privacy and staff education, and should consider alternative AI solutions built for HIPAA compliance. Protecting patient trust and meeting legal obligations must remain the top priority.

The Daily Prompt is brought to you by Prompt Perfect…

We use Prompt Perfect every day to craft clear, detailed, and optimized prompts for The Daily Prompt.

It ensures our prompts are structured, refined, and ready to generate the best AI responses possible.

If you want the same seamless experience, try the Unlimited Plan free for three days and see how much better your prompts can be with just one click.

Try it now and experience the difference.

The Prompt Perfect Chrome Extension is available exclusively in the Google Chrome browser; it will not work in Edge, Brave, or other browsers.