The main challenge in using generative AI tools such as ChatGPT in healthcare is complying with HIPAA. HIPAA protects the privacy and security of patient information, especially electronic Protected Health Information (ePHI): any health information that can identify a person and is stored or transmitted electronically. Healthcare providers, health plans, and clearinghouses, known as covered entities, must make sure that any service handling ePHI on their behalf follows HIPAA's rules.
OpenAI, the company behind ChatGPT, does not sign Business Associate Agreements (BAAs) with covered entities. A BAA is the legal contract HIPAA requires between a covered entity and any third party that may encounter ePHI. Without a BAA, ChatGPT cannot lawfully process or store ePHI. Steve Alder, editor of The HIPAA Journal, notes that because OpenAI will not enter into BAAs, ChatGPT cannot be used with protected health information in healthcare.
ChatGPT can help with tasks such as summarizing text or scheduling, but because it is not HIPAA compliant it must not be given any ePHI. Its answers can also be wrong or incomplete, so healthcare workers must verify its output whenever it is used in a medical setting.
Other AI tools have been built with HIPAA in mind. Google's Med-PaLM 2, for example, supports HIPAA compliance and can be used under a signed BAA. Options such as BastionGPT and CompliantGPT meet HIPAA's requirements because their providers agree to the privacy and security protections the law demands.
Healthcare organizations face several risks if they use tools like ChatGPT with ePHI without the proper agreements and security measures in place. Those risks fall on both the organization and its patients.
One useful practice, drawn from institutions such as UC Berkeley, is to classify data carefully before it goes anywhere near an AI tool. Data classified at Protection Level P1 (public information) is generally safe to use with AI. More sensitive data, such as student records or health data protected by FERPA or HIPAA, must not be entered into AI systems unless agreements are in place that guarantee privacy, security, and confidentiality.
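To make the idea concrete, here is a minimal sketch of such a classification gate. The level names are loosely modeled on UC Berkeley's P1-P4 scheme, and the function and parameter names are hypothetical, not part of any real policy toolkit.

```python
from enum import Enum

class ProtectionLevel(Enum):
    """Hypothetical labels, loosely modeled on UC Berkeley's P1-P4 scheme."""
    P1_PUBLIC = 1      # public information: generally safe for AI tools
    P2_INTERNAL = 2
    P3_SENSITIVE = 3   # e.g., FERPA-protected student records
    P4_HIGH_RISK = 4   # e.g., HIPAA-protected ePHI

def may_send_to_ai(level: ProtectionLevel, agreement_in_place: bool) -> bool:
    """Allow P1 data freely; anything above P1 requires a protective
    agreement (such as a signed BAA for ePHI) before it reaches an AI service."""
    if level is ProtectionLevel.P1_PUBLIC:
        return True
    return agreement_in_place

# ePHI with no BAA in place: blocked.
assert may_send_to_ai(ProtectionLevel.P4_HIGH_RISK, agreement_in_place=False) is False
# Public data: allowed.
assert may_send_to_ai(ProtectionLevel.P1_PUBLIC, agreement_in_place=False) is True
```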
In the United States, a healthcare practice cannot use patient information with ChatGPT unless the tool passes a risk review and a BAA has been signed. Data that does not identify patients, called de-identified PHI, can be used, provided it is handled carefully so it cannot be traced back to anyone.
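As an illustration of the kind of scrubbing involved, the sketch below redacts a few common identifier patterns before text leaves the organization. It is deliberately minimal and would not by itself satisfy HIPAA's Safe Harbor method, which requires removing all 18 identifier categories (or an expert determination); the patterns and names here are illustrative assumptions.

```python
import re

# Illustrative patterns for a few of HIPAA's 18 Safe Harbor identifiers.
# A real de-identification pipeline must cover all 18 categories
# (names, geography, dates, contact details, record numbers, etc.)
# or rely on expert determination.
REDACTION_PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "DATE":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tokens."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Pt called 555-867-5309 on 3/14/2024 re: refill."))
# -> "Pt called [PHONE] on [DATE] re: refill."
```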
Healthcare organizations want to streamline their work while staying compliant and keeping patient communication strong. AI tools can help when used properly and safely: applied to front-office tasks such as answering phones and scheduling, AI can reduce staff workload and improve how providers connect with patients.
Simbo AI, for example, automates front-office phone tasks while observing privacy and compliance rules. Its system answers patient calls and handles common questions first, which lowers wait times, cuts mistakes, and lets staff focus on other work, all without exposing ePHI to tools that do not follow HIPAA.
To use AI safely in healthcare workflows, organizations need to:

- Conduct a risk and security review of any AI tool before deployment.
- Obtain a signed, HIPAA-compliant Business Associate Agreement from the vendor before the tool touches ePHI.
- De-identify data wherever possible and verify it cannot be traced back to a patient.
- Apply the minimum necessary standard, using or disclosing only the PHI a task actually requires.
- Train the workforce on HIPAA and safe AI use, with regular refreshers as threats and rules change.
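As a rough illustration, these checklist items could be encoded as a pre-flight check before any tool is allowed near ePHI. The sketch below uses hypothetical names (AiToolReview and its fields); it is not a real compliance framework, just a way to make the gating logic concrete.

```python
from dataclasses import dataclass

@dataclass
class AiToolReview:
    """Hypothetical record of an AI vendor assessment; field names are illustrative."""
    vendor: str
    risk_review_passed: bool
    baa_signed: bool
    staff_trained: bool

def cleared_for_ephi(review: AiToolReview) -> bool:
    """Every checklist item must hold before ePHI may touch the tool."""
    return (review.risk_review_passed
            and review.baa_signed
            and review.staff_trained)

review = AiToolReview("ExampleVendor", risk_review_passed=True,
                      baa_signed=False, staff_trained=True)
print(cleared_for_ephi(review))  # False: no signed BAA, so no ePHI
```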
IT managers and administrators must monitor these points to protect their organizations from the risks and legal exposure that come with improper AI use.
AI tools like ChatGPT are attractive because they help healthcare providers handle more paperwork and improve patient communication. But since ChatGPT is not HIPAA compliant and OpenAI does not sign BAAs, healthcare organizations in the U.S. cannot use ChatGPT for any task involving ePHI without taking on significant risk.
Other AI platforms are built for healthcare work and designed to meet HIPAA requirements. Their vendors undergo security reviews and sign BAAs, making them safer choices for AI use.
The healthcare field continues to focus on cybersecurity and data privacy. Recent studies show that data breaches arise from many factors and threats, so organizations must manage risk carefully and follow the rules. Ignoring this can bring fines, damaged patient trust, and disrupted operations.
Healthcare leaders should choose AI tools that meet HIPAA requirements and align with their data policies, and should build a culture of security through ongoing training and clear rules about AI use. Done this way, AI can deliver value without putting the organization at risk.
This article offers guidance for healthcare leaders considering AI for front-office work. Understanding the rules and how to protect patient data is key to avoiding costly problems and keeping health information safe.
Is ChatGPT HIPAA compliant?
No. OpenAI will not enter into a Business Associate Agreement with covered entities, which makes ChatGPT unsuitable for use with electronic Protected Health Information (ePHI).

What must an organization do before using an AI tool in connection with ePHI?
It must put the tool through a security review and have a signed, HIPAA-compliant Business Associate Agreement in place with the tool's provider.

Can ChatGPT be used with de-identified PHI?
Yes. Data stripped of all personal identifiers is no longer considered PHI under HIPAA and can be used with ChatGPT.

Are there generative AI tools that can be used in compliance with HIPAA?
Yes. Tools such as BastionGPT and CompliantGPT can be used in compliance with HIPAA because their providers are willing to sign Business Associate Agreements.

Why does executing HIPAA-compliant agreements matter?
It ensures that covered entities can legally share PHI with business associates and delineates each party's compliance obligations.

What are the risks of using ChatGPT with ePHI without a BAA?
Doing so can violate HIPAA regulations, leading to legal penalties and loss of patient trust.

How long does OpenAI retain data sent via its API?
OpenAI retains data sent via the API for up to 30 days for monitoring purposes and deletes it afterwards unless legally required to keep it.

Why is ongoing security training crucial?
Cyberthreats evolve, and every workforce member must stay informed to recognize and report potential attacks effectively.

What is the minimum necessary standard?
It requires that only the least amount of PHI needed to achieve a specific purpose be used or disclosed, protecting patient privacy.

Why is refresher training important?
It keeps the whole workforce up to date on changes, reducing the risk of inadvertent HIPAA violations.