Under HIPAA, protected health information (PHI) is any individually identifiable health information created, held, or transmitted by covered entities: healthcare providers, health plans, and healthcare clearinghouses. PHI can be electronic, on paper, or spoken, and it covers patients’ health conditions, the care they receive, and payments for health services.
AI tools in healthcare, such as clinical decision support systems, diagnostic imaging tools, and administrative software, often need access to PHI to work well. For example, AI can review health records to support diagnosis or manage appointment schedules. But HIPAA places strict limits on how this data can be used, especially for purposes other than direct patient care.
Using PHI for reasons other than treatment, payment, or healthcare operations (TPO) is considered secondary use, and training AI models typically falls into this category: the PHI is used to improve software that may assist future care or research, not to treat the patient whose data it is.
Under the HIPAA Privacy Rule, healthcare providers must obtain clear patient permission before using PHI for AI purposes beyond direct care. That permission must be written, specific, and explain how the data will be used and protected.
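As a concrete illustration of the TPO boundary, here is a minimal sketch of how a records system might gate PHI release by purpose; all names and fields are hypothetical rather than drawn from any real system.

```python
from dataclasses import dataclass

# Uses permitted without patient authorization under the Privacy Rule (TPO).
PERMITTED_WITHOUT_AUTHORIZATION = {"treatment", "payment", "healthcare_operations"}

@dataclass
class AccessRequest:
    requester: str
    purpose: str                          # e.g. "treatment" or "ai_model_training"
    authorization_id: str | None = None   # link to a signed patient authorization

def may_release_phi(request: AccessRequest, valid_authorizations: set[str]) -> bool:
    """Permit TPO uses; any other purpose needs a matching patient authorization."""
    if request.purpose in PERMITTED_WITHOUT_AUTHORIZATION:
        return True
    # Secondary uses such as AI model training fall through to this check.
    return request.authorization_id in valid_authorizations

# An AI-training request with no signed authorization is denied.
print(may_release_phi(AccessRequest("ml-team", "ai_model_training"), set()))  # False
```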
HIPAA draws a clear distinction between “consent” and “authorization.” Consent covers routine uses within treatment, payment, or operations and usually requires no special paperwork. Authorization is the written permission required when PHI is used for other purposes.
A valid HIPAA authorization should include:
- a specific description of the PHI to be used or disclosed
- who is authorized to disclose the information and who may receive it
- the purpose of the use or disclosure
- an expiration date or expiration event
- the patient’s signature and date, along with notice of the right to revoke
For AI training, providers must make sure the patient’s authorization explicitly covers the use of their PHI to develop AI models; without it, using PHI for AI violates HIPAA. The sketch below shows one way to represent and check such an authorization.
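To make those elements concrete, here is a minimal sketch of an authorization record and a completeness check; the field names are illustrative, not a legal form, and the purpose check is deliberately crude.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class HipaaAuthorization:
    phi_description: str        # specific description of the PHI covered
    disclosing_party: str       # who may use or disclose the PHI
    recipient: str              # who may receive it
    purpose: str                # e.g. "development of a diagnostic AI model"
    expiration: date            # expiration date (or a dated expiration event)
    signed_on: Optional[date]   # patient's signature date; None means unsigned

def is_valid_for_ai_training(auth: HipaaAuthorization, today: date) -> bool:
    """All core elements present, signed, unexpired, and explicit about AI use."""
    core = [auth.phi_description, auth.disclosing_party, auth.recipient, auth.purpose]
    return (all(core)
            and auth.signed_on is not None
            and auth.expiration >= today
            # Deliberately crude purpose check; real review needs human judgment.
            and "ai" in auth.purpose.lower())
```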
AI adoption among physicians grew sharply in 2024, but the year also brought major data breaches affecting millions of people. One breach exposed roughly 190 million records due to weaknesses in data handling; another, traced to an AI vendor’s system, affected nearly half a million patients.
Breaking HIPAA rules about using PHI for AI can cause many problems:
- civil monetary penalties and, in serious cases, criminal liability
- mandatory breach notifications and the cost of investigation and remediation
- disruption of IT systems that delays appointments and treatment
- loss of patient trust and lasting reputational damage
Because of these risks, healthcare organizations must be very careful when using PHI for AI.
HIPAA requires covered entities to sign contracts called Business Associate Agreements (BAAs) with outside companies that handle PHI on their behalf, such as AI vendors. These contracts obligate vendors to follow HIPAA’s privacy and security rules.
Important parts of BAAs for AI include:
- limits restricting PHI use to the purposes the agreement permits
- a prohibition on repurposing PHI, such as for model training, without patient authorization
- required administrative, technical, and physical safeguards tied to recognized standards such as NIST
- prompt breach notification to the covered entity
Picking AI vendors who meet these rules is important to protect patient data and follow the law.
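To make vendor review repeatable, the points above can be encoded as a simple checklist. This sketch is purely illustrative; the keys and wording are assumptions, not formal contract language.

```python
# Illustrative due-diligence checklist mirroring the BAA points above.
VENDOR_CHECKLIST = {
    "signed_baa": "A BAA is signed before any PHI is shared",
    "use_limits": "PHI use is limited to permitted purposes (no training without authorization)",
    "safeguards": "Administrative, technical, and physical safeguards are documented",
    "security_standards": "Vendor attests to recognized standards (e.g., NIST)",
    "breach_notice": "Vendor commits to prompt breach notification",
}

def review_vendor(answers: dict[str, bool]) -> list[str]:
    """Return the checklist items a vendor fails; an empty list means all pass."""
    return [desc for key, desc in VENDOR_CHECKLIST.items() if not answers.get(key, False)]

# Example: a vendor that has everything except a breach-notification commitment.
print(review_vendor({"signed_baa": True, "use_limits": True,
                     "safeguards": True, "security_standards": True}))
```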
One often-missed step in HIPAA compliance is training staff on AI tools and PHI safety. Doctors, office workers, and IT staff all need to understand the risks of using AI with patient data.
Some common issues include:
- staff adopting unapproved AI tools (‘shadow IT’) that have no BAA and are not HIPAA-compliant
- pasting or dictating PHI into consumer AI services
- weak authentication, such as skipping multi-factor authentication, on systems that hold PHI
Regular training focused on AI and data safety helps prevent these problems; one simple technical backstop is sketched below.
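For instance, a practice might maintain an allowlist of approved, BAA-covered tools and block anything else from touching PHI. The tool names below are hypothetical.

```python
# Hypothetical allowlist of tools the practice has approved and covered under a BAA.
APPROVED_AI_TOOLS = {"approved-scribe", "approved-scheduler"}

def vet_tool_request(tool_name: str, will_handle_phi: bool) -> str:
    """Block 'shadow IT': unapproved tools must never process PHI."""
    if tool_name in APPROVED_AI_TOOLS:
        return "allowed"
    if will_handle_phi:
        return "blocked: unapproved tool would handle PHI"
    return "flag for IT review"

print(vet_tool_request("consumer-chatbot", will_handle_phi=True))
# -> "blocked: unapproved tool would handle PHI"
```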
A study in the International Journal of Medical Informatics identified several barriers and facilitators to obtaining patient consent for AI use of health data. Beyond formal consent, the study emphasized that earning public trust is essential to using health data in AI, and that clear communication helps medical practices build that trust.
Medical office tasks like scheduling and patient communication increasingly rely on AI automation, and companies like Simbo AI build AI-powered phone systems for this front-office work.
Because these systems handle patient data, including PHI, they must follow HIPAA rules. They might verify patient identities, manage appointments, or provide basic health information.
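As one illustration of HIPAA-aware design in such a system, the sketch below redacts common PHI patterns from a call transcript before it reaches application logs. The patterns and function are simplified assumptions, not Simbo AI’s implementation; a production system would use a vetted de-identification service.

```python
import re

# Simplified PHI patterns for illustration only.
PHI_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),              # Social Security numbers
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DOB]"),              # dates such as birth dates
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),  # phone numbers
]

def redact_for_logging(transcript: str) -> str:
    """Strip obvious PHI from a call transcript before it reaches non-secure logs."""
    for pattern, placeholder in PHI_PATTERNS:
        transcript = pattern.sub(placeholder, transcript)
    return transcript

print(redact_for_logging("Caller DOB 04/12/1980, phone 555-123-4567."))
# -> "Caller DOB [DOB], phone [PHONE]."
```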
Medical office leaders and IT staff should:
- vet vendors carefully and require strong BAAs before sharing any PHI
- mandate recognized cybersecurity standards (e.g., NIST protocols) and prompt breach notification
- run ongoing employee training on AI risks and PHI handling
- establish a governance framework that balances AI benefits with privacy compliance
When set up properly, AI tools can reduce PHI-handling errors and lower the breach risks that come with manual processing.
Medical office managers, owners, and IT staff in the U.S. face pressure to adopt AI for better care and efficiency, but they must follow HIPAA rules carefully, especially when PHI is used for AI training.
Main points to remember:
- PHI may be used without authorization only for treatment, payment, or healthcare operations (TPO)
- training AI models is a secondary use and requires explicit written patient authorization
- BAAs with AI vendors must restrict PHI use, require safeguards, and mandate breach notification
- staff training and careful vendor selection are ongoing obligations, not one-time tasks
As AI use grows in healthcare, understanding these rules helps keep patient information safe and respects privacy rights.
The primary categories of healthcare AI tools include Clinical Decision Support Systems (CDSS), diagnostic imaging tools, and administrative automation. Each category processes PHI, creating privacy risks such as improper disclosure and secondary data use.
BAAs legally bind AI vendors to use PHI only for permitted purposes, require safeguarding patient data, and mandate timely breach notifications. This ensures vendors maintain HIPAA compliance when receiving, maintaining, or transmitting health information.
PHI can be shared without patient authorization only for treatment, payment, or healthcare operations (TPO). Any other use, including marketing or AI model training involving PHI, requires explicit patient consent to avoid violations.
Breaches expose sensitive patient data, disrupt IT systems, reduce availability and quality of care by delaying appointments and treatments, and risk patient safety by restricting access to critical PHI.
Careful vendor selection is essential to prevent security breaches and legal liability. It includes requiring BAAs prohibiting unauthorized data use, enforcing strong cybersecurity standards (e.g., NIST protocols), and mandating prompt breach notifications.
Employees must understand AI-specific threats like unauthorized software (‘shadow IT’) and PHI misuse. Training enforces use of approved HIPAA-compliant tools, multi-factor authentication, and security protocols to reduce breaches and unauthorized data exposure.
Covered entities and business associates must ensure PHI confidentiality, integrity, and availability by identifying threats, preventing unlawful disclosure, and ensuring employee compliance with HIPAA law.
Secondary use of PHI for AI model training requires explicit patient authorization; without it, the use or disclosure is unauthorized and violates HIPAA. This also bars vendors from repurposing data beyond TPO functions.
Providers should enforce rigorous vendor selection with strong BAAs, mandate cybersecurity standards, conduct ongoing employee training, and establish governance frameworks to balance AI benefits with privacy compliance.
Short breach notification timelines enable quick response to incidents, limiting lateral movement of threats within the network, minimizing disruptions to care delivery, and protecting PHI confidentiality, integrity, and availability.
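As a rough illustration, the sketch below computes the outer notification deadlines after a breach is discovered. HIPAA’s Breach Notification Rule caps individual notice at 60 calendar days after discovery; the five-day vendor-to-covered-entity window is a hypothetical contractual term, since BAAs often demand faster notice than the regulation alone.

```python
from datetime import date, timedelta

# 60 days is the regulatory outer bound for individual notice after discovery.
# The 5-day vendor window is an illustrative BAA term, not a regulatory figure.
HIPAA_INDIVIDUAL_NOTICE_DAYS = 60
EXAMPLE_BAA_VENDOR_NOTICE_DAYS = 5

def notification_deadlines(discovered: date) -> dict[str, date]:
    """Latest allowable notification dates once a breach is discovered."""
    return {
        "vendor_notifies_covered_entity":
            discovered + timedelta(days=EXAMPLE_BAA_VENDOR_NOTICE_DAYS),
        "covered_entity_notifies_individuals":
            discovered + timedelta(days=HIPAA_INDIVIDUAL_NOTICE_DAYS),
    }

print(notification_deadlines(date(2024, 3, 1)))
# {'vendor_notifies_covered_entity': datetime.date(2024, 3, 6),
#  'covered_entity_notifies_individuals': datetime.date(2024, 4, 30)}
```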