AI systems in healthcare rely on complex software that handles sensitive patient data, including protected health information (PHI). As a result, AI platforms can introduce new weaknesses that traditional cybersecurity measures may not fully cover. AI models, for example, can be manipulated through prompt injection attacks, and relying on third-party AI service providers adds risk to the software supply chain.
Medical practices face a growing number of cyberattacks aimed at AI systems that store or process patient data. Industry reports project that software supply chain attacks in healthcare will triple between 2021 and 2025. These attacks can disrupt operations, expose patient data, and put patient safety at risk. In 2022, the average cost of a healthcare data breach was $9.4 million, underscoring how expensive cybersecurity failures can be.
The specific risks AI introduces in healthcare mean organizations need to rethink their cybersecurity plans and address AI-related threats directly. AI software can contain weaknesses that attackers may exploit, and AI systems often share data or operate under shared responsibility models with vendors, which makes risk harder to manage. Healthcare organizations must therefore fold their AI projects into broader cybersecurity programs that include ongoing risk assessment and monitoring.
Regulation of AI in healthcare is still in its early stages. Few laws address AI directly, but existing rules such as HIPAA apply to AI systems that handle PHI. The Department of Health and Human Services (HHS) has set up an AI Task Force to guide AI regulation, with goals that include transparency, fairness, non-discrimination, sound governance, and stronger cybersecurity by 2025.
In 2023, the National Institute of Standards and Technology (NIST) released its AI Risk Management Framework (AI RMF). The framework helps healthcare organizations identify, analyze, and reduce AI risks in line with federal standards. Executive Order No. 14110 also directs healthcare organizations to keep data private and remain accountable when using AI.
The Federal Trade Commission (FTC) also oversees AI, particularly unfair or deceptive handling of personal information. Under the FTC Act, healthcare providers can be held responsible if AI tools misuse patient data. Proposed legislation may require providers to explain how AI affects patient care, making transparency even more important.
In short, healthcare administrators need to keep up with AI rules and build compliance plans around them. That means inventorying current AI uses, updating policies as new laws take effect, and improving how they govern data and AI-driven decisions.
HIPAA limits how PHI can be used and disclosed, and covered entities and their business associates must maintain privacy and security safeguards. AI tools used in clinical and administrative work must follow these privacy rules. Organizations need to make sure AI systems do not expose PHI to unauthorized parties while processing, storing, or transmitting data.
Good HIPAA compliance means mapping how PHI flows, monitoring who accesses AI data, and controlling permissions based on user roles and needs. The challenge is adopting AI without creating accidental data leaks or security gaps. Healthcare administrators should work with IT and compliance teams to vet AI vendors for HIPAA compliance and security.
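To make the access-control idea concrete, the sketch below shows one simple way an IT team might enforce role-based permissions and log every access attempt around an AI tool. The role names, permission strings, and log fields are illustrative assumptions, not a mandated HIPAA control set.

```python
# Minimal, illustrative sketch of role-based access control with audit
# logging for an AI tool that touches PHI. Roles and permissions are
# hypothetical placeholders.
from dataclasses import dataclass, field
from datetime import datetime, timezone

ROLE_PERMISSIONS = {
    "front_desk": {"appointments:read", "appointments:write"},
    "clinician": {"appointments:read", "phi:read"},
    "it_admin": {"audit_log:read"},
}

@dataclass
class AccessAuditLog:
    entries: list = field(default_factory=list)

    def record(self, user: str, role: str, action: str, allowed: bool) -> None:
        # Every access attempt is logged, permitted or not, so compliance
        # teams can review who touched AI-processed data and when.
        self.entries.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "role": role,
            "action": action,
            "allowed": allowed,
        })

def check_access(audit: AccessAuditLog, user: str, role: str, action: str) -> bool:
    """Allow an action only if the user's role grants it, and log the attempt."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit.record(user, role, action, allowed)
    return allowed

if __name__ == "__main__":
    audit = AccessAuditLog()
    print(check_access(audit, "jdoe", "front_desk", "appointments:write"))  # True
    print(check_access(audit, "jdoe", "front_desk", "phi:read"))            # False, still logged
    for entry in audit.entries:
        print(entry)
```

In a real deployment these checks would sit in front of the AI system's data store or API gateway, and the audit trail would feed the access-monitoring and PHI flow-mapping work described above.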
HITRUST is an organization known for its rigorous cybersecurity and compliance standards, and it has developed AI risk assessments and security certification programs. HITRUST's AI Risk Management Assessment helps healthcare organizations evaluate their AI risk controls and find gaps, while the AI Security Certification attests to how well AI platforms are secured. In 2024, 99.41% of HITRUST-certified environments reported no data breaches. Using HITRUST methods helps organizations demonstrate compliance and manage AI risks effectively.
Managing cybersecurity risk around AI in healthcare requires a clear, comprehensive approach. Recent best practices recommend a six-step risk management process for medical organizations.
Other sound practices include running tabletop exercises, scenario planning, maintaining business continuity during disruptions, and seeking advice from outside experts in AI and healthcare regulation.
Front-office AI tools, such as automated phone systems and intelligent answering services, are changing healthcare workflows. Companies such as Simbo AI offer these tools to handle patient calls, appointment scheduling, and information requests.
These AI-powered systems help front offices operate more efficiently and reduce staff workload, but they also raise specific security and compliance concerns. Because automated phone systems handle sensitive health information, voice data and related records must be protected from unauthorized access.
Medical offices using AI phone systems must make sure these tools comply with HIPAA and other applicable rules, from vetting vendors to safeguarding voice data in storage and in transit.
With good management, AI workflows can improve patient care, cut wait times, and help staff work more efficiently. Weak security, however, can lead to large fines or loss of patient trust.
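As one simple illustration of the kind of safeguard involved, the sketch below scrubs a few obvious identifiers from a call transcript before it is stored or shared with a third-party AI service. The regex patterns are hypothetical and far from exhaustive; production systems would pair vetted de-identification tooling with encryption in transit and at rest and a business associate agreement with the vendor.

```python
# Illustrative sketch: scrubbing obvious identifiers from an automated
# phone-system transcript before storage. These simplified patterns would
# not catch all PHI and are for demonstration only.
import re

REDACTION_PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "DOB": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def redact_transcript(text: str) -> str:
    """Replace matched identifiers with labeled placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    call = "Patient called from 208-555-0147, DOB 4/12/1986, to reschedule."
    print(redact_transcript(call))
    # Patient called from [PHONE REDACTED], DOB [DOB REDACTED], to reschedule.
```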
HITRUST’s AI Assurance Working Group and certification programs focus on the security requirements of AI and the risks of automation. Healthcare organizations using AI in administrative work can apply these frameworks to strengthen protections and demonstrate compliance to regulators and patients.
Good AI risk management in healthcare also means following established cybersecurity frameworks such as the NIST Cybersecurity Framework (CSF). The NIST CSF 2.0 update emphasizes strong incident response and offers guidance on best practices, including for AI. The framework's contingency planning, threat detection, and response functions are needed to react quickly to AI-related security incidents.
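As one deliberately simplified illustration of the Detect and Respond idea applied to AI systems, the sketch below flags accounts with unusually high PHI access volume and hands them to a response step. The threshold, event format, and response action are assumptions for illustration, not CSF requirements.

```python
# Hypothetical Detect/Respond sketch inspired by the NIST CSF: flag unusually
# high PHI record access by any single account interacting with an AI system,
# then hand the event to an incident-response step.
from collections import Counter

ALERT_THRESHOLD = 50  # records per hour per account; a practice would tune this

def detect_anomalies(access_events: list) -> list:
    """Return accounts whose hourly PHI access count exceeds the threshold."""
    counts = Counter(event["account"] for event in access_events)
    return [account for account, n in counts.items() if n > ALERT_THRESHOLD]

def respond(flagged_accounts: list) -> None:
    # Placeholder response step: in practice this would open a ticket,
    # notify the security team, and trigger the documented incident playbook.
    for account in flagged_accounts:
        print(f"ALERT: unusual PHI access volume by {account}; starting incident response")

if __name__ == "__main__":
    events = [{"account": "ai-scheduler-svc"}] * 120 + [{"account": "jdoe"}] * 3
    respond(detect_anomalies(events))
```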
Boise State University uses the NIST CSF to protect controlled data, including PHI, showing how healthcare organizations affiliated with universities can apply the framework to improve cybersecurity. Health systems and medical practices can likewise apply NIST guidance to improve governance, risk monitoring, and the handling of AI-related incidents.
HITRUST builds on the NIST CSF by incorporating additional standards, such as ISO, tailored to healthcare needs. Large organizations including UPMC and Snowflake rely on HITRUST certification to confirm the security and compliance of their IT systems, and positive feedback from leaders at these organizations shows how useful the frameworks are for protecting patient data and meeting HIPAA requirements.
Leadership plays a key role in managing AI cybersecurity risk and compliance. Senior leaders, IT managers, and compliance officers must work closely together to establish governance that oversees AI use, risk assessments, and mitigation planning. Clear policies and assigned responsibilities help prevent security gaps and rule violations.
Ongoing education about AI risks, emerging cyber threats, and changing regulations is vital. As AI takes on more healthcare tasks, training should reach beyond IT to the clinicians and office staff who use AI tools.
Healthcare providers should watch for updates from HHS, the FTC, and other regulators about AI rules. Keeping compliance plans flexible helps organizations prepare for future legislation and enforcement related to AI.
Using AI in healthcare brings benefits but also complex security and compliance challenges. Medical practice administrators, owners, and IT staff in the United States should prioritize structured risk management, follow frameworks such as HITRUST and NIST, and maintain strong oversight to protect patient data and preserve trust while using AI tools like front-office automation.
AI regulations in healthcare are in early stages, with limited laws. However, executive orders and emerging legislation are shaping compliance standards for healthcare entities.
The HHS AI Task Force will oversee AI regulation according to executive order principles, aimed at managing AI-related legal risks in healthcare by 2025.
HIPAA restricts the use and disclosure of protected health information (PHI), requiring healthcare entities to ensure that AI tools comply with existing privacy standards.
The Executive Order emphasizes confidentiality, transparency, governance, and non-discrimination, and it addresses AI-enhanced cybersecurity threats.
Healthcare entities should inventory current AI use, conduct risk assessments, and integrate AI standards into their compliance programs to mitigate legal risks.
AI can introduce software vulnerabilities and can be exploited by bad actors. Compliance programs must adapt to treat AI as a significant cybersecurity risk.
NIST’s Risk Management Framework provides goals to help organizations manage AI tools’ risks and includes actionable recommendations for compliance.
Section 5 of the FTC Act may expose healthcare entities to liability for using AI in ways deemed unfair or deceptive, especially if personally identifiable information is mishandled.
Pending bills include requirements for transparency reports, mandatory compliance with NIST standards, and labeling of AI-generated content.
Healthcare entities should stay updated on AI guidance from executive orders and HHS and be ready to adapt their compliance plans accordingly.