AI technologies in healthcare include many uses such as tools for diagnosing diseases, helping with surgery, automating appointment scheduling, and monitoring patients remotely. These tools can make healthcare more efficient and effective, but they also bring new cybersecurity risks that are not common with older software systems.
One important risk is protecting sensitive health information called protected health information (PHI). Healthcare organizations that follow HIPAA rules must make sure any AI systems handling PHI meet strict privacy and security standards. AI adds challenges like keeping data safe from unauthorized access, storing and sending data securely, and stopping data leaks caused by AI weaknesses.
AI systems also face new threats like prompt injection attacks. In these attacks, bad users give carefully made input data to trick the AI or get it to reveal private information. These attacks target the AI’s complexity and the fact that AI decisions are not always clear.
HITRUST, a key group in healthcare cybersecurity, says it is important to check risks early and create plans to reduce them. HIPAA rules by themselves do not fully cover AI-specific problems, so it is important to use stronger security frameworks.
Another concern is that AI systems can be used as entry points for ransomware and other attacks. When AI connects with other healthcare systems, it can create new weak spots. If the AI software is not tested and watched closely, hackers might use these weak spots to access hospital networks or patient data.
Bias and fairness also matter for legal and ethical reasons. AI models trained on biased data might give unfair or wrong results. This can cause legal problems under federal laws about non-discrimination in AI. Healthcare providers must make sure AI tools are clear in how they work and that humans check the results.
Healthcare organizations in the U.S. must follow HIPAA rules to protect PHI, but these rules do not fully cover all AI issues. The Department of Health and Human Services (HHS) created an AI Task Force to develop regulations and make sure AI systems follow the law by 2025. This matches Executive Order No. 14110, which sets rules for AI like being clear about how it works, strong management, stopping discrimination, and better cybersecurity.
The National Institute of Standards and Technology (NIST) made the Risk Management Framework (RMF) for AI. This helps organizations find and manage risks that are special to AI systems. The RMF includes advice on handling risks from algorithms, detecting bias, and cybersecurity controls. Healthcare groups are encouraged to use NIST’s RMF to build their AI compliance programs.
Besides HIPAA and NIST, laws like the Artificial Intelligence Research, Innovation, and Accountability Act of 2023 suggest that healthcare groups must reveal how AI affects care access. These laws also require good AI governance.
The Federal Trade Commission (FTC) may hold healthcare groups responsible for unfair or deceptive AI practices under Section 5 of the FTC Act. This is especially true if patient data is mishandled.
Because these rules are changing, healthcare managers must keep updating their compliance programs to include AI risks. This means making regular lists of AI uses, finding weaknesses, and training staff on new compliance rules.
HITRUST is a main organization in healthcare information security. It offers special programs to help healthcare providers handle the cybersecurity risks from AI. Their AI Risk Management Assessment helps organizations check how well their AI security plans find and reduce risks.
HITRUST also provides an AI Security Assessment and Certification. This gives a standard for safely using AI technologies. The certification proves that AI systems meet strong security rules, including protection against attacks like prompt injections, misuse of data, and unethical AI actions.
HITRUST recommends a full approach to AI security that goes beyond just technical controls. This includes physical security, worker training, and management. Leaders at HITRUST, like Chief Innovation Officer Jeremy Huval and IT Audit Director Iddah Mwaniki, say AI security is the responsibility of the whole organization. It needs teamwork between technical teams and leadership.
HITRUST’s AI Assurance Working Group shows their ongoing work to handle AI security challenges. They create rules that promote responsible AI use, legal compliance, and risk reduction.
AI workflow automation is becoming common in medical offices and healthcare groups. AI can automate tasks like scheduling appointments, sending patient reminders, sorting phone calls, and answering patient questions. This helps staff work more efficiently and reduces their workload.
But when these automated systems handle patient data, they must be watched carefully to avoid security problems. AI that works with phone tasks or patient communications must follow data privacy rules and reduce risks from cyber attacks.
Automation that connects to electronic health records (EHR) or other IT systems should use strong encryption, access controls, and have audit trails. For managers, knowing how AI tools connect with other systems is important to stop accidental data leaks or unauthorized access.
Good monitoring of AI workflow automation can find strange behavior that may show security threats. Regular checks make sure automated decisions follow expected ethical and legal rules, which reduces the risk of bias or mistakes. IT staff and compliance officers must work together to keep AI systems aligned with healthcare laws.
Because AI rules in healthcare change fast, administrators and IT managers must keep learning and update policies as needed. The HHS AI Task Force aims to set AI regulations by 2025, so organizations should prepare now to avoid last-minute problems.
Managers should watch new laws, executive orders, and industry rules to get ready for changes. Working with legal experts in healthcare technology helps understand and apply these changes.
Healthcare staff dealing with AI systems also need training on new cybersecurity risks and ways to reduce them. Using resources from groups like HITRUST can improve understanding of AI threats and offer practical tools for managing them.
AI technologies can help healthcare improve, but they bring complex challenges in cybersecurity and rule-following. Medical practice managers, owners, and IT staff in the United States need to watch for risks from AI while using strong risk management plans. Following standards from NIST, using HITRUST’s assessments, and making AI use clear and legal—including AI workflow automations—are important steps. These steps help keep patient trust, protect sensitive information, and avoid penalties under changing healthcare rules.
AI regulations in healthcare are in early stages, with limited laws. However, executive orders and emerging legislation are shaping compliance standards for healthcare entities.
The HHS AI Task Force will oversee AI regulation according to executive order principles, aimed at managing AI-related legal risks in healthcare by 2025.
HIPAA restricts the use and disclosure of protected health information (PHI), requiring healthcare entities to ensure that AI tools comply with existing privacy standards.
The Executive Order emphasizes confidentiality, transparency, governance, non-discrimination, and addresses AI-enhanced cybersecurity threats.
Healthcare entities should inventory current AI use, conduct risk assessments, and integrate AI standards into their compliance programs to mitigate legal risks.
AI can introduce software vulnerabilities and is exploited by bad actors. Compliance programs must adapt to recognize AI as a significant cybersecurity risk.
NIST’s Risk Management Framework provides goals to help organizations manage AI tools’ risks and includes actionable recommendations for compliance.
Section 5 may hold healthcare entities liable for using AI in ways deemed unfair or deceptive, especially if it mishandles personally identifiable information.
Pending bills include requirements for transparency reports, mandatory compliance with NIST standards, and labeling of AI-generated content.
Healthcare entities should stay updated on AI guidance from executive orders and HHS and be ready to adapt their compliance plans accordingly.