More than half of U.S. physicians have used AI tools in their work or plan to adopt them soon, according to a 2023 American Medical Association survey. Hospitals such as Mass General Brigham run several AI programs to find aneurysms, spot strokes, and predict heart attacks, and AI tools in radiology have helped reduce radiation exposure and detect disease earlier.
AI supports diagnosis, patient monitoring, and drug development, but it also ties many systems together, and each connection is a potential weakness. AI systems process large volumes of sensitive health data and link to multiple digital platforms, which raises the chance of data breaches. From 2018 to 2023, health data breaches grew by 239% and ransomware attacks by 278%, according to the HHS Office for Civil Rights.
These attacks threaten patient privacy, disrupt care, and can lead to costly fines and reputational damage. In 2024, for example, a ransomware attack on Change Healthcare compromised data belonging to more than 190 million patients, and the company reportedly paid a $22 million ransom.
The risks AI introduces are varied. AI depends on large sets of protected health information (PHI), which makes it an attractive target for attackers probing connected networks. AI models also require ongoing retraining and updates; without maintenance, their accuracy degrades over time, a problem called “model drift.” Drift can cause diagnostic or treatment mistakes and may create legal or financial exposure.
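To make model drift concrete, the sketch below shows one way a team might track it in production: compare a model’s recent accuracy on confirmed cases against its accuracy at deployment and alert when the gap grows too large. The class, window size, and threshold here are illustrative assumptions, not a standard method.

```python
from collections import deque

# Hypothetical drift monitor: compares the model's recent accuracy on
# confirmed cases against its accuracy measured at deployment. All names
# and thresholds are illustrative assumptions, not a standard.
class DriftMonitor:
    def __init__(self, baseline_accuracy: float, window: int = 500,
                 tolerance: float = 0.05):
        self.baseline = baseline_accuracy     # accuracy at deployment
        self.tolerance = tolerance            # allowed drop before alerting
        self.outcomes = deque(maxlen=window)  # rolling record of hits/misses

    def record(self, prediction: str, confirmed_diagnosis: str) -> None:
        self.outcomes.append(prediction == confirmed_diagnosis)

    def drifted(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough recent data to judge
        recent = sum(self.outcomes) / len(self.outcomes)
        return (self.baseline - recent) > self.tolerance

monitor = DriftMonitor(baseline_accuracy=0.92)
# In production, each confirmed case would feed the monitor:
monitor.record("stroke", "stroke")
if monitor.drifted():
    print("ALERT: accuracy has dropped; schedule retraining and review")
```

In practice the retraining decision would involve clinical review, but even a simple rolling comparison like this gives early warning that a model’s behavior has changed.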
AI can also reproduce biases present in its training data, which may lead to unfair care decisions and expose providers to lawsuits under anti-discrimination laws. Software faults in AI-enabled products have already triggered recalls; Medtronic, for example, recalled its InPen insulin-dosing app after a software bug put patients at risk.
A key question for healthcare providers is whether their current malpractice or cyber insurance covers AI-related incidents. Most legacy policies do not address AI risks explicitly, so coverage may be limited or uncertain. Insurers are responding by excluding AI risks, imposing sublimits, or developing new products focused on AI liability.
Cyber insurance covers losses from events such as data breaches, ransomware, and business interruption. As AI becomes more common in healthcare, this coverage helps manage the financial risks tied to AI failures.
Coverage varies widely, however. Some insurers treat AI as part of general cyber risk; others carve it out of standard policies or offer only limited coverage for AI-related events. Healthcare organizations need to assess their exposure carefully before buying a policy.
Munich Re Group introduced “aiSelf,” the first insurance product aimed directly at AI risks in business. It covers problems such as model drift and other AI-specific failures, a sign that insurers are starting to confront AI challenges. aiSelf is designed to let companies update their AI models while remaining protected.
Medical practices and hospitals should work with brokers who understand AI risk and make sure policies explicitly cover AI-related threats. Failing to disclose AI use to an insurer can jeopardize claims, so it is important to inventory AI tools and assess their risks before making coverage decisions.
Cyberattacks on healthcare have grown sharply; in 2022, organizations faced more than 1,400 attacks per week. Ransomware and business email compromise (BEC) are the major threats, costing millions and disrupting care. Attacks also target third-party software providers, which can delay treatments and keep patients hospitalized longer.
Hospitals lose an estimated $900,000 per day during ransomware-caused outages, not counting ransom payments or fines. Health leaders have come to recognize that routine IT security alone is not enough; they also need solid insurance against cyber risk.
To reduce breach risk, healthcare organizations should deploy multi-factor authentication, network segmentation, and endpoint detection, and comply with laws such as HIPAA and GDPR. Insurers evaluate these controls during underwriting, and strong security can lower premiums and win better coverage terms.
HITRUST’s AI Security Assessment and Certification program helps healthcare organizations manage AI security challenges by providing requirements mapped to standards such as ISO, NIST, and OWASP.
Certified organizations report far fewer breaches: only 0.64% reported a breach over a two-year period. HITRUST certification demonstrates compliance and security to regulators, insurers, and patients, and it helps insurers judge AI risk more accurately, which can lower premiums and improve policy terms.
For medical practices using AI for telemedicine, diagnosis, or administrative tasks, HITRUST certification provides assurance about security and smooths the insurance process.
AI workflow automation in healthcare covers front-office phone systems, appointment scheduling, billing, and patient record management. Automation can reduce mistakes, increase efficiency, and lower costs. Companies like Simbo AI build AI phone systems that handle patient calls without added staff.
But workflow automation adds cyber risk of its own. Automated systems often access patient data and connect to health information systems; if they are compromised, large amounts of sensitive data can be exposed, leading to breaches and downstream harm.
Healthcare leaders must verify the security controls on AI systems, such as data encryption, access limits, and monitoring of system activity; a minimal sketch of the latter two appears below. Separation of duties and multi-factor authentication also matter when deploying AI automation.
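As a hedged illustration of access limits combined with activity monitoring, the sketch below wraps a hypothetical AI service call in a role check and an audit-log entry. The role names, log fields, and the `query_ai_service` function are assumptions made for this example, not part of any specific product.

```python
import logging
from datetime import datetime, timezone

# Illustrative access-control and audit-logging wrapper around a
# hypothetical AI service. Roles, log fields, and the service call
# are assumptions made for this sketch.
logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("phi_audit")

ALLOWED_ROLES = {"physician", "nurse", "billing"}

def query_ai_service(patient_id: str, question: str) -> str:
    # Placeholder for a real AI system integration.
    return f"model response for {patient_id}"

def guarded_query(user: str, role: str, patient_id: str, question: str) -> str:
    if role not in ALLOWED_ROLES:
        # Denied attempts are logged too, so they can be reviewed later.
        audit_log.warning("DENIED %s role=%s patient=%s", user, role, patient_id)
        raise PermissionError(f"role '{role}' may not access patient data")
    # Record who touched which record and when, before the call is made.
    audit_log.info("ACCESS %s role=%s patient=%s at=%s",
                   user, role, patient_id,
                   datetime.now(timezone.utc).isoformat())
    return query_ai_service(patient_id, question)

print(guarded_query("dr_lee", "physician", "MRN-0042", "summarize last visit"))
```

The point of the design is that every access attempt, allowed or denied, leaves a timestamped record that an auditor or insurer can later inspect.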
For insurance purposes, AI workflow tools should be included in cyber risk assessments. Insurers ask about these systems during policy reviews, and failing to disclose or manage AI risks can make claims harder to recover after a breach.
Failures in AI automation also create operational risk. If an AI phone system discloses patient information by mistake, for example, it could violate HIPAA and trigger fines. Traditional insurance may not clearly cover such errors, which points to the need for AI-specific insurance products.
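A common mitigation for exactly this failure mode is to redact obvious identifiers before an automated system speaks, logs, or transmits text. The filter below is a minimal sketch; the patterns cover only a few assumed identifier formats, and a production system would need far broader coverage and compliance review.

```python
import re

# Minimal, illustrative redaction filter for text produced by an automated
# phone or chat system. The patterns cover only a few obvious identifier
# formats and are assumptions for this sketch, not a complete PHI list.
PHI_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # SSN-like
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),  # phone-like
    (re.compile(r"\bMRN-?\d+\b", re.IGNORECASE), "[MRN]"),    # record number
]

def redact(text: str) -> str:
    for pattern, placeholder in PHI_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

message = "Patient MRN-0042 (SSN 123-45-6789) can be reached at 555-010-1234."
print(redact(message))
# -> "Patient [MRN] (SSN [SSN]) can be reached at [PHONE]."
```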
Taken together, these steps help healthcare organizations strengthen security and obtain better coverage for AI-related data breach risks.
AI use is growing across U.S. healthcare and brings both benefits and risks. Medical practices and health systems that adopt AI must understand their data breach exposure and what their insurance actually covers. Rising cyberattacks and data breaches can cause serious financial and operational harm.
Cyber insurance provides financial protection, but it requires careful selection and risk assessment. Automation tools, like those from Simbo AI, streamline workflows but add to AI cyber risk. Medical leaders and IT staff need to manage these risks and work with brokers to secure AI-inclusive coverage.
Programs like the HITRUST AI Security Certification help improve security and meet insurer and regulator expectations. Healthcare providers who address AI risks deliberately, with sound assessments, strong security, and appropriate insurance, are better positioned to protect themselves and their patients in today’s digital environment.
Key risks include data breaches, medical errors, and the potential for discrimination. AI tools increase interconnectivity, complicate data deidentification, and may replicate existing errors or introduce new ones, affecting patient safety. AI can also perpetuate biases in treatment recommendations, raising compliance concerns under anti-discrimination laws.
AI increases the risk of data breaches through heightened interconnectivity and the difficulty of maintaining data deidentification, complicating compliance with regulations like HIPAA and the FTC’s Health Breach Notification Rule.
AI can replicate human errors or introduce new types of error, such as surgical robot malfunctions or disease misdiagnoses, which may result in delayed care or unnecessary follow-ups.
Providers are primarily concerned about whether their malpractice insurance will cover AI tools, as existing policies may not adequately address the risks posed by AI technologies.
Cyber insurance may cover costs related to data breaches involving AI systems. However, coverage can vary significantly, especially regarding business interruption and specific exclusions related to AI.
Traditional property policies may cover damages caused by AI tools, such as surgical robots. However, insurers could argue that AI-specific accidents fall under exclusions that emerged after the “silent cyber” movement.
Causation will be crucial in liability claims; it may be unclear whether errors arise from human oversight or flaws in the AI tool itself, complicating legal accountability.
Insurers are starting to offer AI-specific coverage, such as Munich Re’s ‘aiSelf’ insurance for model drift, which recognizes that traditional policies may not cover new AI-related risks.
Providers should conduct comprehensive surveys of AI risks and ensure informed responses to insurance applications, as underwriters increasingly inquire about AI usage.
Insurers may argue that AI-related risks are not included in coverage, impose sublimits, or target existing policy language to limit liability for losses associated with AI.