In healthcare, some AI systems are labeled “high-risk” because of the roles they play and their potential effects on patient health and safety. These systems include tools for diagnosis, treatment recommendations, clinical documentation, and patient monitoring. For example, AI programs that help detect early signs of sepsis or improve breast cancer screening fall into this category. Such tools must meet strict requirements because mistakes can cause serious harm to patients.
People who manage medical practices in the U.S., such as administrators, owners, and IT managers, need to understand that using high-risk AI brings responsibilities. They must choose trustworthy AI products, integrate them safely into clinical workflows, and follow new regulations.
The United States does not yet have a comprehensive federal law like the European Union’s AI Act, but the EU’s law offers useful guidance for U.S. healthcare organizations planning to use AI. The European AI Act entered into force on August 1, 2024. It is the first broad law to regulate AI across many sectors, including healthcare. It classifies AI systems by risk level, from unacceptable risk down to minimal risk, and sets rules for each category.
Even though the U.S. has been slower to pass formal AI laws, attention to FDA guidance and rules for AI-enabled medical devices is growing. The European example highlights several points that U.S. health systems and administrators should consider.
Trustworthiness is key for AI to work well in healthcare. Recent research on trustworthy AI identifies seven technical and ethical principles that such systems should meet.
Medical practices in the U.S. can use these principles as guides when selecting safer, fairer AI systems that staff and patients will accept. They also align with regulators’ calls for AI to be transparent and fair.
A “responsible AI system” is one that can be audited and for which clear legal accountability exists. This matters most in healthcare, where patient safety and outcomes come first.
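To make “auditable” a bit more concrete, here is a minimal sketch of an audit log for AI recommendations; the function name, fields, and the sepsis-screening example values are hypothetical and are not drawn from any specific product or standard.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical audit-log helper: records enough context about each AI
# recommendation that it can be reviewed later by compliance staff.
def log_ai_decision(log_file, model_version, patient_inputs, ai_output,
                    clinician_action):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the inputs so the log does not store raw patient data.
        "input_hash": hashlib.sha256(
            json.dumps(patient_inputs, sort_keys=True).encode()
        ).hexdigest(),
        "ai_output": ai_output,
        "clinician_action": clinician_action,  # e.g. "accepted" or "overridden"
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Example: a sepsis-risk alert that the clinician reviewed and accepted.
log_ai_decision(
    "ai_audit.log",
    model_version="sepsis-screen-1.2",
    patient_inputs={"heart_rate": 118, "temp_c": 38.9, "resp_rate": 24},
    ai_output={"risk": "high", "score": 0.87},
    clinician_action="accepted",
)
```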
AI affects not only clinical decisions; it also supports front-office automation. For example, AI can answer phone calls and manage schedules. Companies such as Simbo AI offer tools of this kind to help medical offices run more smoothly.
For healthcare administrators and IT managers in the U.S., office automation with AI offers two main benefits: it improves the patient experience, and it supports compliance by reducing human error in data handling and communication. AI phone systems respond quickly to patient requests and keep the front office running smoothly.
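As a simplified illustration of the routing step inside such a phone system, the sketch below matches a transcribed caller request to a front-office queue with plain keyword rules; the queue names and keywords are invented for the example, and real products, including Simbo AI’s, rely on far more capable speech and language models.

```python
# Illustrative only: route a transcribed patient request to a front-office queue.
ROUTES = {
    "scheduling": ["appointment", "schedule", "reschedule", "cancel"],
    "refills": ["refill", "prescription", "pharmacy"],
    "billing": ["bill", "invoice", "payment", "insurance"],
}

def route_call(transcript: str) -> str:
    """Return the queue name whose keywords best match the transcript."""
    text = transcript.lower()
    scores = {
        queue: sum(word in text for word in keywords)
        for queue, keywords in ROUTES.items()
    }
    best_queue, best_score = max(scores.items(), key=lambda item: item[1])
    # Anything the simple matcher cannot place goes to a human operator.
    return best_queue if best_score > 0 else "front_desk_staff"

print(route_call("Hi, I need to reschedule my appointment for next week"))
# -> scheduling
print(route_call("Can someone explain this charge on my statement?"))
# -> front_desk_staff (no keyword match; handed to a person)
```

The fallback to a human operator mirrors how automated front-office tools generally keep staff in the loop when the system is unsure.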
Using AI in healthcare also brings challenges that affect safety and trust, such as securing high-quality data, integrating tools into clinical workflows, and navigating legal and regulatory uncertainty.
EU projects like AICare@EU help address these problems by funding work that validates AI models and shares best practices. U.S. healthcare leaders should watch these projects for future guidance.
The U.S. does not yet have a full AI law like the European AI Act, but signs point to more regulation ahead, including growing FDA activity on AI-enabled medical devices.
Medical administrators and IT managers can prepare by choosing trustworthy AI products, integrating them carefully into daily operations, and following regulatory developments as they emerge.
Healthcare crosses borders. Laws and cooperation from the EU, WHO Europe, the OECD, the G7, and the G20 shape global trends, and U.S. healthcare groups should follow these efforts. Harmonized rules may affect American providers working internationally or using AI products made abroad.
AI improves healthcare by enhancing resource allocation, reducing costs, automating administrative tasks, improving diagnostic accuracy, enabling personalized treatments, and accelerating drug development, leading to more effective, accessible, and economically sustainable care.
AI automates and streamlines medical scribing by accurately transcribing physician-patient interactions, reducing documentation time, minimizing errors, and allowing healthcare providers to focus more on patient care and clinical decision-making.
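As a rough sketch of the structuring step, the code below sorts sentences from an already-transcribed encounter into note sections using keyword rules; the section names and keywords are assumptions made for illustration, the speech-to-text step is omitted, and real ambient-scribe tools use language models rather than rules like these.

```python
# Illustrative sketch: organize a transcribed encounter into note sections.
# Real ambient-scribe products use language models, not keyword rules.
SECTION_KEYWORDS = {
    "Subjective": ["complains", "reports", "feels", "since"],
    "Objective": ["blood pressure", "temperature", "exam", "heart rate"],
    "Plan": ["prescribe", "order", "follow up", "refer"],
}

def draft_note(transcript: str) -> dict:
    """Group each sentence under the section whose keywords it mentions."""
    note = {section: [] for section in SECTION_KEYWORDS}
    note["Unsorted"] = []  # anything the simple rules cannot place
    for sentence in transcript.split("."):
        sentence = sentence.strip()
        if not sentence:
            continue
        for section, keywords in SECTION_KEYWORDS.items():
            if any(word in sentence.lower() for word in keywords):
                note[section].append(sentence)
                break
        else:
            note["Unsorted"].append(sentence)
    return note

example = ("Patient reports a cough since Monday. Temperature is 38 degrees "
           "Celsius. Order a chest X-ray and follow up in one week.")
print(draft_note(example))
```

Whatever the drafting method, the note would still be reviewed and signed by the clinician before it enters the record.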
Challenges include securing high-quality health data, legal and regulatory barriers, technical integration with clinical workflows, ensuring safety and trustworthiness, sustainable financing, overcoming organizational resistance, and managing ethical and social concerns.
The AI Act establishes requirements for high-risk AI systems in medicine, such as risk mitigation, data quality, transparency, and human oversight, aiming to ensure safe, trustworthy, and responsible AI development and deployment across the EU.
EHDS enables secure secondary use of electronic health data for research and AI algorithm training, fostering innovation while ensuring data protection, fairness, patient control, and equitable AI applications in healthcare across the EU.
The revised EU Product Liability Directive classifies software, including AI, as a product, imposing no-fault liability on manufacturers and ensuring that victims can claim compensation for harm caused by defective AI products, which improves patient safety and legal clarity.
Examples include early detection of sepsis in ICU using predictive algorithms, AI-powered breast cancer detection in mammography surpassing human accuracy, and AI optimizing patient scheduling and workflow automation.
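To show what a “predictive algorithm” can look like in its simplest form, the sketch below fits a logistic regression on synthetic vital-sign data and flags elevated risk; every feature, value, and threshold here is invented for illustration and has no relation to a validated clinical sepsis model.

```python
# Toy example only: synthetic data, not a clinically validated sepsis model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented features: heart rate, respiratory rate, temperature (C), systolic BP.
n = 500
healthy = rng.normal([75, 16, 36.8, 120], [10, 3, 0.4, 12], size=(n, 4))
at_risk = rng.normal([115, 26, 38.6, 95], [12, 4, 0.6, 12], size=(n, 4))
X = np.vstack([healthy, at_risk])
y = np.array([0] * n + [1] * n)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new patient; in a real system this would prompt a review, not a diagnosis.
new_patient = np.array([[112, 24, 38.4, 98]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Estimated risk score: {risk:.2f}")
if risk > 0.5:
    print("Flag for clinician review.")
```

In a deployed system such a score would only prompt a clinician to review the patient, in line with the human-oversight expectations described above.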
Initiatives like AICare@EU focus on overcoming barriers to AI deployment, alongside funding calls (EU4Health), the SHAIPED project for AI model validation using EHDS data, and international cooperation with WHO, OECD, G7, and G20 for policy alignment.
AI accelerates drug discovery by identifying targets, optimizes drug design and dosing, assists clinical trials through patient stratification and simulations, enhances manufacturing quality control, and streamlines regulatory submissions and safety monitoring.
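One narrow piece of that pipeline, patient stratification, can be illustrated with a basic clustering step; the sketch below groups hypothetical patients by two made-up biomarker values using k-means, standing in for the much richer methods used in actual trial design.

```python
# Illustrative stratification: cluster patients on two invented biomarkers.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic biomarker measurements for 300 hypothetical patients.
group_a = rng.normal([1.0, 4.0], 0.3, size=(150, 2))
group_b = rng.normal([3.0, 1.5], 0.3, size=(150, 2))
biomarkers = np.vstack([group_a, group_b])

# Partition patients into two strata; trial arms could then be balanced per stratum.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(biomarkers)
labels = kmeans.labels_

print("Patients per stratum:", np.bincount(labels))
print("Stratum centers:", kmeans.cluster_centers_.round(2))
```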
Trust is essential for acceptance and adoption of AI; it is fostered through transparent AI systems, clear regulations (AI Act), data protection measures (GDPR, EHDS), robust safety testing, human oversight, and effective legal frameworks protecting patients and providers.