General Purpose AI (GPAI) models differ from narrow, special-purpose AI systems because they can be applied to many different tasks. For example, a single GPAI model might help with patient calls, draft medical documents, or assist with insurance claims. Using one model across so many tasks brings a range of risks, including errors, bias, security gaps, and privacy problems.
Systemic risks are problems that can ripple across the whole healthcare system when something goes wrong with an AI model's design, data, or use. Common risks with GPAI models include poor or biased training data, limited transparency into how decisions are made, weak human oversight, cybersecurity exposure, and regulatory uncertainty, each of which is discussed below.
Because of these risks, U.S. healthcare organizations must pair strong risk management with regulatory compliance as they adopt new AI technology.
The European Union's EU AI Act sets clear rules for AI by grouping systems into risk tiers and spelling out how companies must comply. The U.S. approach is more fragmented and less centralized.
Health organizations in the U.S. must follow federal laws like HIPAA, which protects patient records, and FDA rules about medical devices, including some AI software. There is also guidance from the Office of the National Coordinator for Health Information Technology (ONC).
Even though the U.S. does not have a law like the EU AI Act, agencies watch AI use closely. Their attention centers on patient privacy, the safety and reliability of AI-enabled software, and transparency about when and how automated tools are used.
These priorities increasingly resemble the EU AI Act, especially since AI companies operate internationally and want to serve the U.S. market.
A major challenge is ensuring that the data used to train GPAI models is accurate, complete, and representative of the patients a healthcare provider actually serves. Poor-quality or biased data can produce wrong outputs, which may harm patient care or disrupt scheduling.
Data from underrepresented groups is often missing or incomplete, so the AI may not work equally well for everyone. The problem is magnified in U.S. health systems that serve highly diverse patient populations.
Strong data governance is essential: checking data for errors, tracking where it comes from, correcting mistakes, and refreshing datasets as new information arrives.
GPAI models rely on complex statistical methods that can make their decisions hard for healthcare workers to interpret. This "black box" quality is risky when outputs affect patient health.
AI systems that clearly explain their reasoning help doctors and patients trust them, but clear explanations are difficult to obtain from GPAI models precisely because they perform many different tasks across departments.
Healthcare leaders should insist on AI tools that come with clear documentation of how decisions are made, especially for high-stakes tasks such as patient triage or eligibility determinations.
AI cannot replace humans in healthcare. High-impact uses, especially those that influence clinical decisions or patient prioritization, require humans to review, approve, or override AI recommendations.
Setting clear rules for human oversight is difficult. Practice owners and IT managers need to work together so that medical staff remain in charge, with AI assisting rather than taking control.
Wider AI adoption expands the opportunities for cyberattacks. AI systems that handle protected patient data are attractive targets for hackers.
Healthcare providers must protect AI systems against unauthorized access, malware, and data theft; breaches carry serious legal and ethical consequences. Organizations need strong security policies and continuous monitoring of their AI systems.
The U.S. does not yet have a nationwide AI law like the EU's, but individual states and federal agencies are issuing their own rules, which makes compliance harder to track.
Healthcare leaders need to prepare for new laws that require AI to be safe, transparent, and properly reported on. They must keep thorough records of AI use, identified risks, incidents, and human oversight checks.
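As a concrete illustration, here is a minimal sketch of the kind of record an organization might keep for each AI-assisted decision. The field names and the JSON-lines log file are illustrative assumptions, not requirements from any specific regulation.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class AIDecisionRecord:
    """One auditable entry describing a single AI-assisted decision."""
    model_name: str          # vendor model identifier
    model_version: str       # exact version deployed at decision time
    task: str                # e.g., "triage" or "billing_code_suggestion"
    input_summary: str       # de-identified summary, never raw PHI
    ai_output: str           # what the model recommended
    human_reviewer: str      # staff member who reviewed the output
    human_action: str        # "accepted", "modified", or "rejected"
    incident_flag: bool = False   # set when the output caused or nearly caused harm
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def append_to_audit_log(record: AIDecisionRecord, path: str = "ai_audit_log.jsonl") -> None:
    """Append the record as one JSON line; a real deployment would use a secured database."""
    with open(path, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(asdict(record)) + "\n")
```

Keeping entries in an append-only store makes it easier to answer later questions about what the AI recommended, who reviewed it, and whether an incident followed.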
Healthcare organizations should build a risk management program that covers every step of AI adoption. This includes assessing risks before deployment, governing training data, documenting how models work, defining points for human review, securing systems against attack, and monitoring performance and incidents after go-live.
This structured approach mirrors the EU AI Act's framework and can help U.S. organizations adopt AI responsibly.
Good data governance means setting rules for checking and cleaning the data used to train and test AI: auditing it for accuracy, removing duplicates and outdated entries, and monitoring whether all patient groups are adequately represented.
Working with data experts, or using automated tools to detect bias, can improve data quality. Healthcare organizations should involve both clinicians and data scientists in dataset reviews.
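To make this concrete, here is a minimal sketch of a dataset audit in Python with pandas, assuming a de-identified table with a demographic column. The column name and the 5% representation threshold are illustrative choices, not standards.

```python
import pandas as pd


def audit_training_data(df: pd.DataFrame, group_column: str = "ethnicity") -> dict:
    """Run basic quality checks on a de-identified training dataset."""
    report = {
        # Exact duplicate rows that could overweight some patients
        "duplicate_rows": int(df.duplicated().sum()),
        # Share of missing values per column
        "missing_rate": df.isna().mean().round(3).to_dict(),
        # How well each demographic group is represented
        "group_representation": (
            df[group_column].value_counts(normalize=True).round(3).to_dict()
        ),
    }
    # Flag groups that make up less than 5% of the data for manual review
    report["underrepresented_groups"] = [
        group for group, share in report["group_representation"].items() if share < 0.05
    ]
    return report
```

A report like this gives clinicians and data scientists a shared starting point for deciding which gaps matter clinically and which can be tolerated.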
Practice owners and IT staff should require AI vendors to provide complete documentation explaining how the AI works, how it was trained, and where its limits lie. This builds trust and helps meet emerging regulations.
The documentation should also guide staff on how to interpret AI outputs carefully and when to apply human judgment.
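One practical way to capture this documentation is a simple "model card" structure. The sketch below is a hypothetical example; the product name, fields, and values are assumptions for illustration, not any vendor's actual documentation.

```python
from dataclasses import dataclass


@dataclass
class ModelCard:
    """Minimal documentation a practice might request from an AI vendor."""
    model_name: str
    intended_use: str                 # tasks the model is approved for
    training_data_description: str    # sources, time range, populations covered
    known_limitations: list[str]      # situations where output needs extra scrutiny
    human_oversight_required: bool    # must a clinician review outputs?
    contact_for_incidents: str        # who to notify when something goes wrong


# Hypothetical example for a front-office assistant
front_desk_assistant_card = ModelCard(
    model_name="front-desk-assistant",
    intended_use="Appointment scheduling and routine patient questions",
    training_data_description="De-identified call transcripts, 2021-2024, U.S. English",
    known_limitations=[
        "Not validated for clinical advice",
        "Lower accuracy for non-English callers",
    ],
    human_oversight_required=True,
    contact_for_incidents="it-security@example-clinic.org",
)
```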
Healthcare leaders should design AI workflows that require human review at key decision points. Medical staff must be able to question, correct, or reject AI recommendations whenever necessary.
Training staff on what AI can and cannot do will help them work effectively alongside these tools.
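A minimal sketch of such a review gate is shown below, assuming a triage workflow. The priority labels and queue mechanics are placeholders for whatever the practice's clinical process actually requires; the point is that nothing the AI suggests takes effect without a clinician's decision.

```python
from dataclasses import dataclass


@dataclass
class TriageSuggestion:
    patient_id: str
    ai_priority: str        # e.g., "urgent" or "routine"
    ai_confidence: float    # 0.0 - 1.0, as reported by the model


# Nothing leaves this queue without a human decision
pending_review: list[TriageSuggestion] = []


def submit_ai_suggestion(suggestion: TriageSuggestion) -> None:
    """AI output is only a draft; it waits here for clinical review."""
    pending_review.append(suggestion)


def clinician_decides(
    suggestion: TriageSuggestion,
    approved: bool,
    override_priority: str | None = None,
) -> str:
    """The clinician accepts, overrides, or rejects the AI's draft priority."""
    if not approved:
        return "manual_triage_required"
    return override_priority or suggestion.ai_priority
```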
Investing in AI-specific cybersecurity is important. This can include strict access controls, encryption of patient data in transit and at rest, continuous monitoring for unusual activity, regular security testing of AI components, and incident response plans that cover AI systems.
With cyberattacks on healthcare rising, acting early protects both patient data and the organization's reputation.
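As a rough illustration, the sketch below shows two of these controls, encryption of stored AI call transcripts and a simple role check, using Python's cryptography library. Key handling and roles are simplified assumptions; a production system would rely on a secrets manager and the organization's identity provider.

```python
from cryptography.fernet import Fernet   # pip install cryptography

# In production the key would live in a managed secrets store, never in code.
encryption_key = Fernet.generate_key()
cipher = Fernet(encryption_key)

# Illustrative role list; real systems map this to the identity provider
ALLOWED_ROLES = {"clinician", "billing", "it_admin"}


def store_transcript(transcript: str) -> bytes:
    """Encrypt an AI call transcript before it is written to disk or a database."""
    return cipher.encrypt(transcript.encode("utf-8"))


def read_transcript(encrypted: bytes, user_role: str) -> str:
    """Decrypt a transcript only for users whose role permits access."""
    if user_role not in ALLOWED_ROLES:
        raise PermissionError("Role is not authorized to view AI transcripts")
    return cipher.decrypt(encrypted).decode("utf-8")
```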
Though U.S. rules are still evolving, providers should follow best practices inspired by laws like the EU AI Act. This means maintaining technical documentation, conducting regular risk assessments, keeping humans in the loop for high-stakes decisions, logging AI activity, and reporting serious incidents.
Preparing now reduces legal risk and improves how the organization manages AI overall.
Using GPAI to automate routine front-office tasks can make healthcare more efficient while keeping risks manageable.
Several vendors offer AI systems that answer phone calls, schedule appointments, process prescription refill requests, and handle routine questions, which lightens the load on office staff and reduces wait times.
This kind of AI must follow rules such as HIPAA privacy and security requirements, clear disclosure that callers are interacting with an automated system, and secure handling of any patient information collected during the call.
Success depends on strong risk controls and staff training, so that AI supports people rather than adding to their workload.
GPAI can help organize patient records, summarize visits, and suggest billing codes. These workflows need clearly assigned roles in which medical billers or administrators review AI outputs before they are finalized, so errors are caught early.
AI automation must also integrate cleanly with electronic health record (EHR) and scheduling software. IT managers should choose tools that support secure data exchange through standards such as HL7 or FHIR.
Standards-based integration helps prevent faulty data transfers, a common source of serious errors, and supports lawful data handling in U.S. healthcare.
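For example, a FHIR integration typically reads resources over a plain REST interface. The sketch below assumes a hypothetical FHIR R4 endpoint and an OAuth2 access token issued by the EHR; the base URL is a placeholder, not a real server.

```python
import requests   # pip install requests

# Hypothetical FHIR endpoint; real deployments use the EHR vendor's secured base URL.
FHIR_BASE_URL = "https://fhir.example-clinic.org/r4"


def fetch_patient(patient_id: str, access_token: str) -> dict:
    """Read one Patient resource over the standard FHIR REST interface."""
    response = requests.get(
        f"{FHIR_BASE_URL}/Patient/{patient_id}",
        headers={
            "Authorization": f"Bearer {access_token}",   # OAuth2 token from the EHR
            "Accept": "application/fhir+json",
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()   # structured FHIR JSON rather than a free-text export
```

Exchanging structured resources this way keeps patient identifiers, appointment data, and coding fields intact instead of relying on error-prone copy-and-paste between systems.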
Managing the systemic risks of General Purpose AI in U.S. healthcare requires action on several fronts at once. Healthcare leaders, practice owners, and IT managers must carefully address data quality, transparency, human oversight, cybersecurity, and evolving regulation.
By building comprehensive risk management programs and introducing AI into daily operations deliberately, providers can capture its benefits while keeping patients safe and systems reliable. Real-world deployments show that AI front-office automation can help healthcare organizations when it meets strict compliance requirements and remains easy for staff to use.
Careful planning, close monitoring, and cross-functional collaboration will be essential as U.S. healthcare providers navigate the challenges and opportunities of GPAI under evolving regulation.
The EU AI Act classifies AI into unacceptable risk (prohibited), high-risk (regulated), limited risk (lighter transparency obligations), and minimal risk (unregulated). Unacceptable risks include manipulative or social scoring AI, while high-risk AI systems require strict compliance measures.
Providers of high-risk AI systems must implement risk management, ensure data governance with accurate datasets, maintain technical documentation, enable record-keeping for risk detection, provide clear user instructions, allow human oversight, ensure accuracy, robustness, and cybersecurity, and establish a quality management system for compliance.
GPAI providers must prepare technical documentation covering training, testing, and evaluation; provide usage instructions to downstream users; comply with copyright laws; and publish detailed training data summaries. Systemic risk GPAI models face further requirements including adversarial testing, incident reporting, and cybersecurity protection.
Prohibited AI includes systems deploying subliminal manipulation, exploiting vulnerabilities, biometric categorisation of sensitive attributes, social scoring, criminal risk assessment solely based on profiling, untargeted facial recognition scraping, and emotion inference in workplaces except for medical safety reasons.
AI systems used for health-related emergency call evaluation, triage prioritization, risk assessments in insurance, and profiling for health or economic status are high-risk use cases under Annex III, requiring strict compliance due to their profound impact on individual rights and outcomes.
For limited-risk AI, developers and deployers must ensure end-users know they are interacting with AI, as with chatbots. High-risk AI requires detailed technical documentation, usage instructions, and design features that enable human oversight, to maintain transparency and accountability.
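In practice, the transparency duty for limited-risk tools can be as simple as an explicit disclosure at the start of every interaction. The sketch below is an illustrative greeting for a patient-facing chatbot, not a prescribed wording.

```python
AI_DISCLOSURE = (
    "You are chatting with an automated assistant, not a staff member. "
    "Type 'agent' at any time to reach a person."
)


def start_chat_session(patient_name: str) -> list[str]:
    """Open every conversation with a clear statement that the user is talking to AI."""
    return [
        f"Hello {patient_name}.",
        AI_DISCLOSURE,
        "How can I help you today?",
    ]
```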
High-risk AI systems must be designed to allow deployers to implement effective human oversight, ensuring decisions influenced or made by AI, especially in triage, are reviewed by healthcare professionals to mitigate errors and uphold patient safety.
Systemic risk is presumed when a model is trained with compute above 10²⁵ FLOPs or shows high-impact capabilities. Managing this risk involves adversarial testing, risk assessments, incident tracking, cybersecurity safeguards, and regular reporting to EU authorities to prevent widespread harm.
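To see roughly how the 10²⁵ FLOP indicator works, a commonly used rule of thumb estimates training compute as about 6 × parameters × training tokens. The figures below are illustrative assumptions, not numbers for any real model.

```python
# Rough rule of thumb: training FLOPs ≈ 6 × parameters × training tokens.
# All numbers below are illustrative, not figures for any specific model.
parameters = 100e9          # a 100-billion-parameter model
training_tokens = 15e12     # trained on 15 trillion tokens

estimated_flops = 6 * parameters * training_tokens
threshold = 1e25            # the EU AI Act's systemic-risk indicator

print(f"Estimated training compute: {estimated_flops:.2e} FLOPs")
print(f"Above systemic-risk threshold: {estimated_flops > threshold}")
# 6 * 1e11 * 1.5e13 = 9e24 FLOPs, just below the 1e25 threshold
```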
The AI Office within the EU Commission monitors compliance, conducts evaluations, and investigates systemic risks. Providers must maintain documentation and respond to complaints. Non-compliance with prohibitions can lead to enforcement actions including banning or restricting AI applications.
Emotion recognition is banned except for medical or safety reasons to protect individual privacy and prevent misuse or discrimination. In healthcare triage, emotion detection is permissible if it supports medical diagnosis or safety, ensuring ethical use aligned with patient well-being.