Challenges and Solutions for Managing Systemic Risks in General Purpose AI Models Used Within Healthcare Environments Under Strict Regulatory Frameworks

General purpose AI (GPAI) models differ from narrow, special-purpose AI systems in that a single model can be used for many tasks. The same GPAI model might answer patient calls, draft medical documents, or assist with insurance claims. That breadth also multiplies the risks: errors, bias, security gaps, and privacy exposure can surface in any of those tasks.

Systemic risks are problems that can spread across an entire healthcare organization or system when something goes wrong with an AI model’s design, data, or use. Common risks with GPAI models include:

  • Data Bias and Errors: If the data used to train the AI is unrepresentative or contains mistakes, the model can produce inaccurate or unfair outputs that affect clinical decisions and office tasks alike.
  • Cybersecurity Vulnerabilities: Because the AI handles sensitive patient information, a breach could expose protected health information or cause data loss.
  • Lack of Transparency: GPAI can behave like a “black box,” making it hard to see how outputs are produced, to diagnose problems, or to satisfy rules that require AI to be explainable.
  • Inadequate Human Oversight: When AI runs too autonomously without enough human checks, mistakes can occur and go unnoticed.
  • Regulatory Non-compliance: Failing to follow applicable rules exposes healthcare providers to fines and legal liability.

Because of these risks, U.S. healthcare organizations must pair the adoption of new AI technology with strong risk management and regulatory compliance.

The Regulatory Framework Affecting AI in Healthcare in the U.S.

The European Union’s AI Act sets clear rules for AI by classifying systems according to risk and spelling out how companies must comply. The U.S. system is more fragmented and less centralized.

U.S. healthcare organizations must follow federal laws such as HIPAA, which protects patient health information, and FDA regulations covering medical devices, including some AI software. Guidance also comes from the Office of the National Coordinator for Health Information Technology (ONC).

Even though the U.S. has no single law comparable to the EU AI Act, agencies monitor AI use closely. They focus on:

  • Governing data so it remains accurate and representative.
  • Assigning clear responsibilities for AI use.
  • Ensuring humans review AI decisions rather than relying on AI alone.
  • Protecting patient health information with strong security.

These expectations increasingly resemble the EU AI Act’s requirements, in part because AI vendors operate internationally and want to serve the U.S. market.

Challenges in Managing Systemic Risks in GPAI in Healthcare

1. Data Quality and Governance

A major challenge is ensuring that the data used to train GPAI models is accurate, complete, and representative of the patients the healthcare provider serves. Poor or biased data produces unreliable outputs that can harm patient care or disrupt scheduling.

Data on underrepresented groups is often missing or incomplete, so the AI may not perform well for everyone. The problem is amplified in U.S. healthcare systems that serve highly diverse patient populations.

Sound data governance is essential: checking data for errors, tracking where it comes from, correcting mistakes, and refreshing datasets as new information arrives. A basic validation pass such as the sketch below can catch many of these issues before they reach a model.
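The snippet below is a minimal sketch of such a validation step, assuming a tabular administrative dataset with hypothetical column names (patient_id, visit_date, source_system); it checks for missing fields, duplicate records, and missing provenance.

```python
import pandas as pd

# Hypothetical column names for an administrative dataset;
# adjust them to the fields your EHR export actually contains.
REQUIRED_COLUMNS = ["patient_id", "date_of_birth", "visit_date", "source_system"]

def validate_training_data(df: pd.DataFrame) -> dict:
    """Run basic quality checks before data is used to train or tune a model."""
    issues = {}

    # 1. Required fields must exist and be populated.
    present = [c for c in REQUIRED_COLUMNS if c in df.columns]
    issues["missing_columns"] = [c for c in REQUIRED_COLUMNS if c not in df.columns]
    issues["rows_with_nulls"] = int(df[present].isnull().any(axis=1).sum())

    # 2. Duplicate records distort both training and evaluation.
    if {"patient_id", "visit_date"}.issubset(df.columns):
        issues["duplicate_rows"] = int(df.duplicated(subset=["patient_id", "visit_date"]).sum())

    # 3. Provenance: every row should record which source system it came from.
    if "source_system" in df.columns:
        issues["rows_missing_provenance"] = int(df["source_system"].isna().sum())

    return issues

# Example usage (hypothetical export file):
# df = pd.read_csv("visits_export.csv")
# print(validate_training_data(df))
```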

2. Transparency and Explainability

GPAI models rely on complex mathematics that can make their outputs difficult for healthcare workers to interpret. This “black box” quality is risky when outputs influence patient health.

AI systems that clearly explain how they work help doctors and patients trust them. But clear explanations are harder to obtain from GPAI precisely because the same model performs many tasks across different departments.

Healthcare leaders should insist on AI tools that come with clear documentation of how outputs are generated, especially for high-stakes tasks such as patient triage or eligibility determinations.

3. Ensuring Human Oversight

AI cannot replace human judgment in healthcare. High-impact uses, especially those that affect doctors’ decisions or patient prioritization, require a human to review, approve, or override AI recommendations.

Healthcare organizations often struggle to define clear oversight procedures. Practice owners and IT managers need to work together so that medical staff remain in charge, with AI assisting but never taking full control.

4. Cybersecurity Threats

Wider AI adoption expands the attack surface. AI that handles private patient data is an attractive target for attackers.

Healthcare providers must protect AI systems from unauthorized access, malware, and data theft. Breaches carry serious legal and ethical consequences, so organizations need strong security policies and must monitor their AI systems closely.

5. Compliance with Emerging AI Laws and Standards

The U.S. does not yet have a nationwide AI law like the EU’s, but states and agencies are issuing their own rules, creating a patchwork that is harder to follow.

Healthcare leaders need to prepare for new laws that will require AI to be safe, transparent, and properly reported. That means keeping thorough records of AI use, risks, incidents, and human oversight.

Solutions for Managing Systemic Risks in GPAI in Healthcare

1. Establish a Comprehensive Risk Management Framework

Healthcare organizations should build a risk management framework covering every step of the AI lifecycle. This includes:

  • Assessing AI risks before adoption, with emphasis on patient safety and privacy.
  • Continuously monitoring AI outputs to detect errors or bias.
  • Maintaining clear technical documentation that satisfies applicable regulations.
  • Assigning explicit AI oversight roles within the healthcare team.

This structured approach mirrors principles in the EU AI Act and can help U.S. organizations adopt AI responsibly. The monitoring step in particular lends itself to lightweight automation, as the sketch below illustrates.
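A minimal sketch, assuming human reviewers record whether they had to correct each AI output; the window size and alert threshold are illustrative values, not prescribed figures.

```python
from collections import deque

class OutputMonitor:
    """Track how often reviewers correct AI outputs and flag possible drift."""

    def __init__(self, window: int = 200, alert_rate: float = 0.10):
        self.recent = deque(maxlen=window)   # 1 = corrected by a human, 0 = accepted as-is
        self.alert_rate = alert_rate

    def record(self, was_corrected: bool) -> None:
        self.recent.append(1 if was_corrected else 0)

    def correction_rate(self) -> float:
        return sum(self.recent) / len(self.recent) if self.recent else 0.0

    def needs_review(self) -> bool:
        """True once a full window is collected and corrections exceed the threshold."""
        return len(self.recent) == self.recent.maxlen and self.correction_rate() > self.alert_rate

# Example usage:
# monitor = OutputMonitor()
# monitor.record(was_corrected=False)   # call after each human-reviewed output
# if monitor.needs_review():
#     notify_oversight_team()           # hypothetical escalation hook
```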

2. Prioritize Data Quality Controls

Good data quality depends on formal rules for checking and cleaning the data used to train and test AI: auditing records for accuracy, removing duplicate or outdated entries, and monitoring whether all patient groups are adequately represented.

Working with data specialists, or using tools that surface bias, can further improve the data. Healthcare organizations should involve both clinicians and data scientists in dataset reviews. A representation check like the one sketched below is a reasonable starting point.
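The following sketch assumes a tabular dataset with a hypothetical patient_group column; it simply reports each group’s share of the records and flags groups below an illustrative 5% threshold.

```python
import pandas as pd

def representation_report(df: pd.DataFrame,
                          group_column: str = "patient_group",
                          min_share: float = 0.05) -> pd.DataFrame:
    """Report each group's share of records and flag under-represented groups."""
    shares = df[group_column].value_counts(normalize=True, dropna=False)
    report = shares.rename("share").to_frame()
    report["under_represented"] = report["share"] < min_share
    return report

# Example usage (hypothetical extract and grouping column):
# df = pd.read_csv("training_sample.csv")
# print(representation_report(df, group_column="patient_group"))
```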

3. Demand Transparency and Documentation from AI Vendors

Practice owners and IT staff should require AI vendors to supply complete documentation explaining how the model works, how it was trained, and where its limits lie. This builds trust and helps meet emerging regulatory expectations.

The documentation should also guide staff on how to use AI results with appropriate caution and when to apply human judgment instead.

4. Design AI Workflows with Human Oversight Built-In

Healthcare leaders should set up AI workflows that require human review at key points. Medical staff must be able to question, correct, or reject AI recommendations whenever needed.

Training staff on what AI can and cannot do helps them work effectively alongside these tools. One simple pattern is a review gate, sketched below, in which no AI recommendation takes effect until a named clinician has approved it.
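A minimal review-gate sketch; the field names and status values are illustrative, and a real system would persist these records and enforce access controls.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIRecommendation:
    patient_ref: str
    suggestion: str                   # e.g. a triage priority produced by the model
    model_version: str
    status: str = "pending"           # pending | approved | modified | rejected
    reviewer: Optional[str] = None
    final_decision: Optional[str] = None
    reviewed_at: Optional[datetime] = None

    def review(self, reviewer: str, decision: str,
               final_value: Optional[str] = None) -> None:
        """Record the human decision; the AI suggestion never takes effect on its own."""
        if decision not in {"approved", "modified", "rejected"}:
            raise ValueError(f"unknown decision: {decision}")
        self.status = decision
        self.reviewer = reviewer
        if decision == "approved":
            self.final_decision = self.suggestion
        elif decision == "modified":
            self.final_decision = final_value
        self.reviewed_at = datetime.now(timezone.utc)

# Example usage:
# rec = AIRecommendation("patient-123", suggestion="urgent", model_version="gpai-demo-1")
# rec.review(reviewer="Dr. Lee", decision="modified", final_value="semi-urgent")
```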

5. Strengthen Cybersecurity Posture

Investing in AI-specific cybersecurity is essential. Measures can include:

  • Regular vulnerability assessments of AI platforms
  • Encryption of sensitive data handled by AI
  • Access restricted to authorized users only
  • Incident response plans that cover AI-related events

With cyberattacks on healthcare rising, acting early protects both patient data and the organization’s reputation. Encrypting sensitive fields before they are stored or passed to AI pipelines, as in the sketch below, is one concrete place to start.
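A minimal field-level encryption sketch using the cryptography package’s Fernet recipe; in practice the key would live in a key-management service, not in application code.

```python
from cryptography.fernet import Fernet

def encrypt_field(plaintext: str, key: bytes) -> bytes:
    """Encrypt a single sensitive field (e.g., a patient note) before storage or transfer."""
    return Fernet(key).encrypt(plaintext.encode("utf-8"))

def decrypt_field(token: bytes, key: bytes) -> str:
    """Decrypt a previously encrypted field for an authorized caller."""
    return Fernet(key).decrypt(token).decode("utf-8")

# Example usage:
# key = Fernet.generate_key()          # store securely; never hard-code in source
# token = encrypt_field("Patient reports chest pain", key)
# print(decrypt_field(token, key))
```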

6. Prepare for Regulatory Compliance

Although U.S. rules are still evolving, providers should follow best practices inspired by frameworks such as the EU AI Act. This means:

  • Logging AI-assisted decisions
  • Performing regular risk assessments and compliance reviews
  • Participating in industry codes of practice or AI safety certifications

Preparing now will reduce legal risk and strengthen how the organization handles AI overall. An append-only decision log, sketched below, is one inexpensive way to build the record-keeping habit.
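A minimal logging sketch; the file path and record fields are illustrative, and the input is hashed so the log can be audited without storing raw patient text in it.

```python
import hashlib
import json
from datetime import datetime, timezone
from typing import Optional

LOG_PATH = "ai_decision_log.jsonl"    # hypothetical location; secure and access-control it

def log_ai_decision(model_version: str, task: str, input_text: str,
                    output_text: str, reviewer: Optional[str]) -> None:
    """Append one JSON record per AI-assisted decision for later audits."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "task": task,
        "input_sha256": hashlib.sha256(input_text.encode("utf-8")).hexdigest(),
        "output": output_text,
        "human_reviewer": reviewer,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example usage:
# log_ai_decision("gpai-demo-1", "visit_summary", raw_note, draft_summary, "J. Alvarez")
```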

AI Integration in Workflow Automation for Healthcare Administration

Using GPAI to automate routine front-office tasks can make healthcare more efficient while keeping risks manageable.

Phone Call Automation and Patient Interactions

Several vendors offer AI systems that answer phone calls, schedule appointments, handle prescription refill requests, and answer routine questions. This lowers the workload on office staff and reduces wait times.

This kind of AI must meet requirements such as:

  • Protecting data privacy under rules such as HIPAA
  • Maintaining accuracy to avoid miscommunication
  • Clearly disclosing when a call is handled by AI rather than a human
  • Escalating difficult or urgent cases to a human

Success depends on strong risk controls and staff training, so that the AI reduces work rather than adding to it. A simple escalation router, sketched below, shows how urgent or ambiguous calls can be handed to a human instead of being handled automatically.
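A minimal routing sketch, assuming the telephony system provides a transcript plus a predicted intent and confidence score; the keyword list and threshold are illustrative and would need clinical review before real use.

```python
# Illustrative trigger phrases and confidence threshold; not clinically validated.
URGENT_KEYWORDS = {"chest pain", "bleeding", "overdose", "can't breathe"}
CONFIDENCE_THRESHOLD = 0.80

def route_call(transcript: str, intent: str, confidence: float) -> str:
    """Return 'human' when the call should escalate, otherwise the automated intent."""
    text = transcript.lower()
    if any(keyword in text for keyword in URGENT_KEYWORDS):
        return "human"                  # urgent language always escalates
    if confidence < CONFIDENCE_THRESHOLD:
        return "human"                  # the model is unsure; do not automate
    return intent                       # e.g. "schedule_appointment", "refill_request"

# Example usage:
# action = route_call("I need to move my appointment to Friday",
#                     intent="schedule_appointment", confidence=0.93)
```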

Workflow Streamlining Through AI-Assisted Documentation

GPAI can help organize patient records, summarize visits, and suggest billing codes. AI-assisted workflows must define clear roles so that medical billers or administrators review AI outputs before they are finalized, catching errors early.

Integration with Existing Healthcare IT Systems

AI automation must integrate cleanly with electronic health record (EHR) systems and scheduling software. IT managers should choose AI products that support secure data exchange through standards such as HL7 or FHIR.

This reduces the risk of incorrect data transfers, a common source of serious errors, and supports lawful data handling in U.S. healthcare. As a rough illustration of what a standards-based exchange looks like, the sketch below reads a Patient resource from a FHIR server.
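A minimal FHIR read sketch; the base URL is a placeholder, and a real integration would obtain the access token through the EHR vendor’s OAuth 2.0 / SMART on FHIR flow.

```python
import requests

FHIR_BASE_URL = "https://ehr.example.org/fhir"     # hypothetical endpoint

def get_patient(patient_id: str, access_token: str) -> dict:
    """Fetch a FHIR Patient resource as JSON."""
    response = requests.get(
        f"{FHIR_BASE_URL}/Patient/{patient_id}",
        headers={
            "Authorization": f"Bearer {access_token}",
            "Accept": "application/fhir+json",
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

# Example usage (token elided):
# patient = get_patient("12345", access_token="...")
# print(patient.get("name"))
```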

Closing Reflections

Managing systemic risks in general purpose AI for U.S. healthcare requires acting on several fronts at once. Healthcare leaders, practice owners, and IT managers must attend to data quality, transparency, human oversight, cybersecurity, and evolving regulation together.

With comprehensive risk management and careful integration of AI into daily work, healthcare providers can capture AI’s benefits while keeping patients safe and systems dependable. Real-world deployments show that AI front-office automation can help healthcare when it meets strict requirements and remains easy for staff to use.

Careful planning, close monitoring, and collaboration will be essential as U.S. healthcare providers navigate both the challenges and the opportunities of GPAI under regulatory scrutiny.

Frequently Asked Questions

What classification of AI risks does the EU AI Act define?

The EU AI Act classifies AI into unacceptable risk (prohibited), high-risk (regulated), limited risk (lighter transparency obligations), and minimal risk (unregulated). Unacceptable risks include manipulative or social scoring AI, while high-risk AI systems require strict compliance measures.

What obligations do providers of high-risk AI systems have?

Providers must implement risk management, ensure data governance with accurate datasets, maintain technical documentation, enable record-keeping for risk detection, provide clear user instructions, allow human oversight, ensure accuracy, robustness, cybersecurity, and establish a quality management system for compliance.

How does the AI Act regulate general purpose AI (GPAI) models?

GPAI providers must prepare technical documentation covering training, testing, and evaluation; provide usage instructions to downstream users; comply with copyright laws; and publish detailed training data summaries. Systemic risk GPAI models face further requirements including adversarial testing, incident reporting, and cybersecurity protection.

What constitutes ‘prohibited’ AI systems under the AI Act relevant to healthcare?

Prohibited AI includes systems deploying subliminal manipulation, exploiting vulnerabilities, biometric categorisation of sensitive attributes, social scoring, criminal risk assessment solely based on profiling, untargeted facial recognition scraping, and emotion inference in workplaces except for medical safety reasons.

What are the Annex III use cases relevant to healthcare triage AI systems?

AI systems used for health-related emergency call evaluation, triage prioritization, risk assessments in insurance, and profiling for health or economic status are high-risk use cases under Annex III, requiring strict compliance due to their profound impact on individual rights and outcomes.

How does the AI Act address transparency and user awareness in AI interactions?

For limited risk AI, developers and deployers must ensure end-users know they are interacting with AI, such as in chatbots. High-risk AI requires detailed technical documentation, instructions, and enabling human oversight to maintain transparency and accountability.

What role does human oversight play in high-risk AI systems for healthcare?

High-risk AI systems must be designed to allow deployers to implement effective human oversight, ensuring decisions influenced or made by AI, especially in triage, are reviewed by healthcare professionals to mitigate errors and uphold patient safety.

How are systemic risks defined and managed in GPAI models under the AI Act?

Systemic risk is indicated by training with compute above 10²⁵ FLOPs or high-impact capabilities. Managing this risk involves conducting adversarial testing, risk assessments, incident tracking, cybersecurity safeguards, and regular reporting to EU authorities to prevent widespread harm.
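As a very rough illustration of the compute threshold, the sketch below uses the common approximation that training compute is about six times the parameter count times the number of training tokens; this heuristic comes from the scaling-law literature, not from the AI Act, and the model sizes shown are hypothetical.

```python
# Back-of-the-envelope check against the 10^25 FLOP threshold using the
# "compute ≈ 6 × parameters × training tokens" approximation.
THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    return 6 * parameters * training_tokens

# Hypothetical example: a 200-billion-parameter model trained on 10 trillion tokens.
flops = estimated_training_flops(parameters=200e9, training_tokens=10e12)
print(f"{flops:.2e} FLOPs, over threshold: {flops > THRESHOLD_FLOPS}")
# -> 1.20e+25 FLOPs, over threshold: True
```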

What enforcement mechanisms are in place for AI system compliance in healthcare?

The AI Office within the EU Commission monitors compliance, conducts evaluations, and investigates systemic risks. Providers must maintain documentation and respond to complaints. Non-compliance with prohibitions can lead to enforcement actions including banning or restricting AI applications.

Why is emotion recognition AI prohibited in workplaces except medical contexts?

Emotion recognition is banned except for medical or safety reasons to protect individual privacy and prevent misuse or discrimination. In healthcare triage, emotion detection is permissible if it supports medical diagnosis or safety, ensuring ethical use aligned with patient well-being.