AI technologies such as machine learning and generative AI support data analysis, clinical decision-making, administrative tasks, and patient communication. Hospitals and clinics use AI to manage electronic health records (EHR), read medical images, predict risks, and even answer phones.
But AI also raises concerns about patient safety, privacy, fairness, and transparency. Poorly designed AI can cause mistakes, treat some patients unfairly, or leak private health information. Because of these risks, government agencies are creating rules to make sure AI is used safely in healthcare.
On October 30, 2023, President Joe Biden signed an executive order directing the Department of Health and Human Services (HHS) to create an AI Task Force. The Task Force must be in place by January 28, 2024, and then has one year to produce a plan for how to regulate AI in healthcare and human services.
The order reflects how quickly AI is spreading in healthcare while no clear framework yet exists to manage its risks. The Task Force aims to support innovation while keeping patients safe and protected.
The HHS AI Task Force brings together leaders from across HHS agencies with expertise in drug and device safety, public health, health IT, and research. The Task Force can also create special working groups to focus on issues like biosecurity, ethics, and monitoring AI’s real-world use.
The Task Force will also meet with private companies and seek public input to learn how AI is being used and what problems providers face.
The Task Force must make a plan within one year that covers AI rules for:

- research and discovery
- drug and device safety
- healthcare delivery and financing
- public health
Within 180 days of the executive order, the HHS Secretary must create an AI quality assurance policy. This policy will guide how AI tools are checked both before and after they reach the market, including monitoring real-world use to catch problems early and improve safety.
For example, CMS has already restricted Medicare Advantage plans from relying on algorithms or AI alone to make broad coverage decisions; such decisions must reflect each patient’s individual circumstances.
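The order does not say exactly what post-market monitoring should look like. As a rough illustration only, the Python sketch below compares a deployed model’s recent real-world accuracy against its pre-market baseline and flags a drop for human review; the window size and threshold are illustrative assumptions, not regulatory values.

```python
from collections import deque

class PostmarketMonitor:
    """Tracks a deployed model's real-world accuracy against its
    pre-market baseline and flags degradation for human review.
    Baseline, window, and threshold values are illustrative only."""

    def __init__(self, baseline_accuracy: float, window: int = 500,
                 max_drop: float = 0.05):
        self.baseline = baseline_accuracy      # accuracy measured before release
        self.max_drop = max_drop               # tolerated drop before alerting
        self.outcomes = deque(maxlen=window)   # rolling record of hits/misses

    def record(self, prediction, actual) -> None:
        self.outcomes.append(prediction == actual)

    def needs_review(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough post-deployment data yet
        live_accuracy = sum(self.outcomes) / len(self.outcomes)
        return (self.baseline - live_accuracy) > self.max_drop

# Example: a hypothetical triage model validated at 92% accuracy
monitor = PostmarketMonitor(baseline_accuracy=0.92)
```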
By September 30, 2024, HHS must launch an AI Safety Program for healthcare. Working with federally listed patient safety organizations, the program will:

- build a common framework for identifying and capturing clinical errors caused by AI in healthcare settings
- track those incidents in a central repository
- share findings and recommendations so providers can avoid repeating the same harms

The program is an acknowledgment that AI is not perfect: errors need to be tracked closely as these tools change fast.
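HHS has not published a reporting schema for that central repository. The record below is a hypothetical sketch of the kinds of fields such an incident report might carry; every field name is an assumption.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIIncidentReport:
    """Hypothetical structure for logging an AI-related clinical error.
    Field names are illustrative; HHS has not published a schema."""
    tool_name: str        # which AI system was involved
    setting: str          # e.g., "radiology", "scheduling"
    description: str      # what went wrong, in plain language
    patient_harm: bool    # whether the error reached or harmed a patient
    detected_by: str      # e.g., "clinician review", "automated monitor"
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

report = AIIncidentReport(
    tool_name="triage-assist",
    setting="emergency department",
    description="Model under-prioritized a patient with atypical symptoms.",
    patient_harm=False,
    detected_by="clinician review",
)
```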
AI is playing a bigger role in finding and testing new drugs, and the Task Force must make rules for these tools as well. It needs to determine where current FDA rules already work and where new guidance is needed to keep AI in drug research transparent, reproducible, and ethical.
Healthcare providers in the U.S. are using AI not only to support medical care but also to automate routine office tasks. AI reduces the burden on staff, improves the patient experience, and keeps offices running smoothly.
One use is AI-powered phone answering and scheduling services. For example, some tools can handle patient calls and appointments automatically. This cuts wait times, reduces mistakes with patient info, and frees staff to focus on harder tasks.
The rules the HHS Task Force develops will affect providers using AI this way. AI systems that handle private health information must follow HIPAA rules to keep data safe. Being open about how AI uses data and makes decisions is also important for building trust with patients and staff.
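The article does not prescribe how to protect data in practice. As a rough illustration only, the sketch below strips obvious identifiers from a call transcript before the text reaches any outside AI service. The patterns are examples and do not amount to HIPAA de-identification, which covers eighteen identifier types under the Safe Harbor method.

```python
import re

# Illustrative patterns only; full HIPAA Safe Harbor de-identification
# covers 18 identifier types and may require expert review.
PHI_PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(transcript: str) -> str:
    """Replace obvious identifiers in a call transcript before the
    text is passed to an external AI service."""
    for label, pattern in PHI_PATTERNS.items():
        transcript = pattern.sub(f"[{label.upper()} REDACTED]", transcript)
    return transcript

print(redact("Patient at 555-123-4567 confirmed, SSN 123-45-6789."))
# -> "Patient at [PHONE REDACTED] confirmed, SSN [SSN REDACTED]."
```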
Healthcare leaders and IT managers should inventory every AI tool they use, especially those that automate work and patient contact, and get ready to follow the new rules coming from the Task Force.
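There is no official inventory template yet. The record below is a hypothetical sketch of the details regulators are likely to ask about, such as whether a tool touches protected health information and whether a business associate agreement (BAA) is in place; all field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One entry in a practice's AI inventory. Fields are illustrative,
    chosen to match the concerns raised in this article."""
    name: str                # e.g., "front-desk phone assistant"
    vendor: str              # who supplies and maintains the tool
    function: str            # decision support, scheduling, billing, ...
    handles_phi: bool        # triggers HIPAA obligations if True
    baa_in_place: bool       # business associate agreement signed
    risk_review_date: str    # last internal risk assessment (ISO date)

inventory = [
    AIToolRecord("phone-scheduler", "ExampleVendor", "scheduling",
                 handles_phi=True, baa_in_place=True,
                 risk_review_date="2024-03-01"),
]
# Flag tools that touch PHI without a signed BAA
gaps = [t.name for t in inventory if t.handles_phi and not t.baa_in_place]
```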
Right now, AI regulation in U.S. healthcare is just getting started. The European Union has already moved ahead with comprehensive AI legislation, while the U.S. is trying to balance safety with innovation.
Several HHS agencies already help watch over AI:

- the FDA reviews AI used in medical devices and clinical software
- the ONC oversees certified health IT, including transparency for algorithms
- the Office for Civil Rights (OCR) enforces HIPAA privacy and security rules
- CMS sets rules for how AI can be used in coverage and payment decisions

But many AI tools change and learn over time, which strains these existing rules. The Task Force must develop new methods to monitor and manage the risks of such adaptive systems.
The HHS AI Task Force needs to make sure AI keeps patient privacy safe and does not cause unfair treatment. Healthcare providers getting federal money must follow laws against discrimination when using AI.
AI can unintentionally encode biases that lead to unequal care or denial of services for some patients. The executive order stresses that transparency is essential: healthcare organizations must be able to find and fix these problems early.
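The order does not prescribe how to test for bias. One simple screen, sketched below purely as an illustration, compares an AI tool’s approval or referral rates across demographic groups and flags large gaps for human review; the 0.8 cutoff borrows the "four-fifths" rule of thumb from employment law and is only an example.

```python
def disparity_check(outcomes_by_group: dict[str, list[bool]],
                    min_ratio: float = 0.8) -> list[str]:
    """Flag groups whose positive-outcome rate falls below min_ratio
    times the highest group's rate. The 0.8 cutoff mirrors the
    'four-fifths' rule of thumb and is an illustrative choice."""
    rates = {g: sum(v) / len(v) for g, v in outcomes_by_group.items() if v}
    best = max(rates.values())
    return [g for g, r in rates.items() if r < min_ratio * best]

# Hypothetical referral decisions from an AI triage tool
flagged = disparity_check({
    "group_a": [True] * 80 + [False] * 20,   # 80% referred
    "group_b": [True] * 55 + [False] * 45,   # 55% referred
})
print(flagged)  # ['group_b'] -> warrants human review of the model
```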
Patient privacy is a major focus since AI often needs access to private health info. HIPAA rules still apply, and there may be more rules in the future for better AI privacy protection. The Federal Trade Commission also works to keep data use fair and stop scams, especially when AI uses personal data.
Healthcare leaders should prepare for new rules while continuing to use AI in their practices. Some practical actions include:

- inventory every AI tool in use and note what data each one touches
- run risk assessments on tools that handle health information or influence care decisions
- fold AI standards into existing compliance and training programs
- watch for guidance from HHS and the Task Force as it is released

Following these steps will help reduce legal and operational risks while using AI to make care better and more efficient.
The executive order asks for teamwork beyond HHS. The Department of Defense and Department of Veterans Affairs are involved, especially with AI research for veterans’ healthcare.
The National Institute of Standards and Technology (NIST) released a voluntary AI Risk Management Framework in 2023. It will be expanded with guidance on generative AI to help healthcare organizations manage AI risks.
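The framework is built around four functions: Govern, Map, Measure, and Manage. One way a practice might use it, sketched below as a hypothetical internal checklist rather than any official NIST artifact, is to track whether each function has documented activities for every AI tool.

```python
# The four function names come from the NIST AI RMF 1.0; the checklist
# structure itself is a hypothetical internal artifact.
RMF_FUNCTIONS = ("govern", "map", "measure", "manage")

def rmf_gaps(tool_checklist: dict[str, bool]) -> list[str]:
    """Return RMF functions with no documented activity for a tool."""
    return [f for f in RMF_FUNCTIONS if not tool_checklist.get(f, False)]

checklist = {"govern": True, "map": True, "measure": False, "manage": False}
print(rmf_gaps(checklist))  # ['measure', 'manage']
```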
The White House Office of Management and Budget (OMB) has draft policies asking all federal agencies to appoint Chief AI Officers. These officers will help make sure AI is used responsibly and control risks in their agencies.
The creation of the HHS AI Task Force is the first big federal step to manage AI in healthcare. Medical practices in the U.S. need to understand the Task Force’s work and get ready for new rules.
AI is changing both simple office work and complicated medical tasks. As AI becomes a bigger part of healthcare daily work, rules will help make sure these tools improve patient care safely and fairly while protecting privacy.
Healthcare providers should expect new requirements for documentation, quality checks, and safety monitoring. Tracking new rules as they emerge and updating internal policies will help healthcare organizations adjust to this new era of AI oversight.
AI regulations in healthcare are in early stages, with limited laws. However, executive orders and emerging legislation are shaping compliance standards for healthcare entities.
The HHS AI Task Force will oversee AI regulation according to the executive order’s principles, with its strategic plan for managing AI-related legal risks in healthcare due in early 2025.
HIPAA restricts the use and disclosure of protected health information (PHI), requiring healthcare entities to ensure that AI tools comply with existing privacy standards.
The executive order emphasizes confidentiality, transparency, governance, and non-discrimination, and it addresses AI-enhanced cybersecurity threats.
Healthcare entities should inventory current AI use, conduct risk assessments, and integrate AI standards into their compliance programs to mitigate legal risks.
AI can introduce software vulnerabilities and can be exploited by bad actors. Compliance programs must adapt to treat AI as a significant cybersecurity risk.
NIST’s AI Risk Management Framework provides goals and actionable recommendations to help organizations manage the risks of AI tools.
Section 5 of the FTC Act may expose healthcare entities to liability for using AI in ways deemed unfair or deceptive, especially if it mishandles personally identifiable information.
Pending bills include requirements for transparency reports, mandatory compliance with NIST standards, and labeling of AI-generated content.
Healthcare entities should stay updated on AI guidance from executive orders and HHS and be ready to adapt their compliance plans accordingly.