Over the past two years, federal and state agencies in the United States have paid increasing attention to rules governing AI in healthcare. These rules mainly affect utilization management (UM), the process of deciding whether treatments are medically necessary, and prior authorization (PA), the requirement that payers approve certain treatments or medications before they are provided.
On October 30, 2023, President Biden issued an Executive Order directing the U.S. Department of Health and Human Services (HHS) to create a plan for using AI in health and human services. The plan aims to ensure AI tools are safe, reliable, and transparent, and that they follow existing laws, such as HIPAA, that protect patient privacy.
Starting January 1, 2024, Medicare Advantage (MA) organizations must follow new rules that prohibit basing medical necessity decisions solely on AI; they must consider each person's individual clinical situation. This requirement supports fair decisions and limits bias that can arise when AI operates without enough human review.
By January 1, 2027, these organizations must also implement a Prior Authorization Application Programming Interface (API), which is intended to speed up the PA process and improve communication between providers and payers.
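For teams planning ahead, the exchange can be pictured as a standard web API call. The sketch below is a minimal illustration, assuming a hypothetical FHIR R4-style endpoint (`https://payer.example.com/fhir`) and a prior authorization expressed as a `Claim` resource with `use` set to `preauthorization`; the actual payer API, authentication flow, and required fields will follow CMS and payer-specific implementation guides.

```python
import requests  # standard HTTP client; real integrations also need OAuth2 tokens

# Hypothetical FHIR R4 base URL for a payer's Prior Authorization API.
FHIR_BASE = "https://payer.example.com/fhir"

# A minimal prior authorization request expressed as a FHIR Claim resource.
# Field values are illustrative placeholders, not a complete payer payload.
prior_auth_request = {
    "resourceType": "Claim",
    "status": "active",
    "type": {"coding": [{"system": "http://terminology.hl7.org/CodeSystem/claim-type",
                         "code": "professional"}]},
    "use": "preauthorization",          # marks this Claim as a prior auth request
    "created": "2025-01-15",
    "patient": {"reference": "Patient/example-123"},
    "provider": {"reference": "Organization/clinic-456"},
    "priority": {"coding": [{"code": "normal"}]},
    "item": [{
        "sequence": 1,
        "productOrService": {
            "coding": [{"system": "http://www.ama-assn.org/go/cpt",
                        "code": "70551"}]  # example CPT code (MRI brain)
        }
    }],
}

def submit_prior_auth(resource: dict, token: str) -> dict:
    """Send the request to the payer endpoint and return the parsed response."""
    response = requests.post(
        f"{FHIR_BASE}/Claim",
        json=resource,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/fhir+json",
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```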
Several states have also passed their own AI healthcare laws, including Colorado, California, and Illinois, while others, such as New York, are considering rules to make AI use in utilization management more transparent and regulated.
AI has valuable uses, but medical practices should watch out for several problems when adding AI systems, including bias, privacy exposure, and decisions that are hard to explain. Because of these challenges, healthcare leaders need clear plans to stay compliant while getting the benefits of AI.
Rules about AI in healthcare change fast, so set up a process to track federal rules from CMS and HHS as well as the state laws that affect your work. Work with legal experts or consultants who know healthcare AI rules to assess new requirements, and run regular internal audits to find weak spots in AI use and privacy so they can be fixed quickly.
Create a system to classify AI tools by risk level. High-risk AI, especially tools that influence medical decisions or prior authorizations, needs strong validation, clear explanations, and human review. Check training data for fairness, document how each model reaches its decisions, and keep records of these steps. This matches what experts advise: managing AI throughout its life, from design to use.
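One lightweight way to operationalize this is a simple risk register that records each AI tool's tier and the controls it requires. The Python sketch below is illustrative only; the tier names, control lists, and `AiToolRecord` structure are assumptions made for this example, not a regulatory standard.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class RiskTier(Enum):
    """Illustrative tiers; actual categories should come from your governance policy."""
    LOW = "low"            # e.g., appointment reminders
    MEDIUM = "medium"      # e.g., documentation drafting with clinician sign-off
    HIGH = "high"          # e.g., tools influencing UM or prior authorization


# Controls assumed to be required per tier in this sketch.
REQUIRED_CONTROLS = {
    RiskTier.LOW: ["privacy review"],
    RiskTier.MEDIUM: ["privacy review", "bias check", "periodic audit"],
    RiskTier.HIGH: ["privacy review", "bias check", "periodic audit",
                    "validation study", "explainability documentation",
                    "human review of every adverse decision"],
}


@dataclass
class AiToolRecord:
    """A single entry in the practice's AI inventory."""
    name: str
    purpose: str
    tier: RiskTier
    completed_controls: List[str] = field(default_factory=list)

    def outstanding_controls(self) -> List[str]:
        """Controls still needed before (or while) the tool is used."""
        return [c for c in REQUIRED_CONTROLS[self.tier]
                if c not in self.completed_controls]


# Example: a prior-authorization support tool is high risk and still needs work.
pa_tool = AiToolRecord(
    name="PA triage model",
    purpose="Suggests whether a prior authorization request is likely approvable",
    tier=RiskTier.HIGH,
    completed_controls=["privacy review", "bias check"],
)
print(pa_tool.outstanding_controls())
```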
Since protecting patient data is critical, use strong data governance. Apply anonymization and encryption during AI training and use, control who can access the data, and keep logs of usage to deter misuse. Tools like intelligent tokenization can keep data useful but private, supporting research and operations safely.
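As a concrete illustration of the tokenization idea, the sketch below replaces direct identifiers with keyed tokens before records are passed to an AI pipeline and logs each access. It is a simplified, assumption-laden example (HMAC-based tokens, an in-memory log), not a substitute for a full de-identification or encryption strategy.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Secret key for tokenization; in practice this would live in a key management system.
TOKEN_KEY = b"replace-with-a-managed-secret"

# Fields treated as direct identifiers in this example.
IDENTIFIER_FIELDS = {"name", "mrn", "phone"}

access_log = []  # in-memory audit trail; a real system needs durable, tamper-evident storage


def tokenize(value: str) -> str:
    """Deterministically map an identifier to an opaque token."""
    return hmac.new(TOKEN_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]


def prepare_for_ai(record: dict, requested_by: str, purpose: str) -> dict:
    """Return a copy of the record with identifiers tokenized, and log the access."""
    safe = {k: (tokenize(v) if k in IDENTIFIER_FIELDS else v) for k, v in record.items()}
    access_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "requested_by": requested_by,
        "purpose": purpose,
        "fields_tokenized": sorted(IDENTIFIER_FIELDS & record.keys()),
    })
    return safe


patient = {"name": "Jane Doe", "mrn": "000123", "phone": "555-0100",
           "diagnosis_code": "E11.9", "visit_type": "follow-up"}
print(json.dumps(prepare_for_ai(patient, "scheduling-bot", "appointment triage"), indent=2))
```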
Because of laws like California's AB 3030, patients must know when AI is part of their care and agree to it. Update your patient materials and consent forms, train staff so they can explain AI clearly, and keep records of patient consent and disclosures.
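A small, auditable record of each disclosure and consent decision makes this easy to demonstrate later. The sketch below is one possible shape for such a record, with hypothetical field names; actual forms and retention rules should follow your legal counsel's guidance.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass
class AiConsentRecord:
    """One patient's acknowledgment of AI involvement in a specific workflow."""
    patient_id: str            # internal identifier, not a direct identifier
    workflow: str              # e.g., "AI-assisted phone scheduling"
    disclosure_version: str    # which version of the patient-facing notice was shown
    consent_given: bool
    recorded_by: str           # staff member who documented the conversation
    recorded_at: str


def record_consent(patient_id: str, workflow: str, disclosure_version: str,
                   consent_given: bool, recorded_by: str) -> AiConsentRecord:
    """Create a timestamped consent entry ready to be stored with the chart."""
    return AiConsentRecord(
        patient_id=patient_id,
        workflow=workflow,
        disclosure_version=disclosure_version,
        consent_given=consent_given,
        recorded_by=recorded_by,
        recorded_at=datetime.now(timezone.utc).isoformat(),
    )


entry = record_consent("pt-789", "AI-assisted phone scheduling", "notice-v2",
                       consent_given=True, recorded_by="front-desk-01")
print(json.dumps(asdict(entry), indent=2))
```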
Make sure qualified people review AI-made decisions, especially in utilization management and prior authorization. Illinois, for example, requires clinical peers to take part in adverse determinations. Teams of clinicians, data experts, and compliance officers should monitor AI outputs, find bias, and fix errors.
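In code, the guardrail can be as simple as never letting an adverse AI recommendation become final without a named clinical reviewer. The sketch below assumes a hypothetical recommendation object and review rule; it illustrates the routing logic only.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AiRecommendation:
    """Hypothetical output of a UM/PA support model."""
    request_id: str
    suggested_outcome: str      # "approve" or "deny"
    confidence: float
    rationale: str


@dataclass
class FinalDetermination:
    request_id: str
    outcome: str
    decided_by: str             # always a human for adverse outcomes
    peer_reviewer: Optional[str] = None


def finalize(rec: AiRecommendation, clinician: str,
             peer_reviewer: Optional[str] = None) -> FinalDetermination:
    """Apply the human-review rule before any determination becomes final."""
    if rec.suggested_outcome == "deny":
        # Adverse recommendations are never auto-finalized: a clinical peer must
        # review the case (mirroring, e.g., Illinois-style peer review requirements).
        if peer_reviewer is None:
            raise ValueError(
                f"Request {rec.request_id}: adverse outcome requires a clinical peer reviewer")
        return FinalDetermination(rec.request_id, "deny", decided_by=clinician,
                                  peer_reviewer=peer_reviewer)
    # Favorable recommendations still carry a named accountable clinician.
    return FinalDetermination(rec.request_id, rec.suggested_outcome, decided_by=clinician)


suggestion = AiRecommendation("PA-1042", "deny", 0.71, "criteria X not documented")
determination = finalize(suggestion, clinician="Dr. A. Patel", peer_reviewer="Dr. B. Chen")
print(determination)
```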
Because the Prior Authorization API must be ready by January 1, 2027, work with payers early to align your systems. Clear communication makes adoption smoother and keeps decisions on time, and building relationships with regulators makes it easier to adapt as new rules arrive.
AI can also help healthcare offices with automation, such as phone systems and answering services; some companies, for example, use AI to handle calls and scheduling while staying within the rules. Automating patient communication, appointments, referrals, and insurance checks can cut down on administrative work. Automation must still follow safe practices: disclosing AI involvement, protecting patient data, and escalating to staff when needed. Used this way, AI for front-office tasks can speed up responses, reduce dropped calls, and improve the patient experience, while ongoing compliance preserves patient trust and meets legal requirements.
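As a simple illustration of those safe practices, the sketch below shows an automated call handler that discloses AI involvement, handles only routine scheduling intents, and escalates anything else to staff while logging every interaction. The intents and confidence threshold are assumptions made for the example.

```python
from datetime import datetime, timezone

AI_DISCLOSURE = "You are speaking with an automated assistant. Say 'staff' at any time to reach a person."

# Intents this sketch allows the assistant to handle on its own.
ROUTINE_INTENTS = {"schedule_appointment", "confirm_appointment", "office_hours"}

interaction_log = []  # a real deployment would persist this for audits


def handle_call(caller_id: str, intent: str, confidence: float) -> str:
    """Decide whether the assistant responds or the call goes to a human."""
    escalate = intent not in ROUTINE_INTENTS or confidence < 0.8
    interaction_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "caller": caller_id,
        "intent": intent,
        "confidence": confidence,
        "handled_by": "staff" if escalate else "assistant",
        "disclosure_played": True,
    })
    if escalate:
        return "Transferring you to a member of our staff."
    return f"Happy to help with that ({intent.replace('_', ' ')})."


print(handle_call("caller-001", "schedule_appointment", 0.92))  # handled by the assistant
print(handle_call("caller-002", "medication_question", 0.95))   # clinical topic -> staff
```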
As U.S. healthcare providers use AI more in clinical and office work, regulation will demand more attention. Experts stress the importance of balancing new technology with ethics, patient privacy, and clear, responsible AI systems. Ashit Vora, for example, says that managing compliance requires risk-based rules and standard processes that healthcare organizations can follow.
Healthcare groups should make AI governance plans that cover security, fairness, clarity, and legal responsibilities.
Teams that include tech workers, doctors, compliance officers, and regulators will need to work together.
Finding and fixing bias through diverse data and regular reviews is important.
Protecting patient privacy by anonymizing and encrypting data helps build trust in AI.
While AI can reduce paperwork and improve care coordination, doctors and healthcare workers keep the final say in patient care, so human values stay central.
Medical administrators, owners, and IT managers who follow these rules and good practices will be better prepared to use AI to improve how they work while staying within the law and protecting quality of care and privacy. Using AI is no longer optional; following the rules is the foundation for making it succeed.
Over the past two years, both federal and state agencies have begun regulating AI in healthcare, particularly in utilization management (UM) and prior authorization (PA), the processes payers use to decide whether services are medically necessary and covered.
The Executive Order requires the U.S. Department of Health and Human Services (HHS) to create a strategic plan for deploying AI in health services, including developing an AI assurance policy for evaluating AI tools.
The Medicare Advantage Policy Rule mandates that MA organizations base medical necessity determinations on individual circumstances rather than solely on algorithms, supporting fairness in AI-driven decisions and consistency with existing protections such as HIPAA. These requirements apply to MA coverage starting January 1, 2024, and include provisions addressing the use of AI in the PA process. A separate CMS rule, the Interoperability and Prior Authorization Final Rule, mandates that payers implement a Prior Authorization API by January 1, 2027, and requires timely decisions and involvement of providers in the decision-making process.
States like Colorado, California, Illinois, and New York have enacted various laws requiring transparency, consent, oversight, and assessments to prevent algorithmic discrimination in AI systems used in healthcare.
Colorado’s Consumer Protections in Interactions with AI Systems Act requires developers to avoid algorithmic discrimination and disclose AI decision impacts, along with conducting impact assessments by 2026.
California's AB 3030 mandates that healthcare providers inform patients when AI is used in their care and obtain explicit consent before relying on AI systems.
Stakeholders should consistently monitor regulatory developments, assess current processes, carefully integrate AI functionality, and engage with other parties to navigate complexities and establish best practices.
The regulatory environment around AI in healthcare is changing rapidly, requiring payers and providers alike to remain vigilant and adaptable to stay compliant with new federal and state regulations.