Over the past two years, federal and state governments have expanded the rules governing how AI is used in healthcare prior authorization (PA) and utilization management (UM). These rules protect patient privacy, ensure medical decisions are fair, require clear disclosure of AI’s role, and prohibit AI from making medical decisions on its own without human checks.
Under an October 2023 executive order, the U.S. Department of Health and Human Services (HHS) must create plans and policies for AI in healthcare, including predictive and generative AI tools that affect patient care, payment, and coverage decisions. The government must ensure these AI systems are safe, accurate, and ethical.
The Centers for Medicare and Medicaid Services (CMS) has issued new rules for Medicare Advantage (MA) plans. The Contract Year 2024 Medicare Advantage final rule requires MA organizations to base medical necessity determinations on each patient’s individual circumstances, not solely on AI algorithms. This keeps decisions fair and consistent with patient privacy laws such as HIPAA.
Also, the CMS Interoperability and Prior Authorization rule takes effect January 1, 2027. It requires payers to implement a Prior Authorization API and to deliver PA decisions within 72 hours for urgent requests and within seven days for standard ones. AI can speed this up, but humans must remain involved so decisions do not rest on machines alone.
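The timing requirement itself is simple enough to encode directly in a compliance-tracking system. A minimal Python sketch (the function and parameter names are illustrative, not taken from the rule text):

```python
from datetime import datetime, timedelta

# Hypothetical deadline calculator for the CMS timing requirements:
# 72 hours for urgent requests, seven calendar days for standard ones.
def pa_decision_deadline(received_at: datetime, urgent: bool) -> datetime:
    """Return the latest datetime by which a PA decision must be sent."""
    window = timedelta(hours=72) if urgent else timedelta(days=7)
    return received_at + window

def is_overdue(received_at: datetime, urgent: bool, now: datetime) -> bool:
    """Flag requests whose decision window has lapsed."""
    return now > pa_decision_deadline(received_at, urgent)
```

A worklist built on `is_overdue` can surface at-risk requests before their windows close, rather than after.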
Several states have enacted their own laws on AI in healthcare PA and UM:
Colorado’s 2023 law requires impact assessments for “high-risk” AI systems by 2026, requires that patients be told when AI is used in decisions, and gives patients the right to appeal AI-based decisions.
California’s 2024 laws require explicit patient consent before providers use AI and mandate human review of utilization cases so that decisions are never made by AI alone.
Illinois’ amendments require that automated adverse determinations be evidence-based and reviewed by clinical peers, consistent with URAC or NCQA standards.
New York’s proposed bill would certify insurers’ AI tools to prevent discrimination and require insurers to be transparent about how they use AI.
Because these state rules differ, health plans and medical practices must proceed carefully, especially when they operate in multiple states.
Regulators want organizations to tell patients and providers clearly when AI is part of their care decisions. Patients should know how AI influenced a decision about their care and be able to appeal it if needed. Fair treatment means training AI on diverse data sets so it does not discriminate. Organizations also need to educate providers and patients about AI use and decision timelines to build trust.
A key requirement is that trained human clinical reviewers stay involved. AI can assist by reviewing cases, but physicians or nurses must check any adverse decision to confirm it is correct and fair. This requirement guards against sole reliance on AI and keeps patients safe.
Because AI rules in healthcare change quickly, organizations struggle to keep their systems and policies current. They need continuous monitoring, reporting, and quality checks to ensure AI decisions are accurate and timely, along with adherence to federal and state rules and to standards such as URAC and NCQA.
Given these challenges, healthcare leaders and IT teams can use several methods to follow rules and improve AI-based PA and UM systems.
Healthcare organizations should assign staff or committees to monitor federal and state rules regularly. Tracking CMS rules, state laws, and accreditation standards helps them prepare for new requirements. Early preparation lowers the risk of fines and keeps systems running smoothly.
Organizations can use data to evaluate PA and UM processes. Measuring approval rates, appeal outcomes, and decision turnaround times helps assess AI tools objectively. Reports like those created for Johns Hopkins Health Care aid in quality improvement and verifying compliance.
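These core metrics are straightforward to compute once PA outcomes are captured as structured records. A minimal Python sketch, with hypothetical field names standing in for whatever the organization’s case-management system actually stores:

```python
from statistics import median

# Hypothetical PA outcome records; the field names are illustrative only.
cases = [
    {"approved": True,  "appealed": False, "overturned": False, "hours": 10},
    {"approved": False, "appealed": True,  "overturned": True,  "hours": 60},
    {"approved": True,  "appealed": False, "overturned": False, "hours": 5},
    {"approved": False, "appealed": True,  "overturned": False, "hours": 80},
]

# Share of requests approved.
approval_rate = sum(c["approved"] for c in cases) / len(cases)

# Share of appealed denials that were overturned -- a high rate can
# signal problems with the initial (AI-assisted) determinations.
appeals = [c for c in cases if c["appealed"]]
overturn_rate = sum(c["overturned"] for c in appeals) / len(appeals) if appeals else 0.0

# Median turnaround time in hours.
median_turnaround = median(c["hours"] for c in cases)
```

Tracked over time and segmented by request type, these three numbers already cover much of what regulators and accreditors ask about.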
Health plans and medical groups may establish Utilization Management Committees, as NCQA suggests, to monitor data, address problems, and coordinate among providers, vendors, and payers.
Policies must ensure that every adverse medical decision receives a second review by qualified clinical staff. Clear processes that pair AI assistance with human review help meet CMS and state requirements.
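Such a policy can be enforced at the workflow-routing layer rather than left to convention. A hedged sketch, assuming a hypothetical AI recommendation value; anything other than a clean approval falls through to a human reviewer:

```python
# Hypothetical routing logic: an AI recommendation alone may support an
# approval, but any adverse or uncertain recommendation must be queued
# for a qualified clinical reviewer before a decision is issued.
def route_decision(ai_recommendation: str) -> str:
    if ai_recommendation == "approve":
        return "finalize_approval"
    # Denials and anything unrecognized always require human review.
    return "queue_for_clinical_reviewer"
```

Making the human-review path the default (rather than an explicit branch for “deny”) means unexpected or malformed AI outputs can never slip through as final adverse decisions.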
Open communication about AI use helps build trust and meets rules for disclosure. This means explaining how AI is used, getting patient consent as California law requires, and offering easy appeal options.
Before deploying AI tools, organizations should validate their accuracy against real clinical data. Comparing AI results side by side with human reviews demonstrates reliability. After deployment, ongoing checks catch bias, delays, or mistakes. Regular audits and updates to training data help reduce errors and keep AI fair.
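One simple side-by-side check is the raw agreement rate between AI determinations and human reviewer determinations on the same historical cases. A minimal sketch; in practice per-category agreement and bias audits across demographic groups would also be needed:

```python
# Compare AI determinations against human reviewer determinations
# made on the same set of historical cases.
def agreement_rate(ai_decisions, human_decisions):
    if len(ai_decisions) != len(human_decisions):
        raise ValueError("decision lists must be the same length")
    matches = sum(a == h for a, h in zip(ai_decisions, human_decisions))
    return matches / len(ai_decisions)
```

A low agreement rate on denials in particular is a strong signal that the tool is not ready for production use.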
If third-party vendors handle UM, organizations should confirm those vendors are accredited by groups such as NCQA. Accredited vendors follow standards that lighten the oversight burden and ensure consistent, compliant UM decisions.
Adapting workflows to new interoperability rules, such as by adopting PA APIs, makes submitting prior authorizations easier and faster. Automating parts of this process reduces the workload on providers and staff and helps meet CMS deadlines.
Following examples like UnitedHealthcare, which removed about 20% of prior authorizations for non-urgent planned services, organizations can cut unnecessary PA steps. This speeds care, cuts delays, and lowers administrative work, focusing resources on cases needing the most checking.
Teaching providers and staff about changing PA/UM rules, AI abilities and limits, and submission processes supports correct and timely handling of requests. Clear patient communication about PA steps and AI use helps with openness and patient involvement.
AI plays important roles in supporting healthcare PA and UM beyond just following rules. It can help make these processes faster, more accurate, and consistent.
Blue Cross Blue Shield of Massachusetts showed that adding AI to its web portals enabled immediate automatic processing of 88% of prior authorizations, cutting the delays and manual errors common in traditional PA reviews.
AI can help verify clinical data, match requests against medical guidelines, and flag cases needing human review. Automated tools find missing documents, errors, or possible fraud faster than people alone, letting healthcare teams focus on complex cases that require clinical judgment.
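Flagging logic of this kind is often rule-based before any machine learning is involved. A hedged sketch with hypothetical field names; the required-field list and review triggers would come from the organization’s own intake requirements:

```python
# Hypothetical intake check: flag requests with missing documentation
# or characteristics that should always route to human clinical review.
REQUIRED_FIELDS = {"diagnosis_code", "procedure_code", "clinical_notes"}

def triage(request: dict) -> list:
    """Return a list of flags for a PA request; empty means clean."""
    flags = [f"missing:{f}" for f in sorted(REQUIRED_FIELDS - request.keys())]
    if request.get("high_cost") or request.get("experimental"):
        flags.append("needs_clinical_review")
    return flags
```

Running checks like this at submission time catches incomplete requests before they consume a reviewer’s time or start the regulatory decision clock with missing information.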
AI helps meet CMS’s PA rules requiring urgent decisions within 72 hours and standard ones within seven days. By rapidly assembling and analyzing the required information, AI cuts wait times and helps organizations stay within the timeline rules.
Combining workflow automation with AI reporting lets organizations track how providers submit requests, identify delays or compliance gaps, and segment providers by performance. This data can guide the removal of unneeded PA steps and reward strong performers.
Automating PA eases work for clinical and administrative staff. It also lowers patient frustration from delays or unclear decisions. Good workflow automation supports quicker care and better patient satisfaction.
The required PA APIs connect payer and provider systems. AI builds on this connectivity to automate submissions, track request status, and deliver instant or near-instant decisions, supporting fully electronic PA workflows that meet the upcoming CMS rules.
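What such an integration might look like is sketched below. The endpoint paths, payload fields, and class names are hypothetical, not taken from the CMS rule or any specific FHIR implementation guide; the transport is injected so the same logic can run against a real authenticated HTTPS client or a test stub:

```python
import json

# Hedged sketch of a provider-side PA API client.
class PAClient:
    def __init__(self, transport):
        # transport: callable (method, path, body) -> dict response.
        # In production this would wrap an authenticated HTTPS call
        # to the payer's Prior Authorization API.
        self.transport = transport

    def submit(self, request_body: dict) -> str:
        """Submit a PA request and return the payer-assigned request ID."""
        resp = self.transport("POST", "/prior-auth", json.dumps(request_body))
        return resp["request_id"]

    def status(self, request_id: str) -> str:
        """Poll the current status of a previously submitted request."""
        resp = self.transport("GET", f"/prior-auth/{request_id}", None)
        return resp["status"]
```

Separating the client logic from the transport also makes it easy to test submission and status-tracking workflows without touching a live payer endpoint.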
With AI and automation, healthcare organizations can set up real-time dashboards tracking decisions, appeal rates, and timing. This supports continuous compliance monitoring and improves transparency for all stakeholders.
Healthcare leaders and IT managers handling PA and UM in the U.S. face significant challenges in adopting AI while following changing rules. Federal and state laws require transparency, patient consent, human review, and protection from bias in AI-assisted decisions. Medical groups and health plans must use data tracking, regular testing, clear communication, and flexible workflows to meet these requirements.
Technology partners who understand these rules and design tools that follow laws can help healthcare providers meet compliance needs while making utilization management more efficient and improving patient care.
The Medicare Program; Contract Year 2024 Policy and Technical Changes to the Medicare Advantage Program final rule issued by CMS mandates that Medicare Advantage organizations ensure medical necessity determinations consider the specific individual’s circumstances and comply with HIPAA. AI can assist but cannot solely determine medical necessity, ensuring fairness and mechanisms to contest AI decisions.
Effective January 1, 2027, the CMS Interoperability and Prior Authorization rule requires payers to implement a Prior Authorization Application Programming Interface (API) to streamline the PA process. Decisions must be sent within 72 hours for urgent requests and seven days for standard requests. AI may be deployed to meet these timelines, but providers must remain involved in decision-making.
Signed on October 30, 2023, the executive order mandates that HHS develop policies and regulatory actions for AI use in healthcare, including predictive and generative AI in healthcare delivery, financing, and patient experience. It also calls for AI assurance policies to enable evaluation and oversight of AI healthcare tools.
Examples include Colorado’s 2023 act requiring impact assessments and anti-discrimination measures for AI systems used in healthcare decisions; California’s AB 3030 requiring patient consent for AI use and Senate Bill 1120 mandating human review of UM decisions; Illinois’ H2472 requiring clinical peer review of adverse determinations and evidence-based criteria; and pending New York legislation requiring insurance disclosures and algorithm certification.
Plans must navigate varying state and federal regulations, ensure AI systems do not result in discrimination, guarantee that clinical reviewers oversee adverse decisions, maintain transparency about AI use, and implement mechanisms for reviewing and contesting AI-generated determinations to remain compliant across jurisdictions.
Regulations emphasize that qualified human clinical reviewers must oversee and validate adverse medical-necessity decisions, preventing sole reliance on AI algorithms and assuring fairness, accuracy, and compliance with legal standards in UM/PA processes.
AI systems must be tested on representative datasets to avoid bias and inaccuracies, with side-by-side comparisons to clinical reviewer decisions. After deployment, continuous monitoring of decision accuracy, timeliness, patient/provider complaints, and effectiveness is critical to detect and correct weaknesses.
Insurers and healthcare providers should disclose AI involvement in decisions to patients and providers, including how AI contributed to decisions, ensuring individuals are informed and entitled to appeal AI-generated determinations, promoting trust and accountability.
Engagement with regulators, healthcare providers, patient groups, and technology experts helps navigate regulatory complexities, develop ethical best practices, and foster trust, ensuring AI in UM/PA improves decision quality while adhering to evolving standards and patient rights.
Continuous review of regulatory changes, internal quality assurance, periodic audits for algorithm performance, adherence to clinical guidelines, and responsiveness to complaints are necessary to ensure AI systems remain compliant, fair, and effective in prior authorization and utilization management.