As artificial intelligence (AI) becomes more common in United States healthcare, its use in utilization management (UM) and medical necessity decisions is drawing close scrutiny. AI can speed up and streamline prior authorization (PA) and utilization review, but new federal and state rules make clear that AI cannot make medical necessity decisions on its own. Qualified human clinical reviewers must validate AI-assisted decisions to ensure they are fair, accurate, and ethically sound. This article explains why human review remains essential when AI is used in UM, describes the regulations governing AI in healthcare, and shows how healthcare organizations can combine AI and human review effectively.
Utilization management is a core healthcare function: it determines whether medical services or treatments are medically necessary, with the goal of delivering appropriate care while managing costs. AI tools are increasingly used to make these determinations faster and more consistent, since AI can rapidly analyze large volumes of patient data and apply clinical criteria automatically.
AI helps by reducing paperwork, speeding up prior authorization decisions, and standardizing processes. For example, the Centers for Medicare & Medicaid Services (CMS) finalized a rule in 2024 requiring payers to implement a dedicated Prior Authorization Application Programming Interface (API) by January 1, 2027. The API supports rapid communication of approval or denial decisions: within 72 hours for urgent requests and seven days for standard ones. AI can help payers meet these deadlines by automating parts of the process.
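As a concrete illustration of those timing requirements, the sketch below computes the latest allowable decision time for a PA request. The function and field names are hypothetical, not part of any payer's actual API; only the 72-hour and seven-day windows come from the rule described above.

```python
from datetime import datetime, timedelta

# Decision windows from the CMS rule described above: 72 hours for
# urgent (expedited) requests, seven calendar days for standard ones.
DECISION_WINDOWS = {
    "urgent": timedelta(hours=72),
    "standard": timedelta(days=7),
}

def decision_due_by(received_at: datetime, urgency: str) -> datetime:
    """Return the latest time a PA decision must be communicated."""
    if urgency not in DECISION_WINDOWS:
        raise ValueError(f"unknown urgency level: {urgency!r}")
    return received_at + DECISION_WINDOWS[urgency]

# Example: a request received March 1, 2027 at 9:00 a.m.
received = datetime(2027, 3, 1, 9, 0)
print(decision_due_by(received, "urgent"))    # 2027-03-04 09:00:00
print(decision_due_by(received, "standard"))  # 2027-03-08 09:00:00
```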
Despite AI’s benefits, federal and state rules require that human clinical reviewers be involved in medical necessity decisions tied to AI in UM and PA. This requirement guards against sole reliance on algorithms, which can miss clinical nuances that a trained professional would recognize.
These rules underscore that fairness and accuracy come first. Regulators and industry broadly agree that AI can assist with utilization management, but human reviewers must confirm any adverse determination to uphold compliance and ethics, reduce bias, and satisfy the law.
AI works by finding patterns in large datasets, but it may miss a patient’s full medical history or social context. Human reviewers supply that context when making decisions.
AI systems can inherit biases from their training data or development process, which may produce unfair results for minority populations or atypical cases. Human reviewers can check AI recommendations and correct biased outcomes to support fairness.
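A minimal sketch of one such check appears below: it compares AI denial rates across patient subgroups and flags large disparities for clinical review. The record layout and the five-percentage-point threshold are illustrative assumptions, not regulatory standards.

```python
from collections import defaultdict

# Toy decision log; in practice this would come from the UM system.
decisions = [
    {"group": "A", "ai_denied": False},
    {"group": "A", "ai_denied": True},
    {"group": "B", "ai_denied": True},
    {"group": "B", "ai_denied": True},
]

counts = defaultdict(lambda: [0, 0])  # group -> [denials, total]
for d in decisions:
    counts[d["group"]][0] += d["ai_denied"]
    counts[d["group"]][1] += 1

rates = {g: denied / total for g, (denied, total) in counts.items()}
spread = max(rates.values()) - min(rates.values())
print(rates)       # {'A': 0.5, 'B': 1.0}
if spread > 0.05:  # illustrative threshold, not a regulatory standard
    print(f"denial-rate disparity of {spread:.0%} warrants clinical review")
```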
Patients and doctors have the right to know how care decisions are made. Human reviewers provide a channel for explaining and appealing AI-influenced decisions, and they establish accountability, which builds trust.
Regulations require human clinical review to protect privacy under HIPAA, account for individual circumstances, and verify that decisions rest on evidence. Without it, organizations risk legal exposure and overturned AI-driven determinations.
Rules on informed consent and patient privacy also demand human oversight. Strong governance ensures AI is used responsibly and respects patient rights. For example, California and Colorado require that patients be told when AI is used and retain the right to contest AI-influenced decisions.
AI in UM, especially for prior authorization and medical necessity checks, typically arrives with workflow automation that smooths the process. But AI should support, not replace, human clinical judgment.
AI often helps by analyzing patient data quickly, applying evidence-based criteria consistently, reducing administrative paperwork, and flagging complex cases for human review (a routing sketch follows below). Automation cuts delays and lets human reviewers focus on cases that need judgment. Experts at the U.S. Department of Health and Human Services and CMS note that pairing AI and automation with human review can improve decision accuracy while staying within the rules. This approach also helps payers and providers meet CMS deadlines for PA decisions, which keeps workflows compliant under current law.
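The sketch below shows one way such routing could work, assuming a hypothetical request record: AI-supported approvals may be automated, but any adverse (denial) recommendation is queued for a qualified clinical reviewer, consistent with the human-review requirements discussed above. The confidence threshold is an illustrative assumption.

```python
from dataclasses import dataclass
from enum import Enum

class Recommendation(Enum):
    APPROVE = "approve"
    DENY = "deny"

@dataclass
class PARequest:
    request_id: str
    urgency: str
    ai_recommendation: Recommendation
    ai_confidence: float

def route(request: PARequest) -> str:
    """Route a PA request: AI may support approvals, but any adverse
    recommendation must be validated by a clinical reviewer."""
    if request.ai_recommendation is Recommendation.DENY:
        return "clinical_reviewer_queue"  # never auto-deny
    if request.ai_confidence < 0.90:      # illustrative threshold
        return "clinical_reviewer_queue"  # low confidence -> human
    return "auto_approve"                 # approvals may be automated

req = PARequest("PA-1001", "urgent", Recommendation.DENY, 0.97)
print(route(req))  # clinical_reviewer_queue
```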
Because AI systems and regulations keep evolving, healthcare organizations must run ongoing quality checks on AI-assisted UM.
Healthcare managers, especially those running U.S. medical practices, must weigh these rules and ethical considerations when using AI in utilization management: compliance with overlapping federal and state regulations, human clinical review of adverse determinations, testing and monitoring for bias, transparency about AI involvement, and mechanisms for patients and providers to contest AI-influenced decisions.
In the United States, medical necessity decisions in utilization management are increasingly regulated to ensure AI use is fair, accurate, and transparent. Human clinical reviewers play a key role in validating AI decisions, preventing unfair outcomes, and satisfying new rules. AI and automation tools make the process faster and more consistent, but they cannot replace human judgment and review.
Healthcare managers, owners, and IT staff should build systems that pair AI with clinical review, maintain transparency, and check quality regularly. Doing so meets legal requirements and keeps the focus on patient care.
The Medicare Program; Contract Year 2024 Policy and Technical Changes to the Medicare Advantage Program final rule issued by CMS mandates that Medicare Advantage organizations ensure medical necessity determinations consider the specific individual’s circumstances and comply with HIPAA. AI can assist but cannot solely determine medical necessity, and there must be fairness safeguards and mechanisms to contest AI-influenced decisions.
The CMS Interoperability and Prior Authorization final rule, effective by January 1, 2027, requires payers to implement a Prior Authorization Application Programming Interface (API) to streamline the PA process. Decisions must be communicated within 72 hours for urgent requests and seven days for standard requests. AI may be deployed to meet these timelines, but providers must remain involved in decision-making.
The Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, signed on October 30, 2023, directs HHS to develop policies and regulatory actions for AI use in healthcare, including predictive and generative AI in healthcare delivery, financing, and patient experience. It also calls for AI assurance policies to enable evaluation and oversight of AI healthcare tools.
Examples include Colorado’s 2023 act requiring impact assessments and anti-discrimination measures for AI systems used in healthcare decisions; California’s AB 3030 requiring patient consent for AI use and Senate Bill 1120 mandating human review of UM decisions; Illinois’ H2472 requiring clinical peer review of adverse determinations and evidence-based criteria; and pending New York legislation requiring insurance disclosures and algorithm certification.
Plans must navigate varying state and federal regulations, ensure AI systems do not produce discriminatory results, and guarantee that clinical reviewers oversee adverse decisions. They must also maintain transparency about AI use and implement mechanisms for reviewing and contesting AI-generated determinations to remain compliant across jurisdictions.
Regulations emphasize that qualified human clinical reviewers must oversee and validate adverse decisions related to medical necessity to prevent sole reliance on AI algorithms, assuring fairness, accuracy, and compliance with legal standards in UM/PA processes.
AI systems must be tested on representative datasets to avoid bias and inaccuracies, with side-by-side comparisons to clinical reviewer decisions. After deployment, continuous monitoring of decision accuracy, timeliness, patient/provider complaints, and effectiveness is critical to detect and correct weaknesses.
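A minimal monitoring sketch under those assumptions follows: it measures concordance between AI recommendations and clinical reviewers’ final determinations and flags when agreement falls below a target. The 95% target and the record layout are illustrative, not published benchmarks.

```python
# Toy log of (request_id, AI recommendation, reviewer's final decision).
records = [
    ("PA-2001", "approve", "approve"),
    ("PA-2002", "deny",    "approve"),  # reviewer overturned the AI
    ("PA-2003", "approve", "approve"),
    ("PA-2004", "deny",    "deny"),
]

agree = sum(ai == human for _, ai, human in records)
concordance = agree / len(records)
print(f"AI/reviewer concordance: {concordance:.0%}")  # 75%

if concordance < 0.95:  # illustrative target, not a published benchmark
    # Below target: trigger an audit of the model and its criteria,
    # per the ongoing quality-assurance obligations described above.
    print("concordance below target; schedule algorithm audit")
```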
Insurers and healthcare providers should disclose AI involvement to patients and clinicians, including how AI contributed to a decision, so that individuals are informed and can appeal AI-generated determinations. This transparency promotes trust and accountability.
Engagement with regulators, healthcare providers, patient groups, and technology experts helps navigate regulatory complexities, develop ethical best practices, and foster trust, ensuring AI in UM/PA improves decision quality while adhering to evolving standards and patient rights.
Continuous review of regulatory changes, internal quality assurance, periodic audits for algorithm performance, adherence to clinical guidelines, and responsiveness to complaints are necessary to ensure AI systems remain compliant, fair, and effective in prior authorization and utilization management.