The critical role of human clinical reviewers in ensuring fairness and compliance amid increasing use of AI for medical necessity determinations

Medical necessity determinations are decisions made by insurers and healthcare organizations about whether a treatment or service is medically appropriate for a patient, based on clinical evidence. These decisions affect whether patients can access care, how much it costs, and how services are used.

Recently, healthcare providers and payers have begun using AI to support utilization management and prior authorization processes. AI can analyze large volumes of data, check whether a treatment meets clinical guidelines, and speed up responses to prior authorization requests. The Centers for Medicare & Medicaid Services (CMS) has recognized this trend and issued policies that allow AI to assist, but not replace, decisions made by licensed clinicians.

The CMS Interoperability and Prior Authorization final rule, which takes effect January 1, 2027, requires payers to implement a Prior Authorization API to speed up decisions: within 72 hours for urgent requests and seven calendar days for standard ones. AI can help meet these deadlines, but humans must still review the decisions to keep them medically sound and ethical.
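To make the timing concrete, here is a minimal sketch of how a payer system might compute the decision deadline for an incoming request. It assumes a simple rule of 72 hours for urgent and seven calendar days for standard requests; the function and category names are illustrative, and a real system would also handle time zones and any stricter state timelines.

```python
from datetime import datetime, timedelta

# Decision windows from the CMS final rule (illustrative encoding):
# 72 hours for expedited (urgent) requests, seven calendar days for
# standard requests. Time zones and state-specific rules are omitted.
DEADLINES = {
    "urgent": timedelta(hours=72),
    "standard": timedelta(days=7),
}

def decision_due(received_at: datetime, urgency: str) -> datetime:
    """Latest time a prior authorization decision is due."""
    return received_at + DEADLINES[urgency]

request_received = datetime(2027, 3, 1, 9, 30)
print(decision_due(request_received, "urgent"))    # 2027-03-04 09:30:00
print(decision_due(request_received, "standard"))  # 2027-03-08 09:30:00
```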

National and State Regulations Governing AI Use in Utilization Management

The use of AI in medical necessity decisions is governed by a growing set of federal and state rules intended to protect patients, prevent discrimination, and ensure AI is used responsibly in healthcare.

Federal Regulations

  • The Medicare Program Contract Year 2024 Policy final rule prohibits Medicare Advantage organizations from relying on AI alone to determine medical necessity. Each decision must account for the patient’s individual circumstances and comply with privacy laws such as HIPAA.
  • An Executive Order issued October 30, 2023, directs the U.S. Department of Health and Human Services (HHS) to develop policies that evaluate healthcare AI tools for safety and fairness and guard against bias and discrimination.

State-Level Legislation

  • California enacted two strict laws taking effect in 2025. Assembly Bill 3030 requires that patient communications generated with generative AI clearly disclose that fact and tell patients how to reach a human provider. Senate Bill 1120 bars insurers from denying or modifying care based solely on AI output and requires review by licensed clinical providers, preserving patient safety and physicians’ judgment.
  • Colorado’s 2023 law requires “high-risk” AI systems to undergo impact assessments, to disclose to patients when AI is used, and to let patients appeal decisions, with obligations taking effect in 2026.
  • Illinois requires that medical necessity denials issued through utilization review rest on evidence-based criteria meeting national standards such as URAC or NCQA, and that these determinations undergo clinical peer review.
  • New York has pending legislation that would require AI algorithms used by insurers to be certified for fairness and transparency before they can be deployed.

Together, these laws treat AI as a tool that supports human decision-makers rather than replaces them. Human review remains the safeguard for decisions that are fair, transparent, and reasonable.

Why Human Clinical Reviewers Remain Essential

AI can surface data-driven recommendations, but human clinical reviewers do work AI cannot: they weigh individual patient circumstances, interpret complex clinical detail, and ensure decisions meet legal and ethical obligations.

Here are some reasons why human reviewers are still needed:

  • Clinical Judgment Beyond Data
    AI identifies patterns in its training data, but it can miss complicated presentations, comorbidities, or social factors that bear on medical need. Human reviewers draw on education and experience to catch exceptions AI might not.
  • Reducing Bias and Ensuring Fairness
    AI can absorb biases from its training data, producing unfair outcomes for some groups; underserved populations, for example, may face higher denial rates because of the model’s blind spots. Human reviewers can detect and correct these patterns, helping keep care equitable.
  • Following Legal and Ethical Rules
    Litigation such as Kisting-Leung v. Cigna shows the risk of letting AI deny claims without human review. Courts expect coverage decisions to be made in good faith and with regard for the patient. Human review helps satisfy these standards and lowers legal exposure.
  • Clear Reasons and Accountability
    Patients and physicians have the right to know how determinations are made. AI models often cannot explain their outputs clearly; human reviewers can interpret a model’s results and give clear reasons, which supports appeals and builds trust.
  • Required by Law
    Many statutes require licensed clinical professionals to take part in review. California’s SB 1120, for example, makes it illegal to skip human review in AI-assisted determinations, and CMS rules likewise require clinician involvement to keep accountability clear.

Compliance Challenges for Healthcare Organizations

Healthcare leaders and IT staff who deploy AI in medical necessity workflows face several compliance challenges:

  • Following Different Rules
    Organizations must satisfy federal requirements alongside a patchwork of state rules, some of which are stricter, such as California’s laws on privacy and human review.
  • Checking for Bias
    AI must be tested on data that represents the full patient population to surface bias and error, with regular audits and side-by-side comparisons against human reviews (a minimal audit sketch follows this list).
  • Protecting Data Privacy
    Privacy laws such as HIPAA, CCPA, and CMIA require careful handling of patient data in AI systems, including the data used to train the models.
  • Managing Legal Risks
    Clinicians and hospitals must clearly document how AI informs each decision to reduce exposure to malpractice claims tied to AI recommendations.
  • Being Clear with Patients
    Providers must disclose when AI is involved in a decision and obtain permission where AI affects care; failing to do so can lead to fines.
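As a concrete illustration of the bias checks above, the sketch below computes AI denial rates by demographic group from a hypothetical audit sample and flags groups whose rate diverges sharply from the lowest observed rate. The records, group labels, and 10-point threshold are assumptions for illustration, not a regulatory standard.

```python
from collections import defaultdict

# Hypothetical audit sample: (demographic_group, ai_recommendation).
# In practice these records would come from a representative sample
# of real determinations, joined with demographic data.
records = [
    ("group_a", "deny"), ("group_a", "approve"), ("group_a", "approve"),
    ("group_b", "deny"), ("group_b", "deny"), ("group_b", "approve"),
]

def denial_rates(records):
    """Denial rate of the AI recommendation, per demographic group."""
    totals, denials = defaultdict(int), defaultdict(int)
    for group, recommendation in records:
        totals[group] += 1
        denials[group] += recommendation == "deny"
    return {g: denials[g] / totals[g] for g in totals}

rates = denial_rates(records)
# Flag any group whose denial rate exceeds the lowest group's by
# more than 10 percentage points (an illustrative threshold).
baseline = min(rates.values())
flagged = {g: r for g, r in rates.items() if r - baseline > 0.10}
print(rates)    # {'group_a': 0.33..., 'group_b': 0.66...}
print(flagged)  # {'group_b': 0.66...}
```

Flagged groups would then go to human reviewers and the governance team for root-cause analysis rather than triggering any automatic model change.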

AI and Workflow Automation in Medical Necessity Determinations

AI is also used to automate work in medical offices, such as patient communication, prior authorization requests, and follow-ups.

Prior Authorization Automation

The CMS Interoperability and Prior Authorization rule requires payers to implement a Prior Authorization API by January 1, 2027, enabling faster data sharing with providers. AI can help process requests, validate data, and surface status updates in real time, as in the sketch below.
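As a rough illustration, the sketch below checks the status of a prior authorization request against a hypothetical payer endpoint. The API mandated by CMS is FHIR-based, so the URL, path, and response fields here are invented stand-ins for illustration, not the actual interface.

```python
import json
import urllib.request

# Hypothetical payer base URL; a real deployment would use the
# payer's published FHIR endpoint and an OAuth2 access token.
BASE_URL = "https://payer.example.com/api"

def check_authorization_status(auth_id: str) -> dict:
    """Fetch the current status of a prior authorization request."""
    req = urllib.request.Request(
        f"{BASE_URL}/prior-authorizations/{auth_id}",
        headers={"Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example usage (commented out; the endpoint is fictional):
# status = check_authorization_status("PA-12345")
# print(status.get("status"), status.get("decision_due"))
```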

Companies like Simbo AI apply AI to phone automation and answering services, reducing routine work in healthcare offices. AI can handle calls and follow-ups, cutting errors and saving staff time. This matters because responses must come quickly: within 72 hours for urgent requests and seven calendar days for standard ones.

Combining AI with Human Workflow

Automation speeds up clerical tasks, but every medical necessity decision still requires human clinical review. Reviewers validate the AI’s findings and take a closer look at cases the system flags, a division of labor sketched below. The combination moves work faster without sacrificing accuracy or fairness.
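One way such a routing policy might look in code is sketched below: the system may fast-track approvals the model is confident about, but every adverse recommendation, and any low-confidence case, goes to a licensed clinical reviewer, who alone can finalize a denial. The threshold and queue names are assumptions, not a prescribed design.

```python
from dataclasses import dataclass

@dataclass
class AiFinding:
    case_id: str
    recommendation: str  # "approve" or "deny"
    confidence: float    # model confidence, 0.0 to 1.0

def route(finding: AiFinding, confidence_floor: float = 0.90) -> str:
    """Decide where a case goes next in the review workflow."""
    if finding.recommendation == "deny":
        return "clinical_reviewer"   # human review of denials is mandatory
    if finding.confidence < confidence_floor:
        return "clinical_reviewer"   # uncertain approvals get a human look
    return "auto_approve_queue"      # routine approval, still logged

print(route(AiFinding("PA-001", "approve", 0.97)))  # auto_approve_queue
print(route(AiFinding("PA-002", "deny", 0.99)))     # clinical_reviewer
print(route(AiFinding("PA-003", "approve", 0.60)))  # clinical_reviewer
```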

Benefits

  • Authorization steps move faster, cutting delays for patients.
  • Data collection and sharing with reviewers improves.
  • Staff spend less time on phone calls because AI handles routine communication.
  • AI can assemble required documents or resolve open questions before a case reaches clinical reviewers.

Risks and Monitoring

Even with automation, organizations need to watch out for:

  • Error rates in automated steps.
  • Patient and provider complaints about AI-driven processes.
  • Agreement between AI outputs and human review decisions.
  • Compliance with deadlines and quality standards.

Regular audits keep automation fair and transparent and ensure human judgment remains central; a minimal monitoring pass is sketched below.
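The following sketch tallies AI/human agreement and deadline compliance from a hypothetical case log; the log format and alert thresholds are illustrative assumptions.

```python
# Hypothetical log of completed cases: the AI's recommendation, the
# human reviewer's final decision, and whether the response met the
# applicable deadline.
cases = [
    {"ai": "approve", "human": "approve", "on_time": True},
    {"ai": "deny",    "human": "approve", "on_time": True},
    {"ai": "deny",    "human": "deny",    "on_time": False},
    {"ai": "approve", "human": "approve", "on_time": True},
]

agreement = sum(c["ai"] == c["human"] for c in cases) / len(cases)
on_time = sum(c["on_time"] for c in cases) / len(cases)

print(f"AI/human agreement:  {agreement:.0%}")  # 75%
print(f"Deadline compliance: {on_time:.0%}")    # 75%

# Divergence from human decisions or slipping timeliness signals that
# the model or the workflow needs a governance review.
if agreement < 0.90 or on_time < 0.95:
    print("Flag for governance review")
```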

Multidisciplinary Governance and AI Oversight

Regulators recommend that healthcare organizations establish AI oversight bodies that combine legal, clinical, technical, and ethics expertise.

This oversight team:

  • Audits AI for bias and fairness.
  • Validates AI clinically and benchmarks its performance.
  • Establishes channels for patient complaints and appeals.
  • Trains staff on AI to improve collaboration between humans and machines.
  • Oversees third-party vendors for contract compliance and audit readiness.

This kind of oversight reduces compliance risk and keeps AI systems aligned with patient care goals.

The Role of Information Technology Managers and Healthcare Administrators

Medical practice managers and IT staff carry a broad set of responsibilities when deploying AI under these rules:

  • Select AI tools that are transparent, auditable, and built around strong human review.
  • Track evolving CMS rules and state laws, particularly in key states like California.
  • Coordinate system integrations that support prior authorization workflows.
  • Train staff on the legal and ethical dimensions of AI use.
  • Enforce data privacy and security controls grounded in HIPAA, CCPA, and CMIA.

Their work builds the systems and policies that keep providers accountable while AI improves efficiency.

The use of AI in medical necessity decisions in the United States is growing as a decision-support tool. But AI only assists; it does not make final decisions. Laws at every level require licensed human clinical reviewers to take part, which protects patients, keeps decisions fair, guards against bias, and satisfies regulators. Healthcare organizations and technology providers like Simbo AI that apply AI to workflow automation and answering services must build their tools to support licensed clinical judgment, not replace it. Done this way, they can meet the rules and make healthcare work better both in the clinic and behind the scenes.

Frequently Asked Questions

What recent federal regulation governs the use of AI in healthcare prior authorization (PA) and utilization management (UM)?

The Medicare Program; Contract Year 2024 Policy and Technical Changes to the Medicare Advantage Program final rule issued by CMS mandates that Medicare Advantage organizations ensure medical necessity determinations consider the specific individual’s circumstances and comply with HIPAA. AI can assist but cannot be the sole basis for a determination, and plans must maintain fairness safeguards and mechanisms to contest AI-influenced decisions.

What are the key requirements of the Interoperability and Prior Authorization final rule by CMS?

Effective January 1, 2027, this rule requires payers to implement a Prior Authorization Application Programming Interface (API) to streamline the PA process. Decisions must be returned within 72 hours for urgent requests and seven calendar days for standard requests. AI may be deployed to help meet these timelines, but qualified clinicians must remain involved in decision-making.

How does the Executive Order on AI affect healthcare AI deployment?

Signed on October 30, 2023, it mandates HHS to develop policies and regulatory actions for AI use in healthcare, including predictive and generative AI in healthcare delivery, financing, and patient experience. It also calls for AI assurance policies to enable evaluation and oversight of AI healthcare tools.

What are some state-level regulations impacting AI use in UM/PA?

Examples include Colorado’s 2023 act requiring impact assessments and anti-discrimination measures for AI systems used in healthcare decisions; California’s AB 3030 requiring disclosure of generative AI use in patient communications and Senate Bill 1120 mandating human review of UM decisions; Illinois’ H2472 requiring clinical peer review of adverse determinations and evidence-based criteria; and pending New York legislation requiring insurance disclosures and algorithm certification.

What are the compliance challenges for managed care plans using AI in PA/UM?

Plans must navigate varying state and federal regulations, ensure AI systems do not result in discrimination, guarantee that clinical reviewers oversee adverse decisions, maintain transparency about AI use, and implement mechanisms for reviewing and contesting AI-generated determinations to remain compliant across jurisdictions.

What role must human clinical reviewers play according to recent regulations?

Regulations emphasize that qualified human clinical reviewers must oversee and validate adverse decisions related to medical necessity to prevent sole reliance on AI algorithms, assuring fairness, accuracy, and compliance with legal standards in UM/PA processes.

How should AI-driven PA/UM systems be tested before and after implementation?

AI systems must be tested on representative datasets to avoid bias and inaccuracies, with side-by-side comparisons to clinical reviewer decisions. After deployment, continuous monitoring of decision accuracy, timeliness, patient/provider complaints, and effectiveness is critical to detect and correct weaknesses.

What transparency measures are recommended regarding AI use in prior authorization?

Insurers and healthcare providers should disclose AI involvement in decisions to patients and providers, including how AI contributed to decisions, ensuring individuals are informed and entitled to appeal AI-generated determinations, promoting trust and accountability.

How can collaboration improve AI deployment in UM/PA?

Engagement with regulators, healthcare providers, patient groups, and technology experts helps navigate regulatory complexities, develop ethical best practices, and foster trust, ensuring AI in UM/PA improves decision quality while adhering to evolving standards and patient rights.

What ongoing monitoring is suggested to maintain AI compliance in healthcare PA/UM?

Continuous review of regulatory changes, internal quality assurance, periodic audits for algorithm performance, adherence to clinical guidelines, and responsiveness to complaints are necessary to ensure AI systems remain compliant, fair, and effective in prior authorization and utilization management.