One key area of concern is the volume of waste, fraud, and inappropriate payments in claims submitted to government programs such as Medicare. Prior authorization (PA) processes, which check whether medical services or equipment should be covered before they are delivered, help control these costs. But traditional prior authorization systems often slow down care, increase paperwork, and sometimes produce inconsistent decisions. To address this, new policy pilots and private companies are combining artificial intelligence (AI) with clinical review to improve the prior authorization process.
This article examines how pairing AI with clinical review in prior authorization aims to reduce wasteful spending and fraud. It highlights initiatives such as the Wasteful and Inappropriate Service Reduction (WISeR) Model from the Centers for Medicare & Medicaid Services (CMS) and private platforms such as Cohere Health’s AI systems. The goal is to help medical practice managers and IT staff in the United States understand these tools and how they affect healthcare organizations.
Many health plans require prior authorization before certain tests, procedures, or medical equipment can be provided. The main goal is to confirm that the service is medically necessary and covered by the patient’s insurance. But many providers and practice managers say the current prior authorization process is slow, burdensome, and inefficient. It can delay care, add stress and paperwork, and sometimes deny or approve services incorrectly.
Recent government actions treat improving prior authorization as a priority. U.S. Department of Health and Human Services (HHS) Secretary Robert F. Kennedy, Jr., and CMS Administrator Dr. Mehmet Oz have called the current system “broken” because it often delays patient care. They want changes that make it faster, clearer, and more closely aligned with clinical guidelines.
Starting January 1, 2026, CMS will begin a six-year pilot called the Wasteful and Inappropriate Service Reduction (WISeR) Model. The program targets selected services in Original Medicare that are considered low-value, frequently overused, and vulnerable to fraud. These services include skin and tissue substitutes, electrical nerve stimulator implants, and knee arthroscopy for osteoarthritis. Studies show these services often provide little benefit to Medicare patients.
The WISeR Model requires prior authorization or post-service review for providers in six states: New Jersey, Ohio, Oklahoma, Texas, Arizona, and Washington. CMS is partnering with technology companies that use AI-enhanced review to evaluate these requests. The AI tools help clinical reviewers determine whether coverage rules and the submitted medical documentation support payment.
Senator Roger Marshall, a physician, has said providers find current prior authorization systems difficult to navigate. He cautiously supports efforts like WISeR but wants clearer details on cost and timing.
Beyond government programs, companies such as Cohere Health have built AI-powered platforms that automate prior authorization work and include clinical decision support to make the process faster and more accurate. These platforms connect directly with Electronic Health Record (EHR) systems and give physicians and staff tools to submit complete, accurate PA requests.
By supporting clinical reviewers, these AI platforms aim to reduce physician and staff burnout and help patients receive needed services faster while keeping utilization controls in place.
Medical practices interested in AI-based prior authorization need to understand workflow automation and how it affects daily operations. Workflow automation uses AI and software to handle repetitive tasks such as extracting data, populating forms, submitting requests, and tracking documentation.
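As a rough illustration of what that automation can look like, the minimal Python sketch below assembles a PA request from an EHR export and flags missing documentation for staff follow-up before submission. The field names, codes, and required-field list are illustrative assumptions, not any vendor’s or CMS’s actual schema.

```python
# Minimal sketch of one workflow-automation step: assembling a prior
# authorization (PA) request from an EHR export and flagging missing
# documentation for staff follow-up. All field names and codes are
# illustrative assumptions, not a real payer or CMS schema.
from dataclasses import dataclass, field

# Fields assumed to be required by the payer for this request type.
REQUIRED_FIELDS = ["member_id", "cpt_code", "diagnosis_code", "ordering_npi", "clinical_notes"]


@dataclass
class PARequest:
    """An assembled PA request plus any fields still missing."""
    fields: dict
    missing: list = field(default_factory=list)

    @property
    def ready_to_submit(self) -> bool:
        return not self.missing


def assemble_pa_request(ehr_record: dict) -> PARequest:
    """Pull required fields from an EHR export and flag anything missing."""
    extracted = {key: ehr_record.get(key) for key in REQUIRED_FIELDS}
    missing = [key for key, value in extracted.items() if not value]
    return PARequest(fields=extracted, missing=missing)


if __name__ == "__main__":
    record = {
        "member_id": "A123456789",
        "cpt_code": "29881",       # knee arthroscopy, one of the WISeR service categories
        "diagnosis_code": "M17.11",
        "ordering_npi": "1234567890",
        "clinical_notes": "",      # left blank to show the missing-data check
    }
    request = assemble_pa_request(record)
    if request.ready_to_submit:
        print("PA request complete; ready for payer submission.")
    else:
        print("Hold for staff review; missing:", ", ".join(request.missing))
```

In a real deployment this kind of check would run inside the EHR or PA platform rather than as a standalone script, but the idea is the same: catch incomplete requests before they reach the payer and cause avoidable denials.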
For practices in WISeR states, investing in automation tools that align with CMS requirements will be important for absorbing the additional administrative work without compromising patient care or budgets.
Even with AI’s benefits, concerns remain about transparency, fairness, and workflow burden. Some legal cases have challenged prior authorization systems that rely on opaque AI rules and limit human review, which can lead to blanket denials without adequate individual assessment.
The risks include eroded trust and legal exposure if AI-driven rules produce incorrect denials or approvals. CMS’s WISeR Model tries to mitigate some of these risks by allowing unlimited resubmissions, conducting audits, and tying participant payments to quality rather than denial volume.
Still, providers worry about unclear decision timelines, the added work of post-service claim reviews, and payment arrangements that could incentivize denials.
Clear information about the AI tools, how they reach decisions, and the extent of human review is needed to build trust and protect patient care. AI should support clinician judgment, not replace it with rigid automated rulings.
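One way to express that principle in software is a review policy in which the AI can only recommend approval, while every potential denial is routed to a licensed clinician. The Python sketch below illustrates such human-in-the-loop routing; the threshold and outcome labels are assumptions for illustration, not WISeR’s or any vendor’s actual rules.

```python
# Illustrative sketch of a human-in-the-loop review policy: an AI score can
# trigger an automated approval, but any potential non-affirmation is routed
# to a licensed clinical reviewer rather than denied automatically.
# The threshold and outcome labels are assumptions for illustration only.
from enum import Enum


class ReviewOutcome(Enum):
    AUTO_APPROVE = "auto_approve"
    CLINICIAN_REVIEW = "clinician_review"


def route_request(ai_coverage_score: float, auto_approve_threshold: float = 0.95) -> ReviewOutcome:
    """Route a PA request based on an AI model's estimated probability that
    coverage criteria are met. Only approvals are automated; everything else
    goes to a human clinician for a decision."""
    if ai_coverage_score >= auto_approve_threshold:
        return ReviewOutcome.AUTO_APPROVE
    return ReviewOutcome.CLINICIAN_REVIEW


if __name__ == "__main__":
    for score in (0.99, 0.80, 0.40):
        print(f"score={score:.2f} -> {route_request(score).value}")
```

The design choice worth noting is the asymmetry: automation is allowed to speed up clearly supported approvals, but it never issues a denial on its own, which is the pattern regulators and courts have pressed payers to adopt.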
Compliance with requirements such as CMS’s interoperability rules also shapes how prior authorization systems work. Technologies built on FHIR (Fast Healthcare Interoperability Resources) APIs allow secure, near-real-time data exchange between provider systems and payer platforms. These standards streamline communication, reduce duplicate work, and ensure accurate documentation is shared with everyone involved.
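To make the FHIR piece concrete, the sketch below submits a prior-authorization request as a FHIR R4 Claim resource with use set to "preauthorization", which is how FHIR represents a prior-auth request. The payer endpoint URL and all identifiers and codes are placeholders; a real integration would follow the payer’s implementation guide, authentication, and documentation-attachment requirements.

```python
# Minimal sketch of submitting a prior-authorization request as a FHIR R4
# Claim resource (use = "preauthorization") to a payer's FHIR endpoint.
# The base URL, references, and codes are placeholders for illustration.
import json

import requests  # third-party HTTP client, assumed to be installed

PAYER_FHIR_BASE = "https://payer.example.com/fhir"  # placeholder endpoint

claim = {
    "resourceType": "Claim",
    "status": "active",
    "type": {"coding": [{"system": "http://terminology.hl7.org/CodeSystem/claim-type",
                         "code": "professional"}]},
    "use": "preauthorization",  # marks this Claim as a prior-auth request
    "patient": {"reference": "Patient/example-patient"},
    "created": "2026-01-15",
    "provider": {"reference": "Practitioner/example-provider"},
    "priority": {"coding": [{"system": "http://terminology.hl7.org/CodeSystem/processpriority",
                             "code": "normal"}]},
    "insurance": [{"sequence": 1, "focal": True,
                   "coverage": {"reference": "Coverage/example-coverage"}}],
    "item": [{"sequence": 1,
              "productOrService": {"coding": [{"system": "http://www.ama-assn.org/go/cpt",
                                               "code": "29881"}]}}],  # illustrative CPT code
}

# POST the Claim to the payer's FHIR server and print the HTTP status.
response = requests.post(
    f"{PAYER_FHIR_BASE}/Claim",
    headers={"Content-Type": "application/fhir+json"},
    data=json.dumps(claim),
    timeout=30,
)
print("Payer response status:", response.status_code)
```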
Providers participating in Medicare and Medicaid, especially in WISeR pilot states, will need to meet new technical requirements for these interoperable systems. IT managers will need to work with technology vendors that can integrate AI-driven PA tools into existing EHR and billing software.
For medical practice managers, owners, and IT staff, these changes mean prior authorization will become more automated and more tightly connected to clinical data. This affects staffing, workflows, and patient scheduling, and practices will need to prepare accordingly.
At the same time, practices can expect longer-term benefits such as fewer claim denials caused by errors, faster access to approved care, and cost savings from reducing unnecessary services.
Combining AI with human clinical review in prior authorization marks an important shift in how the U.S. healthcare system controls waste and fraud in claims. Programs like CMS’s WISeR Model and private platforms like Cohere Health’s show that AI can support review and simplify administration without lowering clinical standards.
Medical practices across the country, especially those in pilot states, should prepare by adopting the right technology, training staff, and updating workflows. Challenges remain around transparency, clinical oversight, and workload, but AI-driven prior authorization aims to improve the quality of coverage decisions and how healthcare services are delivered to Medicare patients.
The WISeR Model is a six-year pilot program starting January 1, 2026, aimed at reducing Medicare spending on select low-value services by combining AI, machine learning, and clinical review to identify fraud, waste, and abuse in claims that historically did not require prior authorization.
The Model employs technology companies with AI expertise to evaluate prior authorization requests for select services, aiding medical reviewers in coverage decisions, while requiring clinician validation to balance automated assessments with human oversight.
WISeR focuses on items vulnerable to overuse or abuse, including skin and tissue substitutes, electrical nerve stimulator implants, and knee arthroscopy for osteoarthritis, chosen due to limited clinical benefit or higher waste risks.
There is uncertainty about the level and clarity of human clinical review accompanying AI decisions, with courts questioning automated denial processes. Transparency on decision logic and individualized assessment remains insufficient, raising legal and ethical issues.
Providers may face increased burdens due to mandatory prior authorizations or post-service reviews, without clarity on timing or support to manage these costs, potentially delaying payments and complicating patient communications.
Participants are compensated based on cost savings, process quality, provider and beneficiary experience, and clinical outcomes, with safeguards planned to monitor denial accuracy and prevent inappropriate behavior.
This could incentivize faster workflows that prioritize denials or shortcuts, possibly frustrating providers, delaying care, and undermining clinical appropriateness, despite CMS’s stated monitoring and corrective intentions.
By using AI combined with clinical validation to scrutinize claims for services with a history of misuse, the Model aims to reduce wasteful and fraudulent payments, aligning with DOJ and HHS enforcement priorities.
Recent cases challenge algorithms issuing blanket denials without sufficient human review, highlighting fiduciary breaches and potential False Claims Act liability for flawed AI-based approvals or denials, emphasizing the need for transparency and oversight.
Questions persist about clinical review procedures, provider support for new requirements, integration with existing authorization standards like FHIR APIs, and how proprietary AI systems will allow appeal and transparency, creating stakeholder concerns.