CMS announced the WISeR Model as a six-year pilot starting January 1, 2026. Its goal is to lower Medicare spending on services considered low-value or prone to misuse, including skin and tissue substitutes, electrical nerve stimulator implants, and knee arthroscopy for osteoarthritis, all procedures frequently flagged for overuse or fraud.
The WISeR Model uses AI and machine learning to support prior authorization reviews. Rather than relying on fully manual checks, the model contracts with technology companies experienced in AI-managed prior authorization workflows. These companies screen authorization requests first, and clinical reviewers make the final decisions. This approach may speed up decisions and reduce improper claims.
However, the WISeR Model also imposes new requirements. Providers in the six participating states must submit prior authorization requests for services covered by the program. If they do not, the claim undergoes a mandatory review after the service is delivered but before payment. A denial does not permanently block treatment or payment, but it requires providers to resubmit the request or file an appeal, potentially multiple times.
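To make these paths concrete, here is a minimal sketch of the submission logic described above. The states and transitions are simplified illustrations, not an official CMS specification.

```python
# Illustrative sketch of the WISeR submission paths described above.
# The transitions are simplified and are not an official CMS specification.

def next_step(submitted_prior_auth: bool, approved: bool | None = None) -> str:
    """Describe what happens next for a WISeR-covered service."""
    if not submitted_prior_auth:
        # Skipping prior authorization triggers a mandatory review
        # after the service but before the claim is paid.
        return "claim routed to post-service, pre-payment review"
    if approved:
        return "service proceeds; claim is paid normally"
    # A denial is not final: the provider may resubmit with more
    # documentation or appeal, as many times as needed.
    return "resubmit the request or file an appeal"

print(next_step(submitted_prior_auth=True, approved=False))
# -> resubmit the request or file an appeal
```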
The WISeR pilot increases the workload providers must absorb. Healthcare organizations already manage complex tasks such as scheduling, documentation, coding, and billing. Adding AI-driven prior authorization layers on further tasks: completing request forms, answering reviewer queries, resubmitting denied requests, and filing appeals.
Mandatory prior authorization introduces steps that many clinics, especially smaller ones, have never performed before. Prior to WISeR, traditional Medicare did not require prior authorization for these services, so established workflows are now changing. This may stretch administrative staff too thin and make patient care harder to coordinate.
No clear guidance exists on how long authorization decisions will take. CMS has said decisions should be no slower than under current processes but has not set exact deadlines. This uncertainty makes it hard for providers to plan schedules and finances.
CMS also has not said how providers will be compensated for the extra work. Providers must spend time and money on compliance with no direct support. Denials, whether justified or caused by AI errors, create billing delays and cash flow problems, leaving providers with both heavier workloads and financial worries.
The WISeR Model rewards the participating technology companies financially based on the Medicare savings their review activities generate. These payments are adjusted for process quality, clinical outcomes, and provider and patient satisfaction.
Although these incentives aim to make the system work better, they may create problems. Technology partners might prioritize processing many requests quickly, which could push toward more denials to maximize savings. That pace can frustrate providers whose claims or treatment approvals are delayed or refused.
Ongoing litigation already reflects concerns about flawed AI algorithms making decisions without adequate clinical review. Providers worry that AI decisions lack transparency, that wrongful denials harm patient care, and that unfair denials could expose them to legal risk.
Providers also face uncertainty because AI is new to this process and little is known about how much human review accompanies it. That makes it hard to predict effects on workflow, payments, and patient communications.
Technology companies in the WISeR Model use AI tools to help determine whether a service is medically necessary, analyzing clinical data and historical utilization patterns. The goal is to replace slow, error-prone manual reviews with faster, data-driven systems.
Automated decision support can shorten review times and spot fraudulent or inappropriate service requests. AI models trained on large datasets can detect when requested procedures do not follow accepted clinical guidelines and flag those requests for closer review.
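As a rough illustration of this kind of guideline check, the sketch below flags requests whose diagnosis codes fall outside a guideline-supported indication list or whose request frequency looks unusual. The codes, thresholds, and logic are hypothetical placeholders; WISeR's actual algorithms are proprietary and not public.

```python
from dataclasses import dataclass

# Hypothetical guideline table: procedure code -> diagnosis codes with
# documented clinical benefit. Real WISeR criteria are not public; these
# codes are placeholders for illustration only.
GUIDELINE_INDICATIONS = {
    "29881": {"M23.2", "S83.2"},   # knee arthroscopy: meniscal tear codes
}

@dataclass
class AuthRequest:
    procedure_code: str
    diagnosis_codes: set[str]
    prior_requests_90d: int  # same procedure requested in the last 90 days

def flag_for_review(req: AuthRequest, freq_threshold: int = 2) -> list[str]:
    """Return reasons this request should go to a human clinical reviewer."""
    reasons = []
    indicated = GUIDELINE_INDICATIONS.get(req.procedure_code)
    if indicated is not None and not (req.diagnosis_codes & indicated):
        reasons.append("diagnosis not among guideline-supported indications")
    if req.prior_requests_90d >= freq_threshold:
        reasons.append("unusual repeat-request frequency")
    return reasons

# Example: knee arthroscopy requested for osteoarthritis (M17.x), which the
# hypothetical guideline table does not list as a supported indication.
req = AuthRequest("29881", {"M17.11"}, prior_requests_90d=0)
print(flag_for_review(req))
# -> ['diagnosis not among guideline-supported indications']
```

In a setup like this, a flag does not mean a denial; it routes the request to a clinical reviewer, matching the human-oversight step the model describes.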
FHIR (Fast Healthcare Interoperability Resources) APIs are increasingly required for real-time prior authorization requests and responses in healthcare technology. Insurers in the WISeR states have said they plan to expand these standards, but it is not yet clear whether WISeR's technology partners must meet the same requirements.
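For context on what a FHIR-based prior authorization request can look like, here is a minimal sketch that posts a FHIR R4 Claim resource marked with use "preauthorization" to a payer endpoint. The server URL and resource references are placeholders, and production implementations such as the Da Vinci Prior Authorization Support guide wrap the Claim in a Bundle and invoke the Claim/$submit operation rather than a bare POST.

```python
import requests  # pip install requests

FHIR_BASE = "https://example-payer.com/fhir"  # hypothetical server

# Minimal FHIR R4 Claim marked as a prior authorization request.
claim = {
    "resourceType": "Claim",
    "status": "active",
    "type": {"coding": [{"system": "http://terminology.hl7.org/CodeSystem/claim-type",
                         "code": "professional"}]},
    "use": "preauthorization",  # distinguishes prior auth from a payment claim
    "patient": {"reference": "Patient/123"},
    "created": "2026-01-15",
    "provider": {"reference": "Organization/456"},
    "priority": {"coding": [{"code": "normal"}]},
    "insurance": [{"sequence": 1, "focal": True,
                   "coverage": {"reference": "Coverage/789"}}],
    "item": [{"sequence": 1,
              "productOrService": {"coding": [{"system": "http://www.ama-assn.org/go/cpt",
                                               "code": "29881"}]}}],
}

resp = requests.post(f"{FHIR_BASE}/Claim", json=claim,
                     headers={"Content-Type": "application/fhir+json"},
                     timeout=30)
resp.raise_for_status()
print(resp.json().get("id"))  # server-assigned id for tracking the request
```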
Even with these advances, current AI raises concerns about transparency, explainability of decisions, and the degree of clinician involvement. Courts and legal experts warn that automated denials without sufficient human review can breach duties of care and may create legal liability when flawed AI rules block needed treatment.
Practice administrators and IT managers must weigh potential efficiency gains against added training, system integration, and appeals handling. Because the AI’s decision-making is often a proprietary “black box” whose inner workings are undisclosed, it is harder to find errors or advocate for patients.
High-ranking officials, including U.S. Department of Health and Human Services Secretary Robert F. Kennedy, Jr., and CMS Administrator Dr. Mehmet Oz, have called prior authorization systems “broken,” citing the delays and denials patients face. The WISeR Model aims to address these problems while cutting waste, fraud, and abuse.
At the same time, healthcare groups voice real concerns about provider workloads and the risks of automating prior authorization. Senator Roger Marshall, M.D., has noted the challenges and costs these systems impose. Legal firms such as Reed Smith LLP warn that misapplied AI can produce wrongful denials and call for clear oversight, transparency, and clinical validation.
CMS plans to monitor WISeR participants closely, checking the accuracy, timeliness, and fairness of authorization decisions. Payment adjustments and audits are intended to deter wrongful denials and keep the system’s determinations equitable. Still, some details, such as how much human clinical review accompanies AI decisions, remain unclear.
Healthcare providers in the six WISeR states must prepare for the additional prior authorization work the AI-driven model introduces. Medical practice administrators need to rethink workflows, staffing, and billing to handle submissions, resubmissions, and appeals efficiently.
IT managers play a major role in integrating the required technology, enabling real-time data sharing, and ensuring systems interoperate with AI tools and regulatory requirements. They should expect to train and support clinical and administrative staff as they adjust to automated decision-making.
The proprietary nature of WISeR’s AI tools makes it hard for providers to scrutinize and contest denials effectively. Providers should develop plans built on clear documentation, strong communication with payers and technology partners, and staying current on policy and technology changes.
Since the model may help reduce fraud and lower costs, providers should stay engaged with CMS updates and provide feedback throughout the pilot period.
Providers and their administrative teams in New Jersey, Ohio, Oklahoma, Texas, Arizona, and Washington must focus on preparation and flexibility as the WISeR Model launches. Understanding how AI both streamlines and complicates prior authorization is key to handling future challenges in healthcare delivery management.
Though technology keeps improving, careful attention and teamwork among providers, payers, regulators, and tech firms are needed to balance cost control with proper care and patient needs.
Simbo AI works on front-office phone automation and answering services using artificial intelligence. It helps medical offices manage their communications and routine tasks. This support can ease the extra administrative work, like handling prior authorization questions and patient scheduling. Organizations using Simbo AI can better handle the added challenges from new healthcare rules and technology changes.
The WISeR Model is a six-year pilot program starting January 1, 2026, aimed at reducing Medicare spending on select low-value services by combining AI, machine learning, and clinical review to identify fraud, waste, and abuse in claims that historically did not require prior authorization.
The Model employs technology companies with AI expertise to evaluate prior authorization requests for select services, aiding medical reviewers in coverage decisions, while requiring clinician validation to balance automated assessments with human oversight.
WISeR focuses on items vulnerable to overuse or abuse, including skin and tissue substitutes, electrical nerve stimulator implants, and knee arthroscopy for osteoarthritis, chosen due to limited clinical benefit or higher waste risks.
There is uncertainty about the level and clarity of human clinical review accompanying AI decisions, with courts questioning automated denial processes. Transparency on decision logic and individualized assessment remains insufficient, raising legal and ethical issues.
Providers may face increased burdens due to mandatory prior authorizations or post-service reviews, without clarity on timing or support to manage these costs, potentially delaying payments and complicating patient communications.
Participants are compensated based on cost savings, process quality, provider and beneficiary experience, and clinical outcomes, with safeguards planned to monitor denial accuracy and prevent inappropriate behavior.
This could incentivize faster workflows that prioritize denials or shortcuts, possibly frustrating providers, delaying care, and undermining clinical appropriateness, despite CMS’s stated monitoring and corrective intentions.
By using AI combined with clinical validation to scrutinize claims for services with a history of misuse, the Model aims to reduce wasteful and fraudulent payments, aligning with DOJ and HHS enforcement priorities.
Recent cases challenge algorithms issuing blanket denials without sufficient human review, highlighting fiduciary breaches and potential False Claims Act liability for flawed AI-based approvals or denials, emphasizing the need for transparency and oversight.
Questions persist about clinical review procedures, provider support for new requirements, integration with existing authorization standards like FHIR APIs, and how proprietary AI systems will allow appeal and transparency, creating stakeholder concerns.