Utilization Review (UR) and Prior Authorization (PA) processes exist to ensure that healthcare services are used appropriately and that costs are managed. Typically, healthcare providers submit clinical information to insurance companies, and insurers then decide whether a treatment or service is medically necessary based on clinical criteria and the patient's condition.
Recently, health plans and insurance companies have begun using AI tools to assist with these reviews. AI can analyze patient records, clinical notes, and medical criteria to flag cases for human review or to suggest whether treatments should be approved, denied, or sent for further evaluation. This can speed decisions, reduce paperwork, and improve accuracy in routine cases.
Despite these benefits, there are concerns about over-reliance on AI. A model may overlook patient-specific details or reproduce biases present in its training data. For this reason, regulations now require clinicians to oversee AI-assisted decisions to keep them fair and accurate.
In the United States, both the federal government and state governments have begun setting rules for how AI may be used in healthcare management, and California has been especially active in enacting explicit statutes. Federal rules, including those issued by the Centers for Medicare & Medicaid Services (CMS), are designed to ensure AI supports human clinical judgment instead of replacing it.
California has passed new laws on AI in healthcare. Senate Bill 1120 (SB 1120), effective January 1, 2025, governs how health care service plans and disability insurers may use AI during utilization review. Its key provisions, detailed later in this article, include: AI may assist with reviews but may not make the final medical necessity determination, which is reserved for a licensed physician or other qualified professional; and plans must disclose when AI is used in the review process. These requirements emphasize human control and transparency while still allowing AI to assist.
Hospital leaders, physician group owners, and healthcare IT managers will need to make a number of operational changes to comply with these rules. The main effects are the following:
Healthcare organizations must ensure that AI-supported decisions are reviewed and finalized by physicians or other qualified professionals, who apply clinical judgment to determine medical necessity. Workflows must be designed so AI suggestions pass through a human check without slowing patient care; a minimal sketch of such a workflow appears below.
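To make the required division of labor concrete, here is a minimal Python sketch of a human-in-the-loop review workflow. All names (AIRecommendation, ReviewCase, finalize) are illustrative rather than taken from any vendor system; the point is that the AI output is advisory and only a physician's action produces a final determination.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Decision(Enum):
    APPROVED = "approved"
    DENIED = "denied"
    NEEDS_MORE_INFO = "needs_more_info"


@dataclass
class AIRecommendation:
    """Advisory output only -- never a final determination."""
    suggested_decision: Decision
    rationale: str
    confidence: float


@dataclass
class ReviewCase:
    case_id: str
    ai_recommendation: Optional[AIRecommendation] = None
    final_decision: Optional[Decision] = None
    reviewing_physician: Optional[str] = None

    def finalize(self, physician_id: str, decision: Decision) -> None:
        """Only a physician finalizes; the AI suggestion is never auto-applied."""
        if not physician_id:
            raise ValueError("A licensed physician must make the final determination.")
        self.reviewing_physician = physician_id
        self.final_decision = decision


# Usage: AI suggests, physician decides -- and may disagree.
case = ReviewCase(case_id="UR-1001")
case.ai_recommendation = AIRecommendation(Decision.DENIED, "Criteria not met per notes", 0.71)
case.finalize(physician_id="MD-552", decision=Decision.APPROVED)  # human overrides AI
```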
Organizations must clearly inform patients when AI is used in decisions about their care, and patients should know how to reach a human if needed. This is especially important for staff who manage patient communication.
In California, AI-generated data is treated as personal information under the California Consumer Privacy Act (CCPA). Healthcare organizations must therefore protect AI outputs with the same safeguards they apply to other health information, preventing unauthorized access; one simple tagging approach is sketched below.
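As a rough illustration, assuming a hypothetical internal data store, the sketch below tags every AI-generated record as personal information so that existing access-control and retention policies apply to it automatically. The classification label is an assumption for illustration, not CCPA-defined terminology.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical classification label: under AB 1008, AI-generated data about a
# consumer is personal information, so it is tagged the same way as other PHI/PII.
PERSONAL_INFORMATION = "personal_information"


@dataclass(frozen=True)
class DataRecord:
    subject_id: str          # the patient the data is about
    content: str
    source: str              # e.g. "clinician_note" or "ai_model_output"
    classification: str
    created_at: datetime


def store_ai_output(subject_id: str, content: str) -> DataRecord:
    """AI outputs inherit the personal-information classification by default."""
    return DataRecord(
        subject_id=subject_id,
        content=content,
        source="ai_model_output",
        classification=PERSONAL_INFORMATION,  # never store AI output unclassified
        created_at=datetime.now(timezone.utc),
    )
```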
Health plans must verify that AI does not introduce bias or unfair treatment. This requires ongoing monitoring and updates to AI systems to satisfy federal and state anti-discrimination rules, especially for diverse patient groups; a simplified monitoring sketch follows.
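One simplified approach, shown below with hypothetical group labels and an arbitrary threshold, is to compare AI-suggested denial rates across patient groups and flag large gaps for human investigation. Real fairness auditing requires statistical rigor well beyond this sketch.

```python
from collections import defaultdict


def denial_rate_by_group(cases: list[dict]) -> dict[str, float]:
    """cases: [{'group': 'A', 'ai_suggested_denial': True}, ...]"""
    totals: dict[str, int] = defaultdict(int)
    denials: dict[str, int] = defaultdict(int)
    for c in cases:
        totals[c["group"]] += 1
        denials[c["group"]] += int(c["ai_suggested_denial"])
    return {g: denials[g] / totals[g] for g in totals}


def flag_disparities(rates: dict[str, float], max_gap: float = 0.10) -> list[str]:
    """Flag groups whose denial rate exceeds the lowest group rate by more
    than max_gap. The 0.10 gap is an illustrative choice, not a legal standard."""
    baseline = min(rates.values())
    return [g for g, r in rates.items() if r - baseline > max_gap]


rates = denial_rate_by_group([
    {"group": "A", "ai_suggested_denial": True},
    {"group": "A", "ai_suggested_denial": False},
    {"group": "B", "ai_suggested_denial": True},
    {"group": "B", "ai_suggested_denial": True},
])
print(rates, flag_disparities(rates))  # {'A': 0.5, 'B': 1.0} ['B']
```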
Health plans and providers in California must keep detailed records about AI tools, training data, policies, and compliance efforts, since regulators will review this information. Teams need data systems that can stand up to audits.
These laws bar AI from making final medical necessity decisions, but they do not eliminate AI's supporting role in utilization reviews. Healthcare organizations that pair AI automation with strong clinical oversight can improve efficiency while remaining compliant.
In utilization management, AI can quickly surface patterns, extract key clinical information, prioritize urgent cases, and perform first-pass checks that support human reviewers. This reduces physicians' workload so they can focus on harder cases; a simple triage sketch follows.
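As a sketch of first-pass triage, assuming requests carry an urgency flag and a submission timestamp, the snippet below orders a review queue so expedited cases surface first, oldest first within each tier.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class PARequest:
    request_id: str
    urgent: bool
    submitted: datetime


def triage(requests: list[PARequest]) -> list[PARequest]:
    """First-pass triage: expedited requests first, then oldest first."""
    # "not r.urgent" sorts urgent (False == 0) ahead of standard (True == 1).
    return sorted(requests, key=lambda r: (not r.urgent, r.submitted))


queue = triage([
    PARequest("PA-2", urgent=False, submitted=datetime(2025, 3, 1, 9, 0)),
    PARequest("PA-1", urgent=True, submitted=datetime(2025, 3, 2, 9, 0)),
])
print([r.request_id for r in queue])  # ['PA-1', 'PA-2'] -- urgent case jumps the queue
```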
The CMS Prior Authorization API rule, with compliance required starting in 2027, obliges payers to return PA decisions quickly: within 72 hours for expedited (urgent) requests and within seven calendar days for standard ones. AI can help meet these deadlines by automating routine checks, verifying documentation, and drafting initial recommendations for human reviewers. The deadline arithmetic itself is simple, as the sketch below shows.
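Assuming the clock starts when the request is received, a minimal sketch of the deadline computation:

```python
from datetime import datetime, timedelta, timezone


def decision_deadline(received_at: datetime, expedited: bool) -> datetime:
    """Deadline per the timeframes cited above: 72 hours for expedited
    requests, 7 calendar days for standard ones."""
    window = timedelta(hours=72) if expedited else timedelta(days=7)
    return received_at + window


received = datetime(2025, 6, 2, 14, 30, tzinfo=timezone.utc)
print(decision_deadline(received, expedited=True))   # 2025-06-05 14:30:00+00:00
print(decision_deadline(received, expedited=False))  # 2025-06-09 14:30:00+00:00
```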
AI tools integrated with EHR systems can extract relevant patient data during clinical workflows, cutting the delays of manual chart review and exploiting AI's ability to sort large volumes of data quickly. A hedged integration sketch follows.
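As one hedged sketch, assuming the EHR exposes a standard FHIR R4 REST endpoint (the base URL and access token below are placeholders, and a real integration would authenticate via SMART on FHIR), a patient's condition list might be pulled like this:

```python
import requests

# Placeholder values -- a real integration would use the EHR vendor's FHIR
# base URL and an OAuth 2.0 access token.
FHIR_BASE = "https://ehr.example.com/fhir/R4"
TOKEN = "REPLACE_WITH_ACCESS_TOKEN"


def fetch_conditions(patient_id: str) -> list[str]:
    """Return display text of a patient's FHIR Condition resources."""
    resp = requests.get(
        f"{FHIR_BASE}/Condition",
        params={"patient": patient_id},
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Accept": "application/fhir+json"},
        timeout=30,
    )
    resp.raise_for_status()
    bundle = resp.json()
    # Condition.code.text is optional in FHIR, so guard against its absence.
    return [
        entry["resource"]["code"]["text"]
        for entry in bundle.get("entry", [])
        if "text" in entry.get("resource", {}).get("code", {})
    ]
```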
AI automation tools can send patient notices about utilization decisions and AI involvement. These systems append the required disclaimers about AI use and tell patients how to reach a human provider, consistent with transparency rules such as California's AB 3030; a minimal notice-generation sketch follows.
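A minimal sketch of notice generation follows; the disclaimer wording is illustrative only, not the statutory AB 3030 language, which organizations should take from counsel-approved templates.

```python
# Illustrative disclaimer -- not the statutory AB 3030 text.
AI_DISCLAIMER = (
    "This message was generated with the assistance of artificial intelligence "
    "and reviewed under our utilization review policies."
)
CONTACT_LINE = "To speak with a human representative or your provider, call {phone}."


def build_patient_notice(body: str, ai_assisted: bool, phone: str) -> str:
    """Append the AI disclaimer and human-contact instructions when AI was used."""
    parts = [body]
    if ai_assisted:
        parts.append(AI_DISCLAIMER)
        parts.append(CONTACT_LINE.format(phone=phone))
    return "\n\n".join(parts)


print(build_patient_notice(
    body="Your prior authorization request PA-1001 has been approved.",
    ai_assisted=True,
    phone="1-800-555-0100",
))
```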
Using AI requires ongoing validation. Models should be updated with current clinical data to avoid stale or biased outputs, and organizations should run continuous quality checks informed by clinician feedback to keep AI accurate and fair. One simple quality signal is sketched below.
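One illustrative quality signal is the clinician override rate: how often the final human determination differs from the AI suggestion. A rising rate can indicate model drift. The 20% threshold below is an assumed example, not a regulatory standard.

```python
def override_rate(cases: list[dict]) -> float:
    """cases: [{'ai_decision': 'denied', 'final_decision': 'approved'}, ...]"""
    if not cases:
        return 0.0
    overrides = sum(1 for c in cases if c["ai_decision"] != c["final_decision"])
    return overrides / len(cases)


def needs_review(rate: float, threshold: float = 0.20) -> bool:
    """Flag the model for retraining/review if overrides exceed an assumed 20%."""
    return rate > threshold


recent = [
    {"ai_decision": "denied", "final_decision": "approved"},
    {"ai_decision": "approved", "final_decision": "approved"},
    {"ai_decision": "approved", "final_decision": "approved"},
]
rate = override_rate(recent)
print(f"override rate: {rate:.0%}, needs review: {needs_review(rate)}")
# override rate: 33%, needs review: True
```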
Following rules like SB 1120 brings challenges for healthcare administrators and tech leaders.
Automating reviews promises faster turnaround and cost savings, but systems must clearly separate AI support from final human decisions to satisfy the rules.
Physicians already carry heavy workloads. To obtain timely review of AI recommendations, systems need intuitive interfaces, well-tuned alerts, and possibly additional staff to keep utilization reviews moving.
Because AI-generated data counts as personal information, organizations must strengthen policies for data security, encryption, access control, and breach response, and compliance teams must keep pace with evolving privacy laws.
Administrators should plan how to explain AI's role in plain language to patients and providers to avoid confusion or distrust; training staff to deliver these explanations helps preserve patient trust.
Documentation of AI use, clinical decisions, and policies must be clear and routinely maintained. IT systems should generate audit trails automatically to reduce manual recordkeeping; a minimal append-only log sketch follows.
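A minimal sketch of automatic audit-trail generation, using illustrative field names: every AI recommendation and human determination appends one timestamped JSON-lines entry that the application never edits.

```python
import json
from datetime import datetime, timezone


def append_audit_entry(log_path: str, case_id: str, actor: str,
                       action: str, detail: str) -> None:
    """Append one JSON-lines audit record; old entries are never modified."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,
        "actor": actor,      # e.g. "ai_model:v3.2" or "physician:MD-552"
        "action": action,    # e.g. "ai_recommendation", "final_determination"
        "detail": detail,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


append_audit_entry("ur_audit.jsonl", "UR-1001", "ai_model:v3.2",
                   "ai_recommendation", "suggested denial, confidence 0.71")
append_audit_entry("ur_audit.jsonl", "UR-1001", "physician:MD-552",
                   "final_determination", "approved after chart review")
```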
Reserving AI-driven medical necessity decisions for licensed physicians reshapes how utilization reviews are conducted in health insurance. Federal and state rules, led by California's SB 1120, set clear standards: human clinical judgment stays at the center of care decisions while AI serves as a support tool. Medical practice administrators, owners, and IT managers across the U.S. must balance regulatory compliance, operational efficiency, patient privacy, and transparency with patients. Used as a support tool in utilization reviews, AI can improve administrative work without breaching law or ethics. As AI evolves, tracking new rules and maintaining strong human oversight will remain essential to managing health insurance reviews well and responsibly.
California enacted three key laws regulating AI in healthcare: AB 3030 mandates disclaimers for AI use in patient communication; SB 1120 restricts final medical necessity decisions to physicians only, requiring disclosure when AI supports utilization reviews; and AB 1008 updates the CCPA to classify AI-generated data as personal information with consumer protections.
AB 3030 requires healthcare providers to include disclaimers when using generative AI in patient communications, informing patients about AI involvement and providing instructions to contact a human provider. It applies to hospitals, clinics, and physician offices using AI-generated clinical information, with enforcement by medical boards but no private right of action.
SB 1120 prohibits AI systems from making final decisions on medical necessity in health insurance utilization reviews. AI can assist but physicians must make final determinations. Health plans must disclose AI use in these processes. Noncompliance risks enforcement and penalties by the California Department of Managed Health Care.
AB 1008 clarifies that AI-generated data is classified as personal information under the CCPA. Businesses must extend consumer rights to any AI-generated personal data, ensuring protections equivalent to those for traditional personal data, including rights over how the data is processed and protections in the event of a breach.
Healthcare AI agents must clearly disclose AI involvement in communications and provide ways for patients to contact a human provider, as per AB 3030. This transparency seeks to prevent confusion and build trust in AI tools used in care delivery.
By treating AI-generated data as personal information (AB 1008), enforcing disclosure of AI usage (AB 3030, SB 1120), and restricting AI’s autonomous decision-making capacity, California’s laws aim to protect patient privacy, ensure data security, and maintain human oversight over sensitive healthcare decisions.
Yes: these laws carry enforcement mechanisms. Enforcement agencies include the Medical Board of California, the Osteopathic Medical Board of California, the Department of Managed Health Care, and the California Attorney General. Violations may lead to civil penalties and fines; however, these laws generally do not provide a private right of action for patients.
Hospitals must implement AI transparency protocols, ensuring disclaimers accompany AI communications. Developers must document training data (AB 2013) and comply with data privacy rules, while AI systems must be designed to support but not replace physician decisions, aligning technology use with regulatory mandates.
California’s comprehensive approach to AI oversight—including transparency mandates, privacy protections for AI data, and restrictions on AI decision authority—serves as a model likely to influence federal and other states’ policies, promoting ethical and responsible AI integration in healthcare.
Healthcare entities face ongoing challenges including adapting to frequent legislative updates, integrating compliance controls for AI disclosures, managing AI training data documentation, ensuring human oversight in AI decisions, and preparing for enforcement actions related to privacy breaches or nondisclosure of AI use.