SB 1120 became California law when Governor Gavin Newsom signed it in September 2024, and it took effect on January 1, 2025. Authored by Senator Josh Becker and supported by the California Medical Association, which represents about 50,000 physicians, the law governs how health plans and disability insurers use AI in utilization review: the process insurers use to decide whether particular healthcare services or treatments are needed.
The main goal of SB 1120 is to prevent AI from making decisions that affect patient care on its own. For example, AI should not be the sole basis for denying or delaying coverage for a service without a human checking the decision. The law reserves final determinations of medical necessity for licensed healthcare professionals: AI can assist, but humans must make the final call based on each patient’s specific medical history and circumstances.
Senator Becker has said that AI can improve healthcare but cannot fully grasp the nuances of an individual patient’s health needs. AI can also show bias or make mistakes, which could cause wrongful denials, delays, or changes in care that harm patients. SB 1120 addresses these risks by requiring real people to supervise AI decisions.
The law says AI tools must base their decisions on each patient’s own medical history and condition. They can’t only use general group data. This keeps patients safe from wrong or unfair automatic decisions that don’t consider individual health details.
Licensed doctors or qualified healthcare workers have the final say on whether a service is medically necessary. If a service is denied, delayed, or changed, a human expert must approve this decision. AI can help by giving data or analysis, but it cannot replace a person’s judgment.
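The human-in-the-loop requirement can be pictured as a gate in the review pipeline. The sketch below is illustrative only; the class and field names are assumptions, not from any real payer system. Its one rule mirrors the law: an adverse recommendation (deny, delay, or modify) cannot become a final decision without a licensed reviewer's sign-off.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    case_id: str
    action: str          # "approve", "deny", "delay", or "modify"
    rationale: str

@dataclass
class FinalDecision:
    case_id: str
    action: str
    reviewer_license: Optional[str]  # None only for non-adverse outcomes

# Actions SB 1120 treats as adverse and therefore human-gated.
ADVERSE = {"deny", "delay", "modify"}

def finalize(rec: Recommendation,
             reviewer_license: Optional[str] = None) -> FinalDecision:
    """Refuse to finalize an adverse AI recommendation unless a
    licensed clinician has signed off on it."""
    if rec.action in ADVERSE and reviewer_license is None:
        raise PermissionError(
            f"case {rec.case_id}: adverse action '{rec.action}' requires "
            "review by a licensed clinician"
        )
    return FinalDecision(rec.case_id, rec.action, reviewer_license)
```

The point of the design is that the rule lives in one choke point (`finalize`), so no upstream AI component can emit a denial that bypasses human review.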
Health plans and insurers must tell patients and providers if AI tools were used in making coverage decisions. Patients can ask for a human to review AI-based recommendations. This helps keep trust between patients and the healthcare system and stops fears about secret decisions.
The bill requires that AI be used without unfair discrimination: AI must not treat patients differently because of race, disability, quality of life, or other protected traits. Some AI tools have previously shown biases that produced unfair disparities in care, and this law aims to prevent that.
Health plans and insurers must have written policies describing how they use AI in utilization review, and they must review and update their AI tools regularly to keep them accurate, fair, and compliant. Knowingly violating the law can carry criminal penalties.
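A periodic fairness review can start with something as simple as comparing outcome rates across patient groups. The sketch below is one possible audit heuristic, not a regulatory requirement; the function names and the tolerance threshold are assumptions chosen for illustration.

```python
from collections import defaultdict

def approval_rate_gap(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs.
    Returns the largest gap in approval rates across groups."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Illustrative audit rule: gaps above 5 percentage points
# trigger a deeper manual review of the model.
TOLERANCE = 0.05

def needs_audit(decisions):
    return approval_rate_gap(decisions) > TOLERANCE
```

A real audit would look at more than raw approval rates (case mix, clinical severity, appeal outcomes), but a disparity screen like this is a cheap first tripwire.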
Studies have shown that some AI systems under-identify Black patients who need extra care because they rely on biased proxies, such as using healthcare cost as a stand-in for medical need. This example shows why careful oversight of AI matters if healthcare inequalities are not to widen.
SB 1120 brings several important changes for medical office leaders and IT staff.
These new duties mean practice managers and IT staff should stay current on AI regulations and work with payers to keep operations compliant and running smoothly.
SB 1120 is part of a broader movement to regulate AI. California passed roughly 19 AI-related laws in 2024 aimed at making AI use transparent, fair, and safe in public-facing settings.
Other states, such as Colorado and Utah, have similar laws. Colorado’s SB 24-205 covers ‘high-risk’ AI systems that affect access to healthcare services, requiring disclosure when consumers interact with AI and annual impact assessments to reduce unfair bias.
At the national level, the Centers for Medicare and Medicaid Services (CMS) released a rule for 2025 that matches California’s law. CMS says AI used in Medicare Advantage prior authorizations must be based on patient-specific data instead of broad algorithms. CMS also wants clear information and individual patient care considered.
The National Association of Insurance Commissioners recommends insurers set strong rules, manage risks, and audit AI systems to avoid unfair treatment.
These laws show how important it is to balance AI’s speed and convenience against protecting patients from errors and bias, and they create opportunities for health plans to thoughtfully adjust their processes.
AI can help with complex work in health plans and medical offices. Utilization review is time-consuming and repetitive, and AI can speed it up by analyzing clinical data, flagging cases for review, and standardizing documentation.
But the law warns not to rely on AI alone for final decisions. Automation should help doctors, not take over their judgment.
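One way to automate without letting AI decide is to have it only sort and annotate the review queue. The sketch below is a hypothetical triage step (the field names and scoring rules are assumptions): every case still ends in a human reviewer's queue, and the AI merely prioritizes cases and attaches context.

```python
def triage(cases):
    """Order cases for human review; the AI never issues a determination.
    Each case is a dict with 'id', 'urgent' (bool), and
    'missing_docs' (list of absent document names)."""
    def priority(case):
        score = 0
        if case["urgent"]:
            score += 10                      # clinical urgency first
        score += len(case["missing_docs"])   # incomplete files need attention
        return -score                        # sort highest score first
    ordered = sorted(cases, key=priority)
    for case in ordered:
        case["flag"] = "incomplete" if case["missing_docs"] else "ready"
    return ordered
```

Because the output is an ordered, annotated list rather than a decision, this kind of automation stays on the permitted side of SB 1120: reviewers save time, but every determination remains theirs.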
In medical offices, AI tools such as Simbo AI’s phone automation can reduce staff workload by handling routine calls, appointment scheduling, and common questions accurately, helping offices run more smoothly.
While Simbo AI focuses on phone automation, it can also support utilization review workflows.
IT managers must configure these tools carefully to protect data and comply with HIPAA and state rules. Being transparent about AI use and letting patients reach a human helps maintain their trust.
AI can only work safely and well with sound data management: the information it uses for utilization review must be accurate, current, and securely shared, and IT teams need to verify that it is.
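SB 1120's requirement that determinations rest on patient-specific data suggests a basic completeness check before any AI tool is allowed to run on a record. A minimal sketch follows; the field names describe a hypothetical record format, not any real standard.

```python
# Fields a hypothetical utilization-review record must carry before an
# AI tool may analyze it; names are illustrative assumptions.
REQUIRED_FIELDS = ("patient_id", "diagnosis_codes",
                   "clinical_history", "requested_service")

def validate_record(record: dict) -> list:
    """Return the required fields that are missing or empty."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

def ready_for_ai(record: dict) -> bool:
    """Only complete, patient-specific records go to AI analysis."""
    return not validate_record(record)
```

Gating AI input this way also gives IT teams an audit trail: every record rejected by `validate_record` is a data-quality problem to fix upstream rather than a silent gap in an automated decision.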
Even though AI can handle many utilization review tasks (an estimated 50-75%), SB 1120 adds new compliance duties for health plans and their vendors.
Experts note that vague terms like “fair and equitable” can make compliance hard to pin down, and some warn that the extra requirements will raise operating costs. But the rules can also help rebuild patient trust by preventing unfair AI decisions.
SB 1120 marks an important move toward regulated, fair, and patient-centered use of AI in health plan reviews. The law shows growing awareness that AI can help healthcare work better, but human clinical judgment is key to keep patients safe and treated fairly. Medical practice managers, owners, and IT teams need to know and follow these rules to keep their work legal and serve patients well in a healthcare environment that uses more AI.
The new AI laws in California aim to establish guidelines for AI applications in clinical settings to ensure transparency, fairness in patient interactions, and protection against biases affecting care delivery.
AB 3030 requires health care providers that use generative AI for patient communications to disclose when a communication was produced by AI without review by a medical professional, and to provide instructions for reaching a human through alternative channels.
AB 3030 is set to take effect on January 1, 2025.
SB 1120 requires health plans using AI for utilization reviews to ensure compliance with fair application requirements and mandates that only licensed professionals evaluate clinical issues.
SB 24-205 applies to ‘high-risk’ AI systems that affect consumer access to healthcare services and requires developers to manage discrimination risks.
Developers must disclose risk management measures, intended use, limitations, and conduct annual impact assessments on their models.
Utah’s law requires individuals in regulated professions to prominently disclose when patients are interacting with generative AI content during service provision.
Utah’s Office of Artificial Intelligence Policy aims to promote AI innovation and develop future policies regarding AI utilization.
Federal regulations seek to categorize AI under existing nondiscrimination laws and require compliance with specific reporting and transparency standards.
Organizations should implement governance frameworks to mitigate risks, monitor legislative developments, and adapt to evolving compliance requirements for AI usage.