Understanding the Implications of SB 1120 for Health Plans: Ensuring Fair Application in AI Utilization Reviews

Governor Gavin Newsom signed SB 1120 into law in California in September 2024, and it took effect on January 1, 2025. The bill, authored by Senator Josh Becker, was supported by the California Medical Association, which represents about 50,000 physicians. It governs how health plans and disability insurers use AI in utilization review, the process insurers use to decide whether particular healthcare services or treatments are medically necessary.

The main goal of SB 1120 is to prevent AI from making patient-care decisions on its own. For example, AI should not be the sole basis for denying or delaying coverage for a service without a human check. The law states that only licensed healthcare professionals can make final determinations of medical necessity. AI can assist, but humans must make the final call based on each patient's specific medical history and situation.

Senator Becker said AI can improve healthcare, but it cannot fully understand the details of a patient’s health needs. AI can show bias or make mistakes. This could cause wrongful denials, delays, or changes in care that might harm patients. SB 1120 was made to fix these problems by requiring real people to supervise AI decisions.

Key Requirements of SB 1120 for Health Plans and Insurers

Individualized Medical Review

The law says AI tools must base their decisions on each patient’s own medical history and condition. They can’t only use general group data. This keeps patients safe from wrong or unfair automatic decisions that don’t consider individual health details.

Human Oversight and Final Authority

Licensed doctors or qualified healthcare workers have the final say on whether a service is medically necessary. If a service is denied, delayed, or changed, a human expert must approve this decision. AI can help by giving data or analysis, but it cannot replace a person’s judgment.

Transparency and Patient Trust

Health plans and insurers must tell patients and providers if AI tools were used in making coverage decisions. Patients can ask for a human to review AI-based recommendations. This helps keep trust between patients and the healthcare system and stops fears about secret decisions.

Fair and Equitable Application

The bill requires that AI be applied without unfair discrimination: AI must not treat patients differently because of race, disability, quality of life, or other protected characteristics. Some AI tools have previously shown biases that produced unfair disparities in care, and this law aims to prevent that.

Compliance and Accountability

Health plans and insurers must maintain written policies on how they use AI in utilization reviews, and they must review and update AI systems regularly to keep them accurate, fair, and compliant. Willful violations can lead to criminal charges.

Studies have shown that some AI systems under-identify Black patients who need extra care because they rely on biased proxies, such as using healthcare cost as a stand-in for health need. This example shows why careful oversight of AI matters for avoiding wider healthcare inequities.

Impact on Medical Practice Administrators and IT Managers

SB 1120 brings some important changes for medical office leaders and IT staff:

  • Coordination with Health Plans: Offices need clear contact with insurers about how AI is used in utilization reviews. They should confirm that requests for approvals are checked by licensed clinicians, as the law says.
  • Data Documentation: Medical records must be detailed and accurate. AI tools depend on patient history and provider information. Offices should share clinical data quickly and fully to support fair AI decisions.
  • Patient Communication: Since the law demands transparency, medical offices may need to help patients understand when AI was used, and assist them in asking for human reviews if needed.
  • Compliance Monitoring: IT teams must keep electronic health records and health information exchanges secure and follow HIPAA rules. They must protect patient data during AI reviews.

These duties mean practice managers and IT staff should keep up with AI laws and work with payers to keep things running well and legal.


State and Federal Context: A Widening Regulatory Focus

SB 1120 is part of a broader movement to regulate AI. California passed almost 19 AI-related laws in 2024, focused on making AI use transparent, fair, and safe in public-facing settings.

Other states like Colorado and Utah have similar laws. Colorado’s SB 24-205 requires that when generative AI affects patient communication, it must be disclosed. It also demands yearly checks to reduce unfair bias in “high-risk” AI models.

At the national level, the Centers for Medicare and Medicaid Services (CMS) released a rule for 2025 that matches California’s law. CMS says AI used in Medicare Advantage prior authorizations must be based on patient-specific data instead of broad algorithms. CMS also wants clear information and individual patient care considered.

The National Association of Insurance Commissioners recommends insurers set strong rules, manage risks, and audit AI systems to avoid unfair treatment.

These laws show how important it is to balance AI’s speed and convenience with protecting patients from errors and bias. They also point to chances for health plans to carefully adjust their processes.

AI and Workflow Management in Medical Practices and Health Plans

Improving Efficiency While Maintaining Compliance

AI can help with complex work in health plans and medical offices. Utilization review takes a lot of time and is repetitive. AI can speed this up by analyzing data, marking cases for review, and making paperwork consistent.

For example, AI can:

  • Quickly check patient histories and compare requests with clinical rules.
  • Spot low-risk cases for fast approvals.
  • Alert human reviewers if cases are complex or if bias might happen.
  • Create needed reports and documents for audits.
  • Help schedule follow-ups by humans for AI decisions.

But the law warns not to rely on AI alone for final decisions. Automation should help doctors, not take over their judgment.
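As an illustration, the triage steps above can be sketched as a simple routing helper. This is a minimal sketch with hypothetical field names (`risk_score`, `patient_history`), not any vendor's actual system; the key design point is that the tool may fast-track approvals but never denies on its own, so anything that is not a clear-cut approval goes to a licensed human reviewer.

```python
# Sketch of an AI-assisted utilization-review triage step.
# Field names are hypothetical. Consistent with SB 1120's intent,
# the tool never denies, delays, or modifies care by itself: every
# non-obvious case is routed to a licensed clinician.

LOW_RISK_THRESHOLD = 0.2  # illustrative cutoff for fast-track approval

def triage_request(request: dict) -> str:
    """Return a routing decision for a prior-authorization request."""
    # Individualized review: refuse to triage without patient-specific data.
    if not request.get("patient_history"):
        return "human_review"  # incomplete record -> clinician must look

    risk = request.get("risk_score", 1.0)  # assumed model output in [0, 1]
    if risk < LOW_RISK_THRESHOLD:
        return "auto_approve"  # approvals may be fast-tracked

    # The tool may recommend, but only a licensed clinician can deny,
    # delay, or change care, so everything else goes to a human.
    return "human_review"
```

Note that the only fully automated outcome is an approval; a denial can never be produced without a human in the loop, which mirrors the human-oversight requirement described above.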


Practical Applications for Medical Office Administrators and IT Managers

In medical offices, AI tools such as Simbo AI's phone automation can reduce staff workload. These tools handle routine calls, appointment scheduling, and questions accurately, helping offices run more smoothly.

While Simbo AI focuses on phone automation, it supports utilization review by:

  • Making sure communications from health plans and providers are on time and clear.
  • Giving patients options to talk to a real person when AI messages are sent, following laws like California’s AB 3030.
  • Improving workflow by sending calls and requests where they belong, freeing staff to handle complex work.

IT managers must set up these tools carefully to protect data and follow HIPAA and state rules. Being clear about AI use and letting patients reach humans helps keep their trust.

Maintaining Data Integrity and Security

Good data management is needed for AI to work safely and well. Information that AI uses for utilization review must be correct, current, and securely shared.

IT teams need to make sure:

  • AI tools connect well with current electronic health records and practice systems.
  • Patient data is protected by encryption and access controls.
  • There are records and audits to track AI decisions and follow rules.
  • Staff are trained on AI limits and how to pass tough cases to human experts.

Challenges and Administrative Considerations

Even though AI can handle many utilization review tasks (around 50-75%), SB 1120 adds new duties, like:

  • Making policies that support human review rules.
  • Clarifying contracts with AI vendors about following the law.
  • Checking AI systems often to find and fix errors or bias.
  • Training staff on how to understand AI results and manage requests to override AI decisions.
  • Increasing paperwork and reports sent to state regulators.

Experts say vague terms like “fair and equitable” can make it hard for health plans to follow the law clearly. Some warn these extra rules may cost more to run. But they can also help rebuild patient trust by stopping unfair AI decisions.

What Medical Practices Should Consider Next

  • Establish Strong Partnerships with Health Plans: Work closely with insurers to learn how AI is used, making sure real clinicians keep final control.
  • Ensure Accurate Documentation: Provide detailed and timely medical records to support fair AI decisions.
  • Educate Staff and Patients: Help practice staff know about AI roles and laws, and tell patients about their rights to human review and clear information.
  • Invest in Secure IT Infrastructure: Use safe health IT systems that keep patient data secure and support audits.
  • Monitor Legislative Developments: Watch for new AI laws that might change work processes or patient communication.

SB 1120 marks an important move toward regulated, fair, and patient-centered use of AI in health plan reviews. The law shows growing awareness that AI can help healthcare work better, but human clinical judgment is key to keep patients safe and treated fairly. Medical practice managers, owners, and IT teams need to know and follow these rules to keep their work legal and serve patients well in a healthcare environment that uses more AI.


Frequently Asked Questions

What is the purpose of the new AI laws in California?

The new AI laws in California aim to establish guidelines for AI applications in clinical settings to ensure transparency, fairness in patient interactions, and protection against biases affecting care delivery.

What does AB 3030 require from healthcare providers?

AB 3030 mandates health care providers using generative AI to disclose that communications were produced using AI without medical review and to provide instructions for alternative communication methods.

When will AB 3030 take effect?

AB 3030 is set to take effect on January 1, 2025.

What are the implications of SB 1120 for health plans?

SB 1120 requires health plans using AI for utilization reviews to ensure compliance with fair application requirements and mandates that only licensed professionals evaluate clinical issues.

What kind of AI systems fall under Colorado’s SB 24-205?

SB 24-205 applies to "high-risk" AI systems that affect consumer access to healthcare services and requires developers to manage discrimination risks.

What must developers of high-risk AI models disclose?

Developers must disclose risk management measures, intended use, limitations, and conduct annual impact assessments on their models.

What obligations does Utah’s Artificial Intelligence Policy Act impose?

It requires individuals in regulated professions to disclose prominently when patients are interacting with GenAI content during service provision.

What role does the Office of Artificial Intelligence Policy play in Utah?

The Office of Artificial Intelligence Policy aims to promote AI innovation and develop future policies regarding AI utilization.

How do federal regulations currently impact AI usage in healthcare?

Federal regulations seek to categorize AI under existing nondiscrimination laws and require compliance with specific reporting and transparency standards.

What can healthcare organizations do to ensure compliance with new AI laws?

Organizations should implement governance frameworks to mitigate risks, monitor legislative developments, and adapt to evolving compliance requirements for AI usage.