SB-1120 is a California law that takes effect in 2025. It is designed to protect patients and to ensure that AI tools used in healthcare are transparent and fair. The law requires licensed physicians to oversee AI programs that help make healthcare decisions, a requirement aimed especially at health plans and disability insurers.
Many healthcare workers already use AI for routine tasks such as scheduling appointments or processing claims. When AI is used to interpret medical data or suggest treatments, however, the risks rise because errors can directly affect patients' health. SB-1120 therefore requires physicians to supervise such AI to protect patients from mistakes and unfair use.
Under licensed oversight, AI tools must be transparent about how they work, protect privacy, and operate fairly and without discrimination. This guards against problems like biased decisions, leaks of patient information, and misuse of health data.
SB-1120 emphasizes transparency in AI use. Health plans and insurers that use AI must ensure the decisions it informs can be understood by the physicians who oversee them, and those physicians must verify that AI-assisted decisions comply with consumer-protection and civil-rights laws.
The law works alongside other California statutes, such as the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA), which protect personal and sensitive data, including data generated by AI.
SB-1120 also complements Assembly Bill 3030 (AB-3030), which requires healthcare providers to tell patients when AI is used in their care, particularly when AI generates or communicates clinical information. Disclosure keeps patients informed about AI's involvement and preserves trust between them and their doctors.
For healthcare administrators and IT managers, complying with SB-1120 means ensuring that AI tools operate under physician oversight; AI software cannot act autonomously without accountability. Compliance may require redesigning workflows, training staff, and improving how AI results are reported.
SB-1120 requires licensed physicians to supervise AI systems used in healthcare decisions, reviewing AI recommendations and outputs to confirm they meet medical standards and ethical rules.
In practice, AI should assist human judgment, not replace it. Medical office administrators should understand that billing or claims decisions influenced by AI must be reviewed by physicians who know how to interpret AI results.
Physician oversight also helps prevent bias in AI. Systems trained on skewed data can treat some patients unfairly; physicians can catch and correct these problems before they harm patients.
When AI is used to decide whether a treatment is medically necessary, a physician must confirm the decision. This required step makes care safer and more accountable.
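As a concrete illustration, such a review gate might look like the following Python sketch. Everything here is hypothetical: SB-1120 does not prescribe any data structure or API, and every name below is our own invention.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical sketch only: field names and flow are illustrative,
# not taken from the statute or any vendor system.

@dataclass
class AIRecommendation:
    patient_id: str
    suggestion: str                      # e.g., "deny prior authorization"
    model_version: str
    reviewed_by: Optional[str] = None    # physician license number
    approved: Optional[bool] = None
    reviewed_at: Optional[datetime] = None

def physician_review(rec: AIRecommendation, license_no: str,
                     approve: bool) -> AIRecommendation:
    """Record the licensed physician's determination on the AI output."""
    rec.reviewed_by = license_no
    rec.approved = approve
    rec.reviewed_at = datetime.now(timezone.utc)
    return rec

def finalize(rec: AIRecommendation) -> str:
    """Refuse to act on an AI suggestion no physician has reviewed."""
    if rec.approved is None:
        raise PermissionError("AI recommendation requires physician review first")
    return rec.suggestion if rec.approved else "escalated for manual determination"
```

The design point is simply that `finalize` will not act on any AI suggestion until a physician's review has been recorded.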
Using AI in healthcare raises significant privacy questions. Health data is highly sensitive, and mishandling it can lead to identity theft or discrimination.
California has updated its privacy laws, through measures such as AB-1008, to cover data handled by AI. AI systems that process personal information must follow rules that keep it private, accurate, and secure.
The California Privacy Protection Agency (CPPA) enforces these rules, ensuring that AI companies and healthcare providers comply with the law.
Healthcare managers and IT staff should set strong data-governance rules when using AI. Consistent with the privacy laws described above, this includes giving AI tools access only to the data they actually need, keeping records of who accesses patient information, and protecting identifiers before data reaches an AI system. A simplified sketch of two such safeguards follows.
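The Python sketch below illustrates data minimization and access logging. It is a rough illustration under our own assumptions; the field names, the allow-list, and the pseudonymization scheme are hypothetical, not requirements quoted from the CCPA, CPRA, or AB-1008.

```python
import hashlib
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("phi_access")

# Hypothetical allow-list: the only fields an AI scheduling assistant
# is permitted to see (data minimization).
ALLOWED_FIELDS = {"appointment_time", "department", "callback_number"}

def minimize(record: dict) -> dict:
    """Drop every field the AI tool does not actually need."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def pseudonymize(patient_id: str, salt: str) -> str:
    """Replace a direct identifier with a salted hash before AI processing."""
    return hashlib.sha256((salt + patient_id).encode()).hexdigest()[:16]

def send_to_ai_tool(record: dict, patient_id: str, user: str) -> dict:
    """Log the access, then hand over only the minimized record."""
    audit_log.info("user=%s accessed patient=%s", user,
                   pseudonymize(patient_id, "demo-salt"))
    return minimize(record)
```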
Although SB-1120 and related laws apply only in California, their effects can reach beyond the state. California's rules may serve as a model for other states or for federal lawmakers drafting their own AI legislation.
Healthcare providers across the U.S. should pay attention to California's rules, especially if they serve patients or work with insurers in California. Noncompliance can bring legal trouble, fines, and a loss of patient trust.
Medical office owners and managers in every state should watch for new AI laws and adapt their AI use to include physician oversight, transparency, and privacy protection.
Even with SB-1120 and California's AI rules in place, AI can still automate many healthcare office tasks. For medical managers and IT teams, it can streamline work such as scheduling appointments, processing claims, and handling routine phone calls.
Companies such as Simbo AI offer AI-driven phone answering and call-management services that speed up communication by handling high call volumes, answering common questions, and routing patients to the right person.
AI that touches clinical advice or patient data, however, must comply with the law: SB-1120 requires physician oversight of such tools, and AB-3030 requires organizations to tell patients when AI is used.
To integrate AI into workflows responsibly, healthcare IT managers should build physician review into AI-assisted clinical and coverage decisions, disclose AI use to patients as AB-3030 requires, and apply the data safeguards described above.
Although the rules are strict, AI still reduces paperwork and frees staff to focus on patient care. It can also improve patient satisfaction through faster answers and shorter wait times.
Several California bodies help regulate AI in healthcare, including the California Privacy Protection Agency (CPPA) and the state's Office of Emergency Services. Healthcare managers and IT workers should follow updates and guidance from these agencies as AI systems evolve.
Medical office leaders and IT teams should take concrete steps to follow the new AI rules: establish physician oversight of AI outputs, disclose AI use to patients, strengthen data-privacy safeguards, and document how AI-assisted decisions are reviewed.
California's AI healthcare laws are among the most detailed in the U.S. The state passed 18 AI laws that take effect starting in 2025, reflecting a plan to address AI's challenges while letting the technology develop responsibly.
State leaders, including Governor Gavin Newsom, recognize that AI can create privacy and safety risks, especially in healthcare. The new laws aim to balance patient protection with careful AI development.
Federal AI rules are still taking shape, but California's laws offer a useful template for healthcare organizations that want to use AI safely and fairly.
By following SB-1120 and related laws, medical office managers, owners, and IT staff can use AI tools to improve healthcare without compromising patient safety or privacy. Required physician oversight builds trust in AI-assisted decisions and supports responsible technology use in American healthcare.
AB-3030 requires healthcare providers to disclose when they use generative AI to communicate with patients, particularly regarding messages that contain clinical information. This aims to enhance transparency and protect patient rights during AI interactions.
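As a rough sketch of how a messaging pipeline might attach such a disclosure, consider the Python below. The disclosure wording and the clinician-review flag are placeholders; the statute, not this sketch, defines the required language, placement, and any exemptions.

```python
# Placeholder disclosure text; AB-3030 specifies its own required
# wording and display rules, which this sketch does not reproduce.
AI_DISCLOSURE = (
    "This message was generated by artificial intelligence. "
    "Contact our office to reach a human healthcare provider."
)

def prepare_patient_message(body: str, ai_generated: bool,
                            reviewed_by_clinician: bool) -> str:
    """Prepend a disclosure to unreviewed AI-generated clinical messages."""
    # We model human review as a simple flag here; this is an
    # assumption about workflow, not statutory text.
    if ai_generated and not reviewed_by_clinician:
        return f"{AI_DISCLOSURE}\n\n{body}"
    return body
```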
SB-1120 establishes limits on how healthcare providers and insurers can automate services, ensuring that licensed physicians oversee the use of AI tools. This legislation aims to ensure proper oversight and patient safety.
AB-1008 expands California’s privacy laws to include generative AI systems, stipulating that businesses must adhere to privacy restrictions if their AI systems expose personal information, thereby ensuring accountability in data handling.
AB-2013 mandates that AI companies disclose detailed information about the datasets used to train their models, including data sources, usage, data points, and the collection time period, enhancing accountability for AI systems.
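To show what such a disclosure record could contain, here is a minimal Python sketch. The field names and sample values are our own assumptions for illustration; AB-2013 itself defines the actual required contents.

```python
from dataclasses import dataclass
from typing import List

# Illustrative record of the kinds of training-data details AB-2013
# asks developers to publish. Field names are our shorthand, not the
# statute's, and the sample values are invented for demonstration.

@dataclass
class TrainingDataDisclosure:
    dataset_name: str
    sources: List[str]        # where the data came from
    purpose: str              # how the data is used in training
    num_data_points: int
    collection_start: str     # e.g., "2019-01"
    collection_end: str       # e.g., "2023-06"

disclosure = TrainingDataDisclosure(
    dataset_name="example-clinical-notes",
    sources=["licensed data vendor", "public research corpus"],
    purpose="fine-tuning a triage-summary model",
    num_data_points=1_200_000,
    collection_start="2019-01",
    collection_end="2023-06",
)
```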
SB-942 requires widely used generative AI systems to include provenance data in their metadata, indicating when content is AI-generated. This is aimed at increasing public awareness and ability to identify AI-generated materials.
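A bare-bones sketch of attaching provenance data to generated content might look like the following. The JSON field names follow no official schema; production systems would more likely adopt an established content-provenance standard such as C2PA, which we do not model here.

```python
import json
from datetime import datetime, timezone

def with_provenance(content: str, model_name: str) -> dict:
    """Wrap generated text with metadata marking it as AI-generated."""
    return {
        "content": content,
        "provenance": {
            "ai_generated": True,            # the core SB-942-style flag
            "generator": model_name,         # hypothetical field name
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

print(json.dumps(with_provenance("Sample AI text.", "example-model-v1"),
                 indent=2))
```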
SB-896 mandates a risk analysis by California’s Office of Emergency Services regarding generative AI’s dangers, in collaboration with leading AI companies. This aims to evaluate potential threats to critical infrastructure and public safety.
California enacted laws, such as AB-1831, that extend existing child pornography laws to include AI-generated content and make it illegal to blackmail individuals using AI-generated nudes, aiming to protect rights and enhance accountability.
AB-2885 provides a formal definition of AI in California law, establishing a clearer framework for regulation by defining AI as an engineered system capable of generating outputs based on its inputs.
Businesses interacting with California residents must comply with the new AI laws, especially around privacy and AI communications. Compliance measures will be essential as other states may adopt similar regulations.
The legislation aims to balance the opportunities AI presents with potential risks across various sectors, including healthcare, privacy, and public safety, reflecting a proactive approach to regulate AI effectively.