SB 1047 was passed by the California Legislature to regulate the development of large AI models that could cause serious harm if misused. The bill calls these “covered models.” Under the law, a covered model is one trained using more than 10^26 floating-point operations (FLOP) of computing power at a training cost of over $100 million, a definition that applies before January 1, 2027. The law also covers certain derivative and fine-tuned models with lower, but still substantial, compute and cost thresholds.
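To give a sense of scale for the 10^26 threshold, the sketch below estimates training compute with the widely used rule of thumb that training takes roughly 6 × parameters × training tokens operations. The heuristic and the example model sizes are illustrative assumptions, not figures from the bill.

```python
# Rough estimate of training compute against SB 1047's 10^26 FLOP threshold.
# Assumption: the common heuristic FLOP ≈ 6 * parameters * training_tokens.

THRESHOLD_FLOP = 1e26  # covered-model compute threshold named in SB 1047


def estimated_training_flop(parameters: float, training_tokens: float) -> float:
    """Very rough training-compute estimate using the 6*N*D heuristic."""
    return 6 * parameters * training_tokens


# Hypothetical model sizes, chosen only for illustration.
examples = {
    "7B parameters, 2T tokens": estimated_training_flop(7e9, 2e12),
    "1T parameters, 20T tokens": estimated_training_flop(1e12, 2e13),
}

for name, flop in examples.items():
    status = "above" if flop > THRESHOLD_FLOP else "below"
    print(f"{name}: ~{flop:.1e} FLOP ({status} the 1e26 threshold)")
```

Under this rough heuristic, only very large training runs approach the threshold, which is consistent with the bill’s focus on the largest developers.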
Most healthcare organizations do not build AI themselves; they rely on AI vendors and developers, many of them based in California, which is home to 32 of the world’s top 50 AI companies. These companies provide tools for medical testing, patient scheduling, billing, and clinical decision support, and many of those tools come from developers covered by SB 1047’s rules.
The bill requires developers to make sure their AI systems are safe and secure. This is especially important when AI supports tasks that can affect patient safety, privacy, or hospital operations.
A central requirement of SB 1047 is that AI developers put strong cybersecurity measures in place before deployment and maintain them for as long as the system is in use. These measures are meant to prevent unauthorized access, misuse, and harm caused by the AI.
Developers must set up administrative, technical, and physical cybersecurity safeguards, written safety and security protocols, the ability to fully shut down a covered model, risk assessments before deployment, and procedures for reporting AI safety incidents to the California Attorney General.
These rules show that AI makers may have legal responsibility, especially in high-risk areas like healthcare.
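One concrete example of these safeguards is the full-shutdown capability the law requires. The sketch below shows one possible way a serving layer could gate requests on a shutdown flag; the names (ShutdownController, serve_request) and the design are hypothetical, not a requirement spelled out in the bill or any vendor’s implementation.

```python
# Illustrative only: one way a model-serving layer might honor a
# "full shutdown" control of the kind SB 1047 calls for.
# ShutdownController and serve_request are hypothetical names.

import threading


class ShutdownController:
    """Process-wide flag an operator can set to halt model serving."""

    def __init__(self) -> None:
        self._stopped = threading.Event()

    def full_shutdown(self) -> None:
        # An operator or an automated safeguard flips the flag here.
        self._stopped.set()

    def is_stopped(self) -> bool:
        return self._stopped.is_set()


controller = ShutdownController()


def serve_request(prompt: str) -> str:
    if controller.is_stopped():
        raise RuntimeError("Model is shut down; request refused.")
    # ... call the model here ...
    return "model output"
```

In a real deployment the flag would live in shared infrastructure so that every replica stops at once, but the idea of checking a single authoritative control before serving is the same.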
Healthcare providers are using AI more for tasks like scheduling patients and answering phones. Some companies like Simbo AI make phone systems that reduce work for staff and improve patient communication.
SB 1047 means AI providers must make sure these systems work safely and do not risk patient data or cause interruptions. Even though the law focuses on big AI developers, healthcare managers must check if their AI suppliers follow these safety and security rules.
For example, a medical office using Simbo AI’s phone system must ensure the system protects patient details from being stolen or leaked. These details may include appointment times, personal information, or health records.
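As a simplified illustration of protecting such details at rest, the sketch below encrypts an appointment record with the third-party Python cryptography package. The field names and workflow are made up for this example and do not describe Simbo AI’s actual design; in practice the key would be held in a dedicated key-management service.

```python
# Minimal sketch: encrypting appointment details at rest.
# Assumes the "cryptography" package is installed (pip install cryptography).
# Field names and the workflow are illustrative only.

import json

from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, held in a key-management service
cipher = Fernet(key)

appointment = {"patient": "Jane Doe", "time": "2025-03-04T09:30", "reason": "follow-up"}

# Store only the ciphertext; decrypt when an authorized workflow needs it.
token = cipher.encrypt(json.dumps(appointment).encode("utf-8"))
restored = json.loads(cipher.decrypt(token).decode("utf-8"))

assert restored == appointment
```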
Choosing AI providers that follow SB 1047 helps medical offices lower risks and keep their operations running smoothly.
AI is changing how medical offices handle tasks like scheduling appointments, reminding patients, billing, and paperwork. Tools from companies such as Simbo AI help make these tasks easier and better for patients.
But using AI in healthcare needs careful attention to safety and cybersecurity. SB 1047 says AI developers must follow strict rules that affect medical offices using these tools.
By working closely with AI vendors that follow SB 1047, healthcare providers can use AI safely for tasks like phone answering without big cybersecurity risks.
SB 1047 sets clear legal rules to hold AI developers responsible. They must keep records of safety steps at every stage of AI development. They also need to do risk checks and set up strong cybersecurity measures.
The California Attorney General can enforce these rules by imposing fines, ordering fixes, or other actions. This puts pressure on developers to manage risks carefully.
The law also includes whistleblower protections for employees who report noncompliance, annual third-party audits, compliance statements submitted to the Attorney General, and new regulations and guidance from the Government Operations Agency (GovOps) on covered-model thresholds.
These oversight steps help keep the public safe by making sure AI providers take security seriously and can be held accountable if they fail.
Even though SB 1047 targets AI developers, healthcare organizations using AI are affected too. Hospital leaders and IT managers should consider whether their AI suppliers follow SB 1047-style safety and security practices, how patient data is protected in vendor systems, how service interruptions would be handled, and whether staff are trained to use AI tools safely.
Governor Gavin Newsom vetoed SB 1047, arguing that the size and cost of a model alone should not determine which AI systems are regulated. He favors rules that consider how AI is used and the sensitivity of the data it handles rather than model size.
Healthcare groups need to watch for new AI rules in California that balance new tech with safety. Hospital leaders and IT managers should get ready for stricter cybersecurity rules for AI suppliers. They should also expect more transparency about AI training data, risks, and compliance.
Other California AI laws, such as AB 2013 on AI training-data transparency and SB 942 on labeling AI-generated content, complement these cybersecurity rules. Together, they help build a safer AI environment.
Taking these steps helps medical managers and IT teams handle AI safely and meet current and upcoming legal rules.
California’s SB 1047 law shows how important it is to develop AI safely when it affects public safety and key systems like healthcare. For healthcare providers using AI in front-office and clinical work, knowing and applying these cybersecurity rules is key to protecting patient data, keeping services working, and following laws. With good risk management, vendor checks, and staff training, healthcare groups can safely use AI tools that improve care while reducing cybersecurity risks.
The SB 1047 legislation aims to establish a safety and security regime for AI developers concerning models that may cause critical harms to public safety, following similar frameworks from the White House’s AI Executive Order.
A ‘covered model’ is defined as an AI model trained using more than 10^26 FLOP of computing power at a training cost exceeding $100 million, with specific classifications for derivatives and fine-tuned models.
Critical harms include mass casualties or at least $500 million in damage caused by an AI model, particularly through CBRN weapons, cyberattacks on critical infrastructure, or a model committing serious crimes with limited human oversight.
AI developers must report ‘AI safety incidents’ to the California Attorney General within 72 hours of discovering events that could increase the risk of critical harms (a simple deadline sketch appears at the end of this section).
Developers must implement administrative, technical, and physical cybersecurity measures to prevent unauthorized access and misuse, and must maintain the ability to fully shut down the model.
Developers must perform critical harm assessments to evaluate risks, retain results, and ensure the model’s usage is safe before commercial deployment.
Developers must annually evaluate safety protocols, undergo third-party audits, submit compliance statements, and ensure whistleblower protections for employees reporting noncompliance.
Compliance will be enforced through annual third-party audits, regular reporting to the Attorney General, and designated senior personnel responsible for adherence to safety protocols.
GovOps will issue new regulations regarding computational thresholds for covered models and provide guidance on preventing risks of critical harms by January 2027.
The valuation threshold is intended to exclude smaller companies, limiting their compliance burden while focusing the requirements on larger entities capable of high-stakes AI deployments.
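As referenced above for the 72-hour incident-reporting rule, here is a small sketch that computes the reporting deadline from a discovery timestamp. It uses only the Python standard library; the timestamps and workflow are illustrative, not language from the bill.

```python
# Simple deadline check for SB 1047's 72-hour incident-reporting window.
# Timestamps and the workflow are illustrative only.

from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)


def reporting_deadline(discovered_at: datetime) -> datetime:
    """Latest time a report to the Attorney General would still be timely."""
    return discovered_at + REPORTING_WINDOW


discovered = datetime(2025, 3, 4, 14, 0, tzinfo=timezone.utc)
deadline = reporting_deadline(discovered)
now = datetime.now(timezone.utc)

print(f"Report due by {deadline.isoformat()}")
print("Still within the window" if now <= deadline else "Window has passed")
```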