Several states have enacted laws or issued guidance that affect AI systems in healthcare. California has the most detailed framework, but Oregon, Massachusetts, New Jersey, Texas, and Colorado also have important requirements. These laws govern how protected health information (PHI) and other personal data may be collected, used, and shared when AI is involved.
California’s rules for healthcare AI are among the most comprehensive in the country. On January 13, 2025, the California Attorney General issued a legal advisory explaining how healthcare providers, insurers, and AI developers must use AI. The advisory states that AI systems must comply with the state’s consumer protection, anti-discrimination, and patient privacy laws, including the Confidentiality of Medical Information Act (CMIA) and the California Consumer Privacy Act (CCPA).
The advisory also calls for regular testing and auditing of AI systems to confirm that they operate safely and comply with the law. Patients must be told when AI is used in their care or in decisions about it, and whether their information is used to train AI systems.
Newer laws such as SB 942, AB 2013, and SB 1120 add further requirements. They mandate disclosure of AI training data and require licensed physicians to supervise AI involvement in healthcare decisions. These laws keep AI from acting without appropriate human oversight and help guard against bias or unfair treatment in AI-driven outcomes.
Oregon, Massachusetts, and New Jersey have adopted their own rules addressing AI transparency, data protection, consent, and the prevention of discrimination.
Other states, including Texas and Colorado, are pressing for greater AI transparency and holding companies accountable for AI’s effects on consumers, a sign that state-level scrutiny of AI is intensifying.
Privacy and consent are central to state laws governing AI in healthcare. Healthcare organizations must ensure that AI does not use or share patient data without authorization. The core requirements for lawful AI use include informed consent, data minimization, data security, and nondiscrimination, each discussed below.
Many state laws require patients to give clear, affirmative consent before personal or sensitive data is collected, used, or shared with AI systems. Patients need to know that AI is involved, how their data will be used, and whether it will be used to train AI models.
For example, under California’s CCPA and Oregon’s OCPA, passive consent (such as simply failing to opt out) is not enough. Healthcare providers must clearly explain the AI’s role and obtain patient permission before using their data, especially for sensitive categories such as genetic or biometric information.
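As an illustration of what affirmative consent can look like in practice, the sketch below shows one way a system might refuse to send patient data to an AI service unless an explicit, recorded opt-in exists. It is a minimal example under stated assumptions: the `ConsentRecord` structure, the scope names, and `AIConsentError` are hypothetical and are not drawn from any statute or vendor API.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical consent record; a real system would pull this from the EHR
# or a consent-management platform.
@dataclass
class ConsentRecord:
    patient_id: str
    scope: str                     # e.g. "ai_scribe", "ai_training"
    granted: bool                  # must be an affirmative opt-in, not a default
    granted_at: Optional[datetime] = None
    revoked_at: Optional[datetime] = None

class AIConsentError(Exception):
    """Raised when no valid, affirmative consent exists for an AI use."""

def require_consent(records: list[ConsentRecord], patient_id: str, scope: str) -> ConsentRecord:
    """Return the active consent for this patient and AI use, or raise.

    Absence of a record (or a revoked one) counts as no consent; passive
    or implied consent is never treated as sufficient.
    """
    for rec in records:
        if (rec.patient_id == patient_id and rec.scope == scope
                and rec.granted and rec.revoked_at is None):
            return rec
    raise AIConsentError(f"No affirmative consent on file for {scope}")

# Usage: check consent before any data leaves the practice for an AI service.
consents = [ConsentRecord("p-001", "ai_scribe", True, datetime(2025, 3, 1))]
require_consent(consents, "p-001", "ai_scribe")        # OK
# require_consent(consents, "p-001", "ai_training")    # raises AIConsentError
```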
Another key principle is data minimization: AI should use only the data needed for the healthcare purpose at hand. Collecting extra or unrelated data may violate these laws, so healthcare staff and AI developers must review their data practices carefully.
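One simple way to operationalize data minimization is an allow-list: only the fields the AI task actually needs are forwarded, and everything else is dropped before the data leaves the organization. The field names and the `SCHEDULING_FIELDS` allow-list below are illustrative assumptions, not a legal standard.

```python
# Hypothetical allow-list for an appointment-scheduling assistant: the task
# needs contact and scheduling details, not diagnoses or genetic data.
SCHEDULING_FIELDS = {"patient_id", "name", "phone", "preferred_times", "visit_type"}

def minimize(record: dict, allowed_fields: set[str]) -> dict:
    """Return a copy of the record containing only the allowed fields."""
    return {k: v for k, v in record.items() if k in allowed_fields}

full_record = {
    "patient_id": "p-001",
    "name": "Jane Doe",
    "phone": "555-0100",
    "preferred_times": ["Tue AM"],
    "visit_type": "follow-up",
    "diagnoses": ["E11.9"],       # not needed for scheduling, so dropped
    "genetic_results": "...",     # sensitive and unrelated, so dropped
}

print(minimize(full_record, SCHEDULING_FIELDS))
```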
AI systems in healthcare must protect data against breaches and unauthorized disclosure, because health data is among the most sensitive information an organization holds. State laws require organizations to put reasonable safeguards in place to protect it.
Healthcare organizations must also comply with federal laws such as HIPAA alongside state laws. Where a state law is stricter, organizations must meet the higher standard.
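As a small illustration of the technical-safeguards idea, the sketch below records every time an AI tool reads a patient record so that access can be reviewed later. It is a minimal sketch assuming an in-memory log; a production system would write to tamper-evident, access-controlled storage, and the function and field names here are invented for illustration.

```python
import json
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []   # stand-in for durable, tamper-evident storage

def log_ai_access(tool: str, patient_id: str, purpose: str, fields: list[str]) -> None:
    """Record which AI tool read which fields of which record, and why."""
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "patient_id": patient_id,
        "purpose": purpose,
        "fields": fields,
    })

log_ai_access("scheduling-assistant", "p-001", "appointment booking",
              ["name", "phone", "preferred_times"])
print(json.dumps(AUDIT_LOG, indent=2))
```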
Using AI fairly means preventing bias. AI trained on biased or incomplete data may treat people unfairly on the basis of race, gender, disability, or other protected characteristics.
States enforce laws such as California’s Unruh Civil Rights Act and New Jersey’s Law Against Discrimination to keep AI from producing discriminatory decisions in healthcare, insurance, or patient communication. AI systems must be tested and audited regularly to catch these problems.
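Regular testing for unfair outcomes can be as simple as comparing a model’s accuracy across demographic groups and flagging large gaps for human review. The sketch below shows one way to do that; the gap threshold and group labels are illustrative assumptions, not regulatory standards.

```python
from collections import defaultdict

def accuracy_by_group(examples: list[dict]) -> dict[str, float]:
    """Compute accuracy separately for each demographic group."""
    totals: dict[str, list[int]] = defaultdict(lambda: [0, 0])  # group -> [correct, total]
    for ex in examples:
        totals[ex["group"]][0] += int(ex["prediction"] == ex["label"])
        totals[ex["group"]][1] += 1
    return {g: correct / total for g, (correct, total) in totals.items()}

def flag_disparities(per_group: dict[str, float], max_gap: float = 0.05) -> list[str]:
    """Flag groups whose accuracy trails the best-performing group by more than max_gap."""
    best = max(per_group.values())
    return [g for g, acc in per_group.items() if best - acc > max_gap]

# Toy evaluation set with model predictions and ground-truth labels.
results = [
    {"group": "A", "prediction": 1, "label": 1},
    {"group": "A", "prediction": 0, "label": 0},
    {"group": "B", "prediction": 1, "label": 0},
    {"group": "B", "prediction": 0, "label": 0},
]
per_group = accuracy_by_group(results)
print(per_group, flag_disparities(per_group))
```

Flagged gaps do not prove discrimination on their own, but they identify where a human reviewer should look before the system keeps making decisions.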
AI in healthcare does more than support diagnosis and treatment. It is also used to automate tasks such as appointment scheduling, phone answering, patient messaging, and billing, and some companies now offer AI built specifically for front-office work. These automation tools can reduce staff workload and improve the patient experience, but they also raise privacy concerns.
When AI automates patient interactions or collects data over the phone, medical offices must apply the same consent, disclosure, and data-handling obligations that govern any other use of patient information. Front-desk AI tools must also comply with state and federal privacy rules such as HIPAA and the state consumer privacy laws described above. Done well, AI automation can reduce administrative work without creating privacy or legal risk.
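For front-office automation, a simple privacy-protective pattern is to identify the assistant as AI up front, offer a path to a human, and record that the disclosure was delivered. The greeting text, practice name, and logging function below are illustrative only; the actual wording should come from counsel and the applicable state rules (California’s AB 3030 and AB 489, discussed later, address how AI-generated communications must be identified).

```python
from datetime import datetime, timezone

def ai_greeting(practice_name: str) -> str:
    """Return an opening message that identifies the assistant as AI
    and offers a human alternative. Wording is illustrative only."""
    return (
        f"Hello, this is the automated scheduling assistant for {practice_name}. "
        "I am an AI system, not a licensed provider or staff member. "
        "Say 'representative' at any time to reach a person."
    )

def log_disclosure(call_id: str, channel: str, log: list[dict]) -> None:
    """Record that the AI disclosure was delivered on this interaction."""
    log.append({
        "call_id": call_id,
        "channel": channel,                      # e.g. "phone", "sms", "chat"
        "disclosed_ai": True,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

disclosure_log: list[dict] = []
print(ai_greeting("Example Family Medicine"))     # hypothetical practice name
log_disclosure("call-1234", "phone", disclosure_log)
```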
Healthcare administrators and IT managers are responsible for making sure AI tools comply with these laws. Legal experts recommend risk assessments, written security plans, and careful vendor management as the way to handle obligations that now span many state laws and AI rules. Practices that fall short may face fines, lawsuits, loss of patient trust, and operational disruption.
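A lightweight way to keep vendor management and risk checks from falling through the cracks is to track each AI tool against a fixed checklist and surface anything unreviewed. The checklist items below paraphrase themes from this article (agreements, security documentation, consent workflows, audits); they are an illustrative sketch, not an exhaustive or authoritative compliance list.

```python
from dataclasses import dataclass, field

# Illustrative review items drawn from the themes above; not legal advice.
CHECKLIST = (
    "signed business associate agreement (BAA)",
    "written security plan reviewed",
    "patient consent and disclosure workflow in place",
    "bias and accuracy audit completed in the last 12 months",
    "data minimization and retention policy documented",
)

@dataclass
class AIToolReview:
    vendor: str
    tool: str
    completed: set[str] = field(default_factory=set)

    def outstanding(self) -> list[str]:
        """Checklist items not yet satisfied for this tool."""
        return [item for item in CHECKLIST if item not in self.completed]

review = AIToolReview("ExampleVendor", "front-office voice assistant")
review.completed.add("signed business associate agreement (BAA)")
for item in review.outstanding():
    print("TODO:", item)
```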
AI in healthcare can improve outcomes, reduce workloads, and increase patient engagement, but state privacy laws mean the technology must be used carefully, transparently, and with respect for patient rights.
California’s detailed rules, with their focus on consent, fairness, and data security, serve as a guide for other states and for healthcare providers on managing AI responsibly.
Healthcare managers and IT staff must keep up with changing laws, monitor privacy practices closely, and adopt policies that govern AI use. Careful governance lets AI support healthcare without violating privacy rules or creating legal exposure.
By making compliance a priority, healthcare providers can safely adopt AI tools such as front-office automation while protecting patient data and maintaining public trust.
The California AG issued a legal advisory outlining obligations under state law for healthcare AI developers and users, addressing consumer protection, anti-discrimination, and patient privacy laws to ensure AI systems are lawful, safe, and nondiscriminatory.
The Advisory highlights risks including unlawful marketing, the unlicensed practice of medicine by AI, discrimination based on protected traits, improper use and disclosure of patient information, inaccuracies in AI-generated medical notes, and decisions that disadvantage protected groups.
Entities should implement risk identification and mitigation processes, conduct due diligence on AI development and data, regularly test and audit AI systems, train staff on proper AI usage, and maintain transparency with patients on AI data use and decision-making.
California law mandates that only licensed human professionals may practice medicine. AI cannot independently make diagnoses or treatment decisions but may assist licensed providers who retain final authority, ensuring compliance with professional licensing laws and the corporate practice of medicine rules.
AI systems must not cause disparate impact or discriminatory outcomes against protected groups. Healthcare entities must proactively prevent AI biases and stereotyping, ensuring equitable accuracy and avoiding the use of AI that perpetuates historical healthcare barriers or stereotypes.
Multiple laws apply, including the Confidentiality of Medical Information Act (CMIA), the Genetic Information Privacy Act (GIPA), the Patient Access to Health Records Act, the Insurance Information and Privacy Protection Act (IIPPA), and the California Consumer Privacy Act (CCPA), all of which protect patient data and require proper consent and data handling.
Using AI to draft patient notes, communications, or medical orders containing false, misleading, or stereotypical information—especially related to race or other protected traits—is unlawful and violates anti-discrimination and consumer protection statutes.
The Advisory requires healthcare providers to disclose if patient information is used to train AI and explain AI’s role in health decision-making to maintain patient autonomy and trust.
New laws like SB 942 (AI detection tools), AB 3030 (disclosures for generative AI use), and AB 2013 (training data disclosures) regulate AI transparency and safety, while AB 489 aims to prevent AI-generated communications misleading patients to believe they are interacting with licensed providers.
States including Texas, Utah, Colorado, and Massachusetts have enacted laws or taken enforcement actions focusing on AI transparency, consumer disclosures, governance, and accuracy, highlighting a growing multi-state effort to regulate AI safety and accountability beyond California’s detailed framework.