Identifying and mitigating key risks associated with AI in healthcare, including unlawful practice of medicine, inaccurate medical documentation, and discriminatory impacts on protected groups

In early 2025, California’s Attorney General, Rob Bonta, issued a legal advisory explaining how existing state law applies to AI in healthcare. The advisory stresses that AI must comply with state laws on consumer protection, anti-discrimination, and patient privacy. Healthcare organizations that develop, sell, or use AI must ensure their systems do not practice medicine unlawfully, discriminate against protected groups, or misuse patient data.

California’s framework includes the Unfair Competition Law (UCL), which prohibits false advertising and deceptive claims about AI; professional licensing laws that bar AI from diagnosing or treating patients without a licensed clinician’s involvement; anti-discrimination laws that protect vulnerable groups from AI bias; and privacy laws, such as the Confidentiality of Medical Information Act (CMIA) and the California Consumer Privacy Act (CCPA), which safeguard patient data and require transparency about how AI is used.

With California leading this effort, states such as Texas, Colorado, Utah, and Massachusetts have enacted similar laws or taken enforcement actions focused on fairness, transparency, and consumer disclosures about AI. Healthcare leaders must understand this growing body of law to avoid lawsuits, fines, and loss of patient trust.

Risks of Unlawful Practice of Medicine by AI

One major risk is AI performing work that amounts to the practice of medicine. Under California law, only licensed healthcare professionals may make diagnoses or treatment decisions. AI can assist these professionals but should not replace or override them. For example, AI chatbots or phone systems can help with scheduling or basic questions but should not give medical advice.

Unlawful practice can occur when AI produces recommendations that read like professional medical opinions without a human reviewing them, which exposes the organization to legal liability. Medical managers should set up rules so that a licensed provider reviews AI output before any patient care decision is made or shared.
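
As a rough sketch of how such a rule might be enforced in software, the Python example below blocks an AI-generated draft from reaching a patient until a licensed provider has reviewed and approved it. The record and function names are hypothetical, not part of any particular product.

```python
# A minimal sketch of a human-in-the-loop review gate for AI output.
# The AIRecommendation record and workflow names are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class AIRecommendation:
    patient_id: str
    text: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    reviewed_by: Optional[str] = None   # licensed provider's ID, once reviewed
    approved: bool = False


def review(rec: AIRecommendation, provider_license_id: str, approve: bool) -> None:
    """Record a licensed provider's review decision on an AI-generated draft."""
    rec.reviewed_by = provider_license_id
    rec.approved = approve


def release_to_patient(rec: AIRecommendation) -> str:
    """Only reviewed and approved AI output may be shared with the patient."""
    if rec.reviewed_by is None or not rec.approved:
        raise PermissionError("AI draft requires review by a licensed provider before release")
    return rec.text
```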

Challenges with Inaccurate AI-Generated Medical Documentation

Another problem is inaccurate or unclear medical documentation generated by AI. The California advisory warns about errors in AI-drafted patient notes and messages. Inaccurate documentation can lead to poor care, billing problems, legal exposure, and risks to patient safety.

Hospitals using AI for documentation should regularly check and test their systems for errors. IT staff should ensure AI outputs are reviewed and reconciled against the original source data, and staff need training to spot AI mistakes or biased language, especially regarding race or disability.
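
One simple way to support that reconciliation is to flag facts in an AI-drafted note that never appear in the source encounter data. The sketch below illustrates the idea for numeric values only (doses, vitals, dates); it is an illustrative starting point under those assumptions, not a complete documentation-validation tool.

```python
# A minimal sketch of reconciling an AI-drafted note against source data,
# assuming both are plain text; the numeric-fact check is illustrative only.
import re


def unsupported_numbers(ai_note: str, source_text: str) -> list[str]:
    """Return numeric values (doses, vitals, dates) in the AI note that never
    appear in the source encounter data, so a reviewer can verify them."""
    note_numbers = set(re.findall(r"\d+(?:\.\d+)?", ai_note))
    source_numbers = set(re.findall(r"\d+(?:\.\d+)?", source_text))
    return sorted(note_numbers - source_numbers)


if __name__ == "__main__":
    note = "Patient reports pain 7/10; prescribed ibuprofen 800 mg."
    source = "Pain rated 7 out of 10. Plan: ibuprofen 600 mg."
    print(unsupported_numbers(note, source))  # ['800'] -> flag for human review
```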

Poorly monitored AI documentation systems can introduce stereotypes or factual errors into the record. This can harm patients and violate consumer protection and anti-discrimination laws. Being open about AI’s role in records is important for maintaining patient trust and meeting regulatory requirements.

Discriminatory Impacts on Protected Groups

The most significant social and legal risk of AI in healthcare is discriminatory treatment. AI learns from historical data that may carry biases related to income, race, ethnicity, or disability. If those biases are not addressed, AI can treat some groups unfairly.

For instance, AI might help decide who receives certain services or insurance coverage. This could improperly block minority or disabled patients from getting care or add hurdles for them. California law prohibits such disparate impacts and requires healthcare organizations to find and fix AI bias.

Ways to comply include checking AI for bias during development and deployment, auditing algorithms regularly, and using diverse, representative data for AI training. Staff should learn about discrimination risks so they can spot and handle AI problems.
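
As one illustration of what a bias check could look like, the sketch below compares approval rates across demographic groups and flags any group whose rate falls below the common "four-fifths" rule of thumb. The group labels, data format, and threshold are assumptions for the example, and passing such a check does not by itself establish legal compliance.

```python
# A minimal sketch of a disparate-impact audit on AI-assisted decisions, using
# the "four-fifths" rule of thumb; labels and threshold are illustrative only.
from collections import defaultdict


def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions is a list of (group_label, was_approved) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}


def flag_disparate_impact(decisions: list[tuple[str, bool]], ratio: float = 0.8) -> list[str]:
    """Flag groups whose approval rate falls below `ratio` times the highest rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return [g for g, r in rates.items() if best > 0 and r / best < ratio]
```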

Transparency and Patient Communication: Maintaining Trust in AI Use

Transparency with patients is essential under California law and similar statutes. Providers must tell patients when AI uses their data or helps make care decisions, and patients have the right to know whether their records are used to train or run AI tools.

In practice, this means giving clear notices about AI, obtaining consent when required, and letting patients see how AI uses their data. These steps support patient autonomy, prevent misunderstandings about AI, and protect healthcare organizations from complaints.
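
A small sketch of how an organization might keep an auditable record of those notices appears below. The field names are hypothetical; an actual implementation would live in the EHR or consent-management system and follow legal counsel's guidance.

```python
# A minimal sketch of recording AI-use disclosures and patient consent.
# Field names are illustrative assumptions, not a specific system's schema.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class AIDisclosure:
    patient_id: str
    ai_tool: str               # e.g., "phone scheduling assistant"
    purpose: str               # e.g., "appointment scheduling", "note drafting"
    data_used: str             # plain-language description of the data involved
    consent_given: bool
    recorded_at: str


def record_disclosure(patient_id: str, ai_tool: str, purpose: str,
                      data_used: str, consent_given: bool) -> AIDisclosure:
    """Create an auditable record that the patient was told about the AI tool."""
    return AIDisclosure(patient_id, ai_tool, purpose, data_used, consent_given,
                        datetime.now(timezone.utc).isoformat())
```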

AI and Automation in Front-Office Healthcare Workflows

AI is often used to automate front-office work such as scheduling, answering patient questions, and registration. Companies like Simbo AI build AI phone systems that help run these tasks more efficiently.

These AI tools can handle high call volumes, route patient questions to the right place, and answer common questions quickly without consuming staff time. Automating routine tasks cuts mistakes, shortens wait times, and lets staff focus on harder problems.

But there are risks if AI gives wrong or incomplete answers, misunderstands patient needs, or fails to alert humans to urgent issues. To lower these risks, healthcare organizations should monitor AI closely, keep backup options for connecting patients to a person, and keep AI systems updated with current rules and medical information.
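
The sketch below shows one way such an escalation rule might look in code, assuming the AI assistant reports a confidence score alongside each answer. The keyword list and threshold are illustrative assumptions, not a clinical triage standard.

```python
# A minimal sketch of escalation logic for an AI front-office assistant.
# Keywords and the confidence threshold are illustrative assumptions.
URGENT_KEYWORDS = {"chest pain", "bleeding", "overdose", "can't breathe", "suicidal"}


def should_escalate(caller_text: str, ai_confidence: float, threshold: float = 0.75) -> bool:
    """Route the call to a human when the request sounds urgent or the AI is unsure."""
    text = caller_text.lower()
    if any(keyword in text for keyword in URGENT_KEYWORDS):
        return True
    return ai_confidence < threshold


def handle_call(caller_text: str, ai_answer: str, ai_confidence: float) -> str:
    if should_escalate(caller_text, ai_confidence):
        return "Transferring you to a staff member now."
    return ai_answer
```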

Training front-office staff to work with AI also helps them understand its limits and know when to step in. This approach keeps automation useful while keeping patient care safe and consistent.

Risk Identification and Mitigation Strategies for Healthcare AI

  • Risk Identification: Healthcare leaders and IT staff must understand AI architectures, data sources, and decision methods, including where training data comes from, how AI is used across the organization, and where bias could arise.
  • Regular Testing and Validation: Check AI outputs often to ensure they are accurate, fair, and compliant with the law, using in-house reviews and outside experts when possible (see the validation sketch below).
  • Staff Training: Teach clinicians, administrative staff, and IT teams what AI can do, its legal limits, and how to oversee it to prevent errors or unlawful practice.
  • Transparency Initiatives: Clearly tell patients about AI use, obtain their permission when needed, and keep AI communications honest and easy to understand.
  • Ethical and Legal Compliance Frameworks: Ensure AI use aligns with licensing rules, anti-discrimination laws, and privacy requirements.

These steps help healthcare organizations adopt AI tools safely while lowering risks such as unlawful practice, inaccurate documentation, and bias.
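
For the regular testing and validation step, one simple pattern is a recurring check against a labeled test set, with each run logged for later audit. The sketch below assumes a hypothetical `model.predict` interface, an illustrative accuracy threshold, and a local log file; none of these are requirements of the laws discussed above.

```python
# A minimal sketch of a recurring validation check against a labeled test set.
# The model.predict interface, threshold, and log destination are assumptions.
import json
from datetime import datetime, timezone


def run_validation(model, test_cases: list[dict], min_accuracy: float = 0.95) -> dict:
    """test_cases: [{"input": ..., "expected": ...}, ...]; returns an audit record."""
    correct = sum(1 for case in test_cases if model.predict(case["input"]) == case["expected"])
    accuracy = correct / len(test_cases)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "cases": len(test_cases),
        "accuracy": accuracy,
        "passed": accuracy >= min_accuracy,
    }
    with open("ai_validation_log.jsonl", "a") as log:   # retained for compliance audits
        log.write(json.dumps(record) + "\n")
    return record
```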

The Role of Healthcare Lead Organizations and Experts

Law firms such as Wilson Sonsini track changes in AI law and help healthcare organizations follow the rules and reduce risk. The U.S. Department of Health and Human Services’ Office for Civil Rights also stresses nondiscrimination in AI, showing federal focus on this issue.

Healthcare leaders should stay current on these rules and work with legal counsel and AI experts to keep their AI use lawful and up to date.

Frequently Asked Questions

What legal guidance did the California Attorney General issue regarding AI use in healthcare?

The California AG issued a legal advisory outlining obligations under state law for healthcare AI developers and users, addressing consumer protection, anti-discrimination, and patient privacy laws to ensure AI systems are lawful, safe, and nondiscriminatory.

What are the key risks posed by AI in healthcare as highlighted by the California Advisory?

The Advisory highlights risks including unlawful marketing, AI practicing medicine unlawfully, discrimination based on protected traits, improper use and disclosure of patient information, inaccuracies in AI-generated medical notes, and decisions that disadvantage protected groups.

What steps should healthcare entities take to comply with California AI regulations?

Entities should implement risk identification and mitigation processes, conduct due diligence on AI development and data, regularly test and audit AI systems, train staff on proper AI usage, and maintain transparency with patients on AI data use and decision-making.

How does California law restrict AI practicing medicine?

California law mandates that only licensed human professionals may practice medicine. AI cannot independently make diagnoses or treatment decisions but may assist licensed providers who retain final authority, ensuring compliance with professional licensing laws and the corporate practice of medicine rules.

How do California’s anti-discrimination laws apply to healthcare AI?

AI systems must not cause disparate impact or discriminatory outcomes against protected groups. Healthcare entities must proactively prevent AI biases and stereotyping, ensuring equitable accuracy and avoiding the use of AI that perpetuates historical healthcare barriers or stereotypes.

What privacy laws in California govern the use of AI in healthcare?

Multiple laws apply, including the Confidentiality of Medical Information Act (CMIA), the Genetic Information Privacy Act (GIPA), the Patient Access to Health Records Act, the Insurance Information and Privacy Protection Act (IIPPA), and the California Consumer Privacy Act (CCPA), all of which protect patient data and require proper consent and data handling.

What is prohibited under California law regarding AI-generated patient communications?

Using AI to draft patient notes, communications, or medical orders containing false, misleading, or stereotypical information—especially related to race or other protected traits—is unlawful and violates anti-discrimination and consumer protection statutes.

How does the Advisory address transparency towards patients in AI use?

The Advisory requires healthcare providers to disclose if patient information is used to train AI and explain AI’s role in health decision-making to maintain patient autonomy and trust.

What recent or proposed California legislation addresses AI in healthcare?

New laws like SB 942 (AI detection tools), AB 3030 (disclosures for generative AI use), and AB 2013 (training data disclosures) regulate AI transparency and safety, while AB 489 aims to prevent AI-generated communications misleading patients to believe they are interacting with licensed providers.

How are other states regulating healthcare AI in comparison to California?

States including Texas, Utah, Colorado, and Massachusetts have enacted laws or taken enforcement actions focusing on AI transparency, consumer disclosures, governance, and accuracy, highlighting a growing multi-state effort to regulate AI safety and accountability beyond California’s detailed framework.