In recent years, AI’s role in healthcare has grown rapidly. More than 100 AI-related healthcare bills were introduced across U.S. states in 2024. Jared Augenstein, a senior managing director at Manatt Health, said these bills focus on transparency, regulation of payer use, prevention of discrimination, and oversight of clinical decision-making. By early 2025, more than a dozen additional bills had been introduced in states such as New York, Texas, and Illinois.
California, Colorado, and Utah have already passed laws that affect AI use in healthcare. For example, California mandates disclosure when physicians and organizations use generative AI, Colorado imposes significant requirements on developers of AI tools used in high-risk situations, and Utah requires disclosure when generative AI is used in regulated professions, including healthcare.
These laws reflect a national movement toward safe and fair AI use, focused on keeping patients informed and protecting their rights.
A major concern raised by the American Medical Association (AMA) and healthcare experts is AI systems making decisions that limit or deny patient care. If those decisions happen without physician oversight, important medical details can be missed.
The AMA’s updated policy states that any AI recommendation denying medically necessary care must be reviewed by a licensed physician in the relevant specialty before it becomes final. This check matters because it ensures each patient’s individual circumstances are considered and guards against inappropriate denials, delays, and barriers to access.
Michael Suk, MD, Chair of the AMA Board of Trustees, emphasizes that physicians must take part in AI development and policy discussions. Ensuring that AI supports physicians rather than replacing them keeps its use ethical and medically sound.
Healthcare holds new technology to high standards because decisions affect patients’ lives. Justin Norden, MD, founder and CEO of Qualified Health, says the healthcare field is not prepared for how fast AI is changing: unlike traditional medical tools, which go through slow, deliberate approval processes, AI systems update frequently.
This rapid pace of change brings challenges: it demands continuous adaptation and ongoing governance, and it complicates safe, effective implementation in clinical settings.
The AMA supports careful but steady AI adoption. By first applying AI to administrative and support tasks, healthcare organizations can gather evidence, train staff, and establish rules that reduce risk before extending AI to clinical decisions.
AI automation can meaningfully ease administrative work and busy front desks. Companies like Simbo AI offer AI-powered phone automation and answering services for medical offices, examples of how AI can support healthcare operations safely and efficiently.
AI automation in front-office roles can answer and route phone calls and handle low-risk administrative work such as claims processing and quality reporting.
But AI should never replace physicians’ judgment in clinical care. Healthcare leaders should ensure that AI systems embedded in workflows route clinical questions and care denials to qualified physicians for decision.
Good AI plans combine automation of routine administrative tasks with clear escalation paths, so that clinical questions and denials always reach a qualified physician for review.
IT managers and administrators need to work closely with clinical staff to set policies for AI use, so that AI supports care without disrupting it.
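The routing rule described above, where administrative work may proceed automatically but clinical questions and care denials must go to a physician, can be sketched in a few lines. This is a minimal illustration of the policy, not a real product API; the `AIRecommendation` type and `route_recommendation` function are hypothetical names invented for this example.

```python
from dataclasses import dataclass

@dataclass
class AIRecommendation:
    """Hypothetical record of an AI system's suggested action."""
    patient_id: str
    action: str          # e.g. "approve", "deny", "schedule_followup"
    is_clinical: bool    # does this touch a clinical decision?

def route_recommendation(rec: AIRecommendation) -> str:
    """Route AI output: low-risk administrative tasks may proceed
    automatically, but any clinical question or care denial is held
    for a licensed physician to decide."""
    if rec.is_clinical or rec.action == "deny":
        return "physician_review"   # human decides before anything is final
    return "auto_process"           # routine front-office work

# An AI-suggested denial is never finalized automatically.
print(route_recommendation(AIRecommendation("p1", "deny", False)))
# → physician_review

# A routine scheduling task can proceed on its own.
print(route_recommendation(AIRecommendation("p2", "schedule_followup", False)))
# → auto_process
```

The key design choice is that the denial check is unconditional: even an AI action not flagged as clinical is escalated if it would deny care, matching the AMA’s position that such determinations require physician review.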
States like California, Colorado, and Utah show growing efforts to make AI use in healthcare transparent. Their laws require that patients and providers be informed when AI is involved in clinical or administrative work.
Transparency supports accountability, informed consent, and protection against misuse of, or over-reliance on, automated systems operating without human oversight.
The AMA calls for human supervision alongside transparent AI use so that care remains fair and safe. Medical organizations must track new legislation to stay compliant and protect patients.
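Disclosure requirements like these are sometimes operationalized as a simple tagging step in patient-facing messaging. The sketch below is a hypothetical illustration, assuming plain-text messages; the function name and disclosure wording are invented for this example, not drawn from any statute or vendor.

```python
# Illustrative disclosure text; real wording would come from legal counsel
# and the applicable state law.
AI_DISCLOSURE = "Note: portions of this message were generated with AI assistance."

def with_disclosure(message: str, ai_generated: bool) -> str:
    """Append a plain-language disclosure whenever AI produced the content,
    so patients and clinicians know when AI was involved."""
    if ai_generated:
        return f"{message}\n\n{AI_DISCLOSURE}"
    return message
```

Centralizing the disclosure in one function keeps the notice consistent across every channel (phone transcripts, portal messages, letters) and makes it auditable.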
As AI expands in healthcare, physicians need a seat in policy and governance discussions. They bring clinical knowledge and patient perspectives that technology makers, administrators, and lawmakers may lack.
Having physicians involved helps keep AI clinically relevant, addresses patient safety concerns, and balances technological innovation with ethical, individualized patient care.
The AMA recommends involving physicians in AI policy and decision-making, and healthcare leaders should prioritize these partnerships to build responsible AI programs.
For healthcare practice leaders, owners, and IT managers in the U.S., the message is clear: AI can benefit healthcare when used carefully, but physician oversight is required wherever clinical decisions are involved.
Key steps include starting AI adoption with administrative and front-office tasks, requiring physician review of any AI-driven denial of care, disclosing AI use to patients and providers, and involving physicians in AI governance.
Following these steps helps U.S. healthcare practices harness AI while protecting patient rights, maintaining care quality, and staying within legal requirements.
State legislatures are actively introducing bills regulating AI in health care, focusing on transparency, regulation of payer use, discrimination prevention, and clinical decision-making oversight, reflecting the rapid legislative response to balance innovation with patient protections.
Transparency ensures that patients and healthcare providers are aware when AI tools are used, particularly in decision-making processes, allowing for accountability, informed consent, and safeguarding against misuse or over-reliance on automated systems without human oversight.
Physicians must oversee AI-generated recommendations, especially those limiting or denying care. Any AI decision should be reviewed by a licensed physician in the relevant specialty before final determinations to ensure individual patient needs are considered.
California mandates disclosure of generative AI use by physicians and organizations; Colorado imposes significant requirements on AI tool developers in high-risk situations; Utah requires disclosure when generative AI is used in regulated professions, including healthcare, emphasizing consumer protections.
The AMA worries AI may increase denials of medically necessary care, cause delays, and create access barriers by automating decisions without nuanced understanding of individual patient conditions, threatening quality and equity in healthcare delivery.
Healthcare is unaccustomed to the fast pace of AI change; unlike traditional medical tools, which are approved once and used for years, AI systems update continuously. This demands ongoing adaptation and governance, complicating safe, effective implementation in clinical settings.
The AMA envisions AI as a tool that enhances patient experience and clinical outcomes, supporting physicians rather than burdening them, ensuring technology aligns with medical standards and ethical care delivery.
Automated denials should be automatically referred for review by a qualified physician who can assess medical necessity considering each patient’s unique circumstances before any final decision.
Organizations should start by deploying AI for low-risk tasks like claims processing and quality reporting, allowing observation of AI behavior in less critical areas before expanding its clinical use.
Inclusion of physicians ensures AI development and use maintains clinical relevance, addresses patient safety concerns, and balances technological innovation with ethical, individualized patient care requirements.