According to the American Medical Association (AMA), physician use of AI in the U.S. has grown quickly in recent years. In 2023, about 38% of physicians reported using some type of AI tool; by 2024 that figure had risen to 66%, reflecting growing interest in digital health. In addition, 68% of physicians see at least some advantage to using AI in their work.
The AMA refers to AI in healthcare as “augmented intelligence,” meaning that AI supports physicians’ decision-making without displacing the central role of human judgment in patient care.
Despite rising adoption and generally positive attitudes, significant concerns remain. Physicians and administrators worry about the transparency of AI decisions, the clinical evidence behind AI tools, data privacy, cybersecurity, and, above all, who is responsible when something goes wrong. These concerns must be addressed before AI can be adopted widely and safely.
A central problem is the absence of clear rules about responsibility when AI tools are used. Physicians remain accountable for their patients’ care, but when AI contributes to diagnosis or treatment, it is not clear who is liable if the technology leads to error or harm.
The AMA has begun addressing these problems. It is calling for clear guidelines on responsibility and accountability when physicians use AI, so that physicians understand both their role and their legal exposure. It also urges that AI systems be designed to support safe clinical decision-making.
Several liability questions remain open. AI errors can stem from algorithmic bias, poor-quality data, or security breaches, and liability rules must account for each of these failure modes. Without clear guidance, physicians may hesitate to adopt AI and forgo its benefits.
Clear rules are needed to build trust, protect patients, and encourage responsible use of AI. The AMA highlights key requirements such as transparency to physicians and patients, oversight of AI tools, guidance on physician liability, and protection of data privacy and cybersecurity. These elements are what regulators, health organizations, and AI developers need in order to move forward safely.
AI is also useful for automating front-office and administrative work. Many clinics struggle with long hold times, missed calls, scheduling errors, and staff burnout, all of which can affect patient care.
Companies such as Simbo AI build tools that answer phone calls and schedule appointments automatically. These systems use natural language processing and machine learning to handle patient calls and messages, even when staff are busy or off duty.
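To make the idea concrete, here is a minimal sketch of how such a system might triage an incoming patient message. This is not Simbo AI’s implementation; every name, keyword, and rule below is an illustrative assumption, and a real system would use a trained language model rather than keyword matching.

```python
# Minimal sketch (illustrative only) of triaging a patient message:
# decide the caller's intent and whether the AI may handle it alone
# or must escalate to a human staff member.

from dataclasses import dataclass

@dataclass
class RoutingDecision:
    intent: str                 # e.g., "schedule", "refill", "clinical_question"
    handle_automatically: bool  # True if the AI may complete the request itself
    reason: str

# Hypothetical safety keywords; a real deployment would use a vetted clinical list.
ESCALATION_KEYWORDS = {"chest pain", "bleeding", "emergency", "allergic"}

def route_message(transcript: str) -> RoutingDecision:
    text = transcript.lower()

    # Safety first: anything that sounds clinical or urgent goes to a human.
    if any(kw in text for kw in ESCALATION_KEYWORDS):
        return RoutingDecision("clinical_question", False, "urgent keyword detected")

    # Simple keyword-based intent detection, standing in for an NLP model.
    if any(word in text for word in ("appointment", "schedule", "reschedule")):
        return RoutingDecision("schedule", True, "routine scheduling request")
    if "refill" in text or "prescription" in text:
        return RoutingDecision("refill", False, "requires clinician review")

    return RoutingDecision("other", False, "unrecognized request")

if __name__ == "__main__":
    print(route_message("Hi, I need to reschedule my appointment for next week."))
```

The key design point, regardless of the underlying model, is the conservative default: anything the system cannot confidently classify as routine is routed to staff rather than handled automatically.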
Automation of this kind offers clear benefits: fewer missed calls, shorter waits, smoother scheduling, and less strain on staff.
However, these tools raise their own accountability questions. If the AI mishandles an appointment request, how does the clinic document the error? How does it verify that the AI provides accurate and safe information?
To manage these questions, clinics should set clear oversight policies, keep records of AI-handled interactions, and routinely review the system’s output for accuracy, as sketched below. Used carefully, AI automation can improve clinic operations while keeping risk in check.
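One concrete piece of that record-keeping is an audit trail. The sketch below assumes a clinic that wants a simple append-only log of AI-handled interactions; the schema, field names, and file format are illustrative assumptions rather than any vendor’s actual interface.

```python
# Minimal sketch (illustrative only) of an append-only audit log for
# AI-handled front-office interactions, written as JSON lines.

import json
from datetime import datetime, timezone

def log_ai_interaction(path: str, caller_id: str, intent: str,
                       ai_response: str, escalated_to_staff: bool) -> None:
    """Append one AI-handled interaction to a JSON-lines audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "caller_id": caller_id,            # internal identifier in this sketch
        "intent": intent,                  # e.g., "schedule", "refill"
        "ai_response": ai_response,        # what the system told the patient
        "escalated_to_staff": escalated_to_staff,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a scheduling call the AI handled on its own.
log_ai_interaction("ai_audit.log", caller_id="C-1042",
                   intent="schedule",
                   ai_response="Appointment confirmed for Tuesday 10:00 AM.",
                   escalated_to_staff=False)
```

A log like this gives staff something concrete to review when a patient disputes what the system said, and it makes periodic accuracy audits possible.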
Despite rising adoption, many health workers remain cautious. More than 60% hesitate to rely on AI fully, citing concerns about transparency, data protection, and security, along with unclear regulations and opaque decision-making.
A 2024 data breach demonstrated that no AI system is immune to attack, and security gaps and bias remain persistent problems for AI in healthcare.
Explainable AI (XAI) is one approach to building trust. XAI lets clinicians see how an AI system reaches its conclusions, which makes its recommendations easier to understand and verify. This is especially important in areas such as medical billing and coding, where AI is already in use.
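The core idea behind XAI can be shown with a toy example: the system reports not only a suggestion but also how much each input pushed it toward that suggestion. In the sketch below, the model, weights, and feature names are invented purely for illustration and do not represent any real billing or coding model.

```python
# Toy sketch (illustrative only) of the XAI pattern: return a prediction
# together with each feature's contribution, so a reviewer can see why
# the system suggested what it did.

FEATURE_WEIGHTS = {
    "visit_duration_minutes": 0.04,
    "num_diagnoses_addressed": 0.30,
    "procedures_performed": 0.50,
    "new_patient": 0.60,
}
BIAS = -1.2

def score_with_explanation(features: dict[str, float]) -> tuple[float, dict[str, float]]:
    """Return a linear score and each feature's contribution to it."""
    contributions = {
        name: FEATURE_WEIGHTS[name] * value
        for name, value in features.items()
        if name in FEATURE_WEIGHTS
    }
    score = BIAS + sum(contributions.values())
    return score, contributions

score, contributions = score_with_explanation({
    "visit_duration_minutes": 25,
    "num_diagnoses_addressed": 2,
    "procedures_performed": 1,
    "new_patient": 1,
})

print(f"score = {score:.2f}")
for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {value:+.2f}")   # features ranked by how strongly they drove the score
```

Production XAI tooling is more sophisticated, but the goal is the same: a coder or clinician should be able to trace a recommendation back to the inputs that produced it.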
Strong governance, training, and ethical standards are needed to maintain trust and safety, and they are prerequisites for wider adoption of AI in healthcare.
In the U.S., federal agencies such as the Department of Health and Human Services (HHS), the Food and Drug Administration (FDA), and the Centers for Medicare & Medicaid Services (CMS), along with professional organizations such as the AMA, shape AI policy.
The AMA continues to update its AI policy. Its Digital Medicine Payment Advisory Group (DMPAG) works to integrate AI tools into medical billing and payment processes, helping to smooth adoption while keeping patient and physician safety in view.
Regulators are seeking clear rules on transparency, clinical evidence of effectiveness, liability, and data privacy and security. Health organizations and their leaders need to stay current on these policies to remain compliant and prepared for change.
To deploy AI well, practice managers and IT staff should follow a structured process: evaluating vendors and their clinical evidence, defining oversight and accountability, training staff, and monitoring performance after go-live. Careful planning helps clinics use AI safely, reduces the burden on physicians, keeps accountability clear, and improves overall operations.
As AI expands in healthcare, clarifying physician liability and establishing clear rules remain essential to patient safety and responsible care. The AMA leads efforts to balance new technology with ethical practice, and clinic leaders must meet these challenges by choosing transparent, validated AI tools and pairing them with strong training and oversight.
AI tools such as phone automation can improve clinic operations, but they must be paired with clear policies and risk-management practices. Only with careful attention, collaboration, and well-defined rules will AI improve healthcare without undermining physician accountability or patient trust.
The AMA defines augmented intelligence as AI in an assistive role that enhances, rather than replaces, human intelligence, emphasizing collaboration between AI tools and clinicians to improve healthcare outcomes.
The AMA advocates for ethical, equitable, and responsible design and use of AI, emphasizing transparency to physicians and patients, oversight of AI tools, handling physician liability, and protecting data privacy and cybersecurity.
In 2024, 66% of physicians reported using AI tools, up from 38% in 2023. About 68% see some advantages, reflecting growing enthusiasm but also concerns about implementation and the need for clinical evidence to support adoption.
AI is transforming medical education by aiding educators and learners, enabling precision education, and becoming a subject of study in its own right, with the ultimate aim of enhancing precision health in patient care.
AI algorithms have the potential to transform practice management by improving administrative efficiency and reducing physician burden, but responsible development, implementation, and maintenance are critical to overcoming real-world challenges.
The AMA stresses the importance of transparency to both physicians and patients regarding AI tools, including what AI systems do, how they make decisions, and disclosing AI involvement in care and administrative processes.
The AMA policy highlights the importance of clarifying physician liability when AI tools are used, urging development of guidelines that ensure physicians are aware of their responsibilities while using AI in clinical practice.
CPT® codes provide a standardized language for reporting AI-enabled medical procedures and services, facilitating seamless processing, reimbursement, and analytics, with ongoing AMA support for coding, payment, and coverage pathways.
Challenges include ethical concerns, ensuring AI inclusivity and fairness, data privacy, cybersecurity risks, regulatory compliance, and maintaining physician trust during AI development and deployment phases.
The AMA suggests providing practical implementation guidance, clinical evidence, training resources, policy frameworks, and collaboration opportunities with technology leaders to help physicians confidently integrate AI into their workflows.