In recent years, physicians have expressed cautious optimism about AI in healthcare. According to the American Medical Association's (AMA) 2024 survey, about 68% of U.S. physicians see at least some advantage in using AI tools, a modest increase over the prior year. Actual adoption grew much faster, from 38% of physicians in 2023 to 66% in 2024.
Even with this interest, physicians remain wary. Roughly 40% report feeling equally excited and concerned about AI: they value its assistance but have real reservations. Many worry about how AI might affect the patient-physician relationship; 39% said AI could reduce human contact during care. In addition, 41% are concerned about data privacy and security, given how sensitive health information is.
Dr. Jesse M. Ehrenfeld, the AMA's immediate past president, has stressed the importance of preserving the human side of healthcare: "Patients need to know there is a human being on the other end helping guide their course of care." That principle carries particular weight in medicine, where trust and communication are central.
Liability is another open question: who is responsible when an AI system contributes to a wrong decision? In 2023, 87% of physicians said it is very important that they be protected from liability for errors caused by AI, and that adequate medical liability coverage be available.
Physicians see the greatest potential for AI in three areas, according to the AMA survey: improving diagnostic ability (72%), increasing work efficiency (69%), and improving clinical outcomes (61%).
Many physicians also see AI as a way to reduce administrative burden. About 57% identify automating administrative tasks as a major opportunity, including writing notes, billing, working with insurers, and managing patient records. More than half (54%) say AI could help with documentation such as billing codes, medical charts, and visit notes, and nearly half (48%) think AI can speed up insurance processes, a welcome prospect for practice managers trying to reduce delays.
Physicians also want to understand how AI tools work. About 78% want clear explanations of how an AI system reaches its conclusions, along with evidence that it is safe, effective, and accurate in practice settings similar to their own, and ongoing monitoring to catch errors or bias.
Regulation and oversight are central to physician trust. The AMA calls for clear, consistent rules to ensure AI is ethical, equitable, responsible, and transparent, including post-deployment review of AI tools to identify problems and keep them safe and fair.
A study from Saudi Arabia examined factors that shape physicians' views on AI in gastroenterology, including gender, age, specialty, years of experience, and practice setting. Physicians with more experience or greater familiarity with AI tended to be more accepting of it.
Although the U.S. healthcare system differs, similar patterns may hold here: older physicians often lean on traditional clinical judgment, while younger physicians tend to be more comfortable with new technology. Training should therefore be tailored to physicians' varied backgrounds to increase AI adoption.
For healthcare managers and IT staff, a central question is how AI can streamline workflows in medical offices. Administrative work consumes time and energy that could otherwise go to patient care.
Front-office automation tools that answer phones and handle appointments, such as Simbo AI, are one option. They automate phone calls, schedule appointments, and answer patient questions, easing the load on front-desk staff, shortening patient wait times, and ensuring no calls are missed.
AI also shows promise in speeding up insurance prior authorization, a process that has long delayed patient care. The AMA survey found that 48% of physicians see this as a key benefit; automating it can reduce errors, cut costs, and shorten wait times.
In clinical documentation, AI can help produce accurate medical records, supporting correct billing codes while freeing physicians to spend more time with patients. AI can draft discharge summaries, treatment plans, and progress notes while complying with healthcare privacy regulations.
Integration with electronic health record (EHR) systems is critical: about 84% of physicians say AI should connect smoothly with their existing systems. Good integration reduces duplicated work and errors and simplifies physician workflows, which is why IT managers weigh it heavily when selecting technology for clinics.
With AI use rising from 38% in 2023 to 66% in 2024, healthcare leaders need deliberate plans for deploying it well. The rapid growth shows physicians want new tools, but careful implementation is essential.
U.S. physicians are increasingly open to AI in healthcare, but significant concerns remain around patient privacy, preserving the human touch in care, integration with existing tools, and liability. At the same time, AI shows real promise in helping physicians make better diagnoses, handle administrative work faster, and improve patient outcomes.

Medical leaders, clinic owners, and IT managers all have important roles in guiding AI adoption. They should prioritize AI tools that are safe, transparent, and well integrated with existing systems, and support staff with training. This allows AI to be used without eroding patient trust or care quality.

As more physicians adopt AI, it will become a core part of U.S. healthcare. Organizations that introduce it carefully and listen to physicians' concerns can improve both efficiency and patient care, supporting healthcare teams and the patients they serve.
Key points from the AMA survey and the AMA's policy positions:

- Physicians have guarded enthusiasm for AI in healthcare, with nearly two-thirds seeing advantages, though only 38% were actively using it at the time of the 2023 survey.
- Physicians are particularly concerned about AI's impact on the patient-physician relationship (39%) and patient privacy (41%).
- The AMA emphasizes that AI must be ethical, equitable, responsible, and transparent, with human oversight of clinical decision-making.
- Physicians believe AI can enhance diagnostic ability (72%), work efficiency (69%), and clinical outcomes (61%).
- Promising AI functionalities include documentation automation (54%), insurance prior authorization (48%), and creating care plans (43%).
- Physicians want clear information on how AI makes decisions, efficacy demonstrated in practices similar to their own, and ongoing performance monitoring.
- Policymakers should ensure regulatory clarity, limit physician liability for AI performance, and promote collaboration between regulators and AI developers.
- The survey found that 78% of physicians seek clear explanations of AI decisions, demonstrated usefulness, and performance-monitoring information.
- The AMA advocates transparency in automated systems used by insurers, requiring disclosure of how they operate and whether they are fair.
- Developers must conduct post-market surveillance to ensure continued safety and equity, making relevant information available to users.