Academic medical centers in Boston, California, and elsewhere have started using AI in ways that directly affect how care is delivered and how doctors talk to patients.
For example, Adam Rodman, an assistant professor at Harvard Medical School who practices at Beth Israel Deaconess Medical Center, says that AI helps doctors get medical information almost instantly.
This lets doctors give care based on the latest research more quickly.
Having quick access to current information helps doctors make better decisions and could improve how they diagnose illness.
Large language models (LLMs), such as GPT-4, play an important role in this change.
Isaac Kohane, a leading physician-scientist, has said these models show remarkable ability at diagnosing tough medical cases.
LLMs can review large amounts of medical data faster than people can and give doctors second opinions in real time.
This quick feedback might help doctors do their jobs better, but people still need to watch over AI tools because they can sometimes be wrong or biased.
Patients also get help from AI.
AI-powered communication tools are used in patient portals and phone systems to keep track of patients and reply to them.
For example, the University of Pennsylvania’s Abramson Cancer Center uses an AI text-messaging system called “Penny” that checks in daily with patients taking oral chemotherapy, asking whether they have taken their medicine and whether they are having side effects.
If Penny finds any problems, it tells doctors quickly.
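The escalation logic described above can be pictured with a small sketch. This is a hypothetical, rule-based toy: the field names and the severity threshold are illustrative assumptions, not details of the real Penny system.

```python
# Hypothetical sketch of a Penny-style daily check-in triage rule.
# Field names and the severity threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class CheckInReply:
    took_medication: bool
    side_effect_severity: int  # patient-reported, 0 (none) to 10 (severe)

def needs_clinician_alert(reply: CheckInReply) -> bool:
    """Escalate to the care team on a missed dose or a severe side effect."""
    return (not reply.took_medication) or reply.side_effect_severity >= 5

# Example: a missed dose triggers an alert; a mild side effect does not.
print(needs_clinician_alert(CheckInReply(False, 0)))  # True
print(needs_clinician_alert(CheckInReply(True, 2)))   # False
```

In a real deployment the check-in would arrive as a text-message reply and the alert would go into the clinic's work queue; the point here is only that escalation is automatic and rule-driven.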
Patients often say this kind of chatbot feels like a “buddy,” showing how AI can help patients manage their health between doctor visits.
In another example, UC San Diego Health uses AI chatbots to write replies to non-urgent patient messages on portals.
Doctors review and change these AI drafts to make sure they are correct and sound human.
This lowers the work doctors must do but keeps communication caring and trustworthy.
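The draft-then-review loop can be sketched in a few lines. Here `generate_draft()` is a hypothetical stand-in for a language-model call; UC San Diego Health's actual integration lives inside the electronic health record and differs in detail.

```python
# Minimal sketch of a draft-then-review workflow for portal messages.
# generate_draft() is a hypothetical placeholder for an LLM call.

def generate_draft(patient_message: str) -> str:
    # Placeholder for a language-model call that drafts a reply.
    return f"Thank you for your message about: {patient_message}."

def send_reply(patient_message: str, clinician_review) -> str:
    """Nothing goes out until a clinician reviews (and possibly edits) the draft."""
    draft = generate_draft(patient_message)
    return clinician_review(draft)

# Example: the clinician approves the draft unchanged.
print(send_reply("flu shot availability", lambda draft: draft))
```

The design choice worth noting is that the clinician review step is mandatory in the control flow, not optional: the AI can only propose text, never send it.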
A study found that a panel of healthcare professionals preferred chatbot answers over doctors’ replies 78.6% of the time when judging how caring and complete they were.
Even though the chatbot replies were liked, human review is needed because AI can sometimes give wrong answers.
By improving communication, AI tools help doctors answer routine questions about appointments, prescriptions, and test results faster.
This leads to better patient involvement and higher satisfaction.
Even though AI has many good points, there are still problems.
One big worry is that AI systems might make existing healthcare gaps worse.
Many AI programs learn from data that reflect real-world biases.
For example, a skin cancer detection tool did not work well on very dark skin, showing the limits of AI when it is trained on data that is not diverse enough.
This can lower the quality of care for some groups of people and increase healthcare differences.
Another problem is the “black-box” nature of AI, meaning it can be hard for doctors and patients to know how AI makes certain recommendations.
This lack of clear information can lower patient trust, especially when AI is part of medical decisions.
Clear explanations about AI’s role are needed to keep trust in the technology.
There are also worries about AI “hallucinations,” where AI creates false or misleading information that seems true.
David Bates, a patient safety expert, warns that these mistakes can end up in medical records and endanger patient safety.
This shows why strict human checks and validation are required.
Despite these issues, experts like Leo Celi think AI could help make health systems fairer if developers focus on people’s needs and use data that represents all groups.
One clear way AI helps academic medical centers in the U.S. is by automating daily work.
Doctors often get tired and frustrated because they spend too much time on paperwork and documentation.
AI is starting to help with this problem.
The Permanente Medical Group (TPMG) in Northern California studied how AI medical scribes affect doctors.
These AI scribes use speech recognition and language technology to turn doctor-patient conversations directly into medical notes and summaries.
They do not change medical decisions but help doctors by cutting down the time needed for writing notes.
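The pipeline from conversation to draft note can be illustrated with a toy example. Real scribe products combine speech-to-text with a language model; this sketch assumes a speaker-labeled transcript as input and simply groups lines by speaker, so a clinician can review a structured draft instead of raw audio.

```python
# Toy sketch of the note-drafting step in an ambient scribe.
# Assumes a speaker-labeled transcript; real products use speech-to-text
# plus a language model to produce a full clinical note.

def draft_note(transcript: str) -> dict:
    """Group transcript lines by speaker into a reviewable draft."""
    note = {"patient_statements": [], "clinician_statements": []}
    for line in transcript.splitlines():
        speaker, _, text = line.partition(": ")
        if speaker == "PATIENT":
            note["patient_statements"].append(text)
        elif speaker == "DOCTOR":
            note["clinician_statements"].append(text)
    return note

transcript = "DOCTOR: How is the new medication?\nPATIENT: Some nausea in the mornings."
print(draft_note(transcript)["patient_statements"])  # ['Some nausea in the mornings.']
```

As in the messaging example, the output is a draft: the doctor still reviews and signs the final note.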
Over 63 weeks, AI scribes saved TPMG doctors about 1,794 workdays (nearly five years of time) by reducing note-taking outside work hours and shortening appointment times.
Almost half of patients (47%) noticed their doctors looked at computer screens less during visits, and 39% felt their doctors paid more attention to them.
Doctors liked the tool too, with 84% saying it helped communication with patients and 82% saying it made work more satisfying.
This is important for fields like mental health, emergency medicine, and primary care where doctor burnout is high.
Automation lets doctors focus more on patient care instead of paperwork, improving patient satisfaction and doctor well-being.
Also, AI chatbots help manage patient messages.
Duke Health uses AI to answer common patient questions automatically, which helps reduce doctor burnout.
This gives doctors more time to handle difficult patient issues.
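One simple way to picture this kind of automated triage is a keyword router that answers routine questions and sends everything else to a clinician queue. The keywords and canned replies below are illustrative assumptions, not Duke Health's actual rules.

```python
# Illustrative sketch of routing common portal questions to automated
# answers. Keywords and replies are assumptions for demonstration only.

AUTO_REPLIES = {
    "refill": "Refill requests are usually processed within 2 business days.",
    "appointment": "You can view or reschedule appointments in the portal.",
}

def route_message(message: str) -> str:
    """Answer recognized routine questions; queue everything else for a clinician."""
    text = message.lower()
    for keyword, reply in AUTO_REPLIES.items():
        if keyword in text:
            return reply
    return "QUEUE_FOR_CLINICIAN"

print(route_message("Can I get a refill of my inhaler?"))
print(route_message("I have chest pain"))  # QUEUE_FOR_CLINICIAN
```

Production systems use language models rather than keyword matching, but the fail-safe default is the same: anything not clearly routine goes to a human.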
These automation tools are useful for medical administrators and IT managers who want to make operations better while still keeping good care and communication.
Because AI is changing quickly, academic medical centers must prepare doctors and systems properly.
Medical education is changing to include AI tools so future doctors learn to use them well and stay adaptable as healthcare evolves.
Administrators should keep several points in mind when adding AI.
Academic medical centers in the U.S. are leading the way in using AI to improve patient care and the way doctors and patients communicate.
AI can take over routine tasks, help with decisions in real time, and support better communication.
This can reduce the load on doctors and help patients get better care.
At the same time, problems like bias, lack of clear explanations, and risks of wrong information mean that leaders must add AI carefully.
Finding a balance between AI’s efficiency and keeping care personal and kind will be key to success.
By managing AI well, offering ongoing education, and sticking to medical values, academic medical centers can use AI to better help patients and support healthcare workers across the country.
AI, particularly large language models, enables faster access to medical literature and enhances doctor-patient interactions, allowing physicians to provide evidence-based care more quickly.
Integrating AI is expected to improve efficiency, reduce mistakes, ease burdens on primary care, and foster longer doctor-patient interactions, ultimately enhancing quality of care.
Existing data sets often reflect societal biases, which can reinforce gaps in access and quality of care, posing risks to disadvantaged groups.
AI can create false information and present it as real, which complicates its application in clinical settings where accuracy is crucial.
Ambient documentation promises to reduce physician burnout by automating note-taking, allowing doctors to focus more on patient interactions rather than administrative tasks.
AI tools facilitate accelerated learning for medical students, helping them synthesize information and prepare for clinical practice in evolving healthcare environments.
One study showed that LLMs performed slightly better than individual physicians and emphasized that many doctors lacked experience in using the technology.
AI can significantly enhance the identification of medication-related issues, addressing one of the most common sources of patient harm in healthcare settings.
AI models enable instant insights and predictions about molecular interactions, accelerating scientific progress in understanding diseases and developing treatments.
A human-centered design approach is necessary to navigate biases and ensure effective AI tools cater to diverse patient populations and enhance care.