In the modern U.S. healthcare environment, medical practice administrators, practice owners, and IT managers face growing pressure as patients expect faster answers. One major challenge is the rising volume of electronic messages from patients seeking guidance or reassurance. Studies show that physicians at busy health centers often receive hundreds of patient messages every week. Managing that volume while maintaining high-quality communication is difficult and has contributed to physician fatigue and burnout.
Artificial intelligence (AI), particularly generative AI such as large language models (LLMs) and chatbots, can help healthcare providers meet these communication demands. These tools can produce detailed, empathetic message drafts that physicians review and edit before sending. This article examines the role of AI-generated empathy in healthcare responses in the U.S., summarizes research findings, and considers how AI can improve healthcare workflows.
The COVID-19 pandemic sharply accelerated digital communication in healthcare. Patients now routinely use portals and messaging platforms to ask about symptoms, treatments, medications, and appointments.
At institutions such as UC San Diego Health, physicians receive about 200 messages per week on average. That volume demands replies that are both prompt and thoughtful, which is difficult given limited time. Composing detailed, compassionate replies adds to physicians' workload and fatigue, lowering job satisfaction and raising burnout risk.
A study at UC San Diego Health examined whether generative AI could assist physicians by producing first drafts of replies to patient messages. Although AI did not shorten response times, it reduced the mental effort of composing empathetic responses. Physicians could edit and personalize the AI drafts to ensure accuracy and an authentic voice before sending them, preserving human control while providing a useful starting point.
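The draft-then-review pattern described above can be sketched in a few lines. This is a minimal illustration, not the study's actual system: `generate_ai_draft` is a hypothetical placeholder standing in for a real LLM call, and the data model is invented for clarity. The key point it demonstrates is that nothing reaches the patient without an explicit physician edit-and-approve step.

```python
from dataclasses import dataclass

@dataclass
class DraftReply:
    patient_message: str
    ai_draft: str
    final_text: str = ""
    approved: bool = False

def generate_ai_draft(patient_message: str) -> str:
    # Placeholder for a generative-model call; a production system would
    # invoke an LLM API here. The canned text is illustrative only.
    return (
        "Thank you for reaching out about "
        f"'{patient_message}'. I understand your concern; here is some guidance."
    )

def physician_review(draft: DraftReply, edited_text: str) -> DraftReply:
    # The physician edits the AI draft and explicitly approves it;
    # nothing is sent to the patient without this step.
    draft.final_text = edited_text
    draft.approved = True
    return draft

# Usage: the AI supplies a starting point; the physician keeps control.
msg = "a persistent cough after starting my new medication"
draft = DraftReply(patient_message=msg, ai_draft=generate_ai_draft(msg))
final = physician_review(draft, draft.ai_draft + " Please schedule a follow-up.")
```

The design choice worth noting is that approval is a state transition, not a default: a draft that skips `physician_review` can never be sent.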
Research at the UC San Diego School of Medicine, published in JAMA Network Open, found that AI can write longer and more empathetic messages than physicians typically have time to produce. AI may therefore improve patient-physician exchanges by encouraging more thoughtful, compassionate replies.
AI systems can process large volumes of patient information and requests, produce responses that convey empathy, and handle repetitive or routine questions. This can help physicians who hit "writer's block" on busy days and are too fatigued to compose detailed answers.
Marlene Millen, MD, a co-author of the UC San Diego study, noted that AI does not tire and can draft compassionate messages even after long shifts. Physicians appreciated having these drafts as a base, freeing them to focus on harder patient problems.
Research at Massachusetts General Hospital found that AI tools such as ChatGPT can give scientifically accurate, easier-to-understand answers to patient questions, such as those about colonoscopy procedures. In some cases, AI answers scored higher for clarity than official hospital websites, and AI replies were often rated as kinder than physicians' messages in online forums, perhaps because AI is unaffected by heavy workloads and time pressure.
These studies suggest AI can raise the emotional quality of patient messages without replacing human judgment. AI-assisted messages clearly disclose that they were drafted by AI and reviewed by physicians, maintaining transparency.
Physician burnout is a well-documented problem in American healthcare, and the administrative and communication workload created by digital patient contact contributes substantially to it. Generative AI can lower that cognitive load by supplying a template or first draft, so physicians do not have to start from a blank page.
Christopher Longhurst, MD, executive director of the Joan and Irwin Jacobs Center for Health Innovation and senior author of the UC San Diego study, said AI can help absorb growing communication demands without displacing the physician's role. AI drafts let physicians devote more time to medical decisions and complex patient care rather than to message composition.
Although AI does not shorten reply times, it may help prevent burnout by reducing the mental fatigue of message writing. Starting from an empathy-focused draft helps physicians sustain high-quality communication, which matters for patient satisfaction and outcomes.
Using AI in healthcare communication goes beyond drafting empathetic messages; it also means improving workflows in medical offices. Handling a high volume of patient calls and messages well requires a system that can sort incoming questions and provide prompt, accurate answers while keeping people in the review loop.
Simbo AI, a U.S. company focused on AI-driven phone automation and answering services, builds tools for these needs. Its solutions handle routine calls and provide first-line answers, freeing office staff and physicians for more complex tasks.
Workflow automation with AI can:
- triage incoming calls and messages, separating routine requests from complex or urgent ones;
- draft first responses to common questions for staff or physician review;
- support appointment scheduling and other routine front-office requests;
- escalate clinical or ambiguous issues to the right person promptly.
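A triage step like the one above can be sketched as a simple router. This is a hypothetical, keyword-based illustration, not Simbo AI's product logic; the keyword lists and routing labels are invented for the example. A real system would use a trained classifier and a much richer safety net.

```python
# Hypothetical keyword-based triage: routine requests get an automated
# first draft for staff review; anything that looks clinical or urgent
# is routed straight to a physician. Keywords are illustrative only.
ROUTINE_KEYWORDS = {"appointment", "refill", "billing", "insurance"}
URGENT_KEYWORDS = {"chest pain", "bleeding", "shortness of breath"}

def triage(message: str) -> str:
    text = message.lower()
    if any(k in text for k in URGENT_KEYWORDS):
        return "physician"      # complex/urgent: no automation
    if any(k in text for k in ROUTINE_KEYWORDS):
        return "auto_draft"     # routine: AI drafts, staff verifies
    return "staff_review"       # unclear: a person sorts it

print(triage("Can I get a refill of my statin?"))        # auto_draft
print(triage("I have chest pain since this morning"))    # physician
```

Checking urgent terms before routine ones is the important ordering: a message that mentions both a refill and chest pain must go to a physician, never into the automated path.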
Well-designed automation tools can connect front-office and clinical work, helping practices manage patient communication without sacrificing quality or the personal touch.
While AI offers clear benefits, using it in healthcare communication requires careful attention to ethics, privacy, and quality. Generative AI models are trained on large datasets that may contain biases or outdated information, which can produce errors if outputs go unchecked. There is also a risk of inaccurate or misleading information when AI is poorly supervised.
Human reviewers must check AI output to ensure messages are medically accurate, appropriate to the situation, and respectful of patient needs. Healthcare organizations need policies governing AI use and should tell patients when automation is part of their messaging.
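The two safeguards in this paragraph, clinician sign-off and patient disclosure, can be enforced mechanically at the point of sending. The sketch below is a hypothetical illustration of that gate (the disclosure wording and function name are invented), showing how a system can make it impossible to send an unreviewed or undisclosed AI-drafted message.

```python
AI_DISCLOSURE = (
    "This message was drafted with the help of AI and reviewed by your care team."
)

def prepare_outgoing_message(body: str, physician_approved: bool) -> str:
    # Refuse to release anything a clinician has not signed off on,
    # and always append the AI-involvement disclosure for the patient.
    if not physician_approved:
        raise ValueError("Draft requires physician approval before sending.")
    return f"{body}\n\n{AI_DISCLOSURE}"
```

Making the approval flag a required parameter (rather than a default) forces every calling site to state explicitly whether review happened.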
Experts stress the need for transparency in AI use so patients can trust that their communications are safe and that physicians remain responsible for their care and messages.
Healthcare IT managers and administrators should train staff to use AI tools effectively, balance automation with human contact, and comply with legal and ethical requirements, including patient data protection.
Large language models are a central part of this technology. Trained on wide-ranging healthcare data, they can generate human-like text responses. Research suggests LLMs can:
- draft longer, more empathetic replies than time-pressed physicians typically write;
- provide scientifically accurate answers in language patients find easy to understand;
- reduce the cognitive load of composing responses by giving physicians an editable starting point.
Studies indicate that effective LLM deployment requires careful interface design, physician oversight, and continuous evaluation. The goal is not to replace physicians but to help them communicate better and reduce administrative burden.
Institutions such as University of California San Diego Health and Massachusetts General Hospital are leading research on AI in healthcare communication. Their results suggest AI can be a useful partner in patient engagement, improving interaction quality and helping manage physician workload.
Support and partnerships with innovation centers, such as the Joan and Irwin Jacobs Center for Health Innovation, have helped test generative AI models in clinics. As AI use grows, these studies will guide safe, ethical, and effective use across healthcare systems in the U.S.
In the United States, healthcare administrators, owners, and IT managers are turning to AI to manage growing patient communication volumes and reduce physician burnout. Generative AI, including large language models and chatbots, offers practical help by drafting empathetic messages that physicians edit and personalize, improving communication quality without sacrificing accuracy or privacy.
AI-powered workflow automation in front-office phone systems, such as those built by Simbo AI, helps by handling patient questions and appointments more efficiently. But deploying AI in healthcare demands careful human oversight, attention to ethics, and staff training to preserve safety and trust.
As healthcare becomes more digital, AI tools have the potential to improve patient-provider communication while respecting both technology's role and the human need for compassion and judgment.
Key points from the research:
- The UC San Diego study examined generative AI drafting compassionate replies to patient messages within Epic Systems electronic health records, aiming to enhance physician-patient communication.
- AI-generated replies did not reduce physician response time, but they lowered the cognitive burden on doctors by providing empathetic drafts that physicians could edit.
- The senior author is Christopher Longhurst, MD, executive director of the Joan and Irwin Jacobs Center for Health Innovation.
- The study evaluated communication quality and physicians' cognitive load, suggesting AI can help mitigate burnout by facilitating more thoughtful responses.
- AI is seen as a collaborative tool because it generates drafts that incorporate empathy, allowing doctors to respond more effectively to patient queries.
- The COVID-19 pandemic led to an unprecedented rise in digital communication between patients and providers, creating demand for timely responses that many physicians struggle to meet.
- Generative AI drafts longer, empathetic responses to patient messages, enhancing communication quality while reducing physicians' initial writing workload.
- Greater response length typically indicates better communication quality, since physicians can provide more comprehensive, empathetic replies.
- The study suggests a potential paradigm shift in healthcare communication and highlights the need for further analysis of how AI-generated empathy affects patient satisfaction.
- UC San Diego Health, alongside the Jacobs Center for Health Innovation, has been testing generative AI models for safe and effective healthcare applications since May 2023.