Recently, the number of patient messages that doctors receive has grown sharply, especially since the rise of telehealth and digital communication during the COVID-19 pandemic. A study from the University of California San Diego (UCSD) School of Medicine found that physicians receive about 200 patient messages every week. Handling that volume is difficult and can leave doctors tired and stressed, which is a concern for healthcare leaders across the United States.
Doctors need to reply to patient messages with care and understanding, but the sheer volume makes this difficult. Writing long, kind answers takes considerable mental energy, especially after a long day. To help, organizations like UC San Diego Health have begun using AI to write first drafts of replies. These AI tools do not replace the doctor's judgment; they produce an empathetic first draft that doctors can then edit and personalize before sending.
Empathy is important in healthcare communication because it helps patients feel understood and cared for, which affects both how satisfied they are with their care and how engaged they stay in it. The UCSD study found that AI-generated drafts led doctors to write longer, more detailed answers. In this context, longer messages usually mean better quality, because there is more room to explain and show understanding.
The AI helps by creating kind, thoughtful replies that include relevant patient information and acknowledge the patient's feelings. Doctors can then focus on getting the tone right and correcting details instead of starting from scratch. For example, Ming Tai-Seale, a co-author of the study, said that AI helps doctors get past “writer's block” by offering empathy-filled drafts, making it easier to write thoughtful replies even after a tiring day.
Patients seem to appreciate these more detailed, compassionate answers, which can increase their satisfaction. Communication that shows empathy reduces patient worry and helps build trust between patients and doctors. That trust, in turn, makes patients more likely to follow medical advice and achieve better health outcomes.
AI has clear benefits for handling high message volumes, but there are risks that healthcare leaders need to weigh. One major worry is that AI could make patient care feel less personal: AI systems work from data and rules, which may strip away the personal touch that is important in medical care.
A recent article in the Journal of Medicine, Surgery, and Public Health talks about the “black-box” nature of many AI tools. This means people do not always know how these AI systems decide what to say or do. If patients don’t understand how their messages are handled, they might not trust the system. They may feel their care is done by a machine, not by a doctor who knows them.
Also, AI systems trained on biased data could widen existing health disparities. This matters especially in the US, where some groups already face barriers to accessing healthcare. Practice leaders and IT managers should work with AI vendors to ensure the data used to train AI is diverse and representative, and should audit outputs carefully for accuracy and fairness. Being transparent with patients about how AI is used also helps maintain trust.
Ethics are very important when using AI for patient messages. The UC San Diego Health study says patients should know when AI helped make a message. Usually, a note is added to show that AI made the first draft and a doctor checked and changed it.
This honesty makes sure patients are not confused about where the message came from. It also helps keep trust because patients know that a real doctor is responsible for the final reply, even if AI helped.
Telehealth services in the US have made good written communication and empathy even more important. Unlike in-person visits, telehealth mostly uses messages and written notes. AI tools that write caring replies can make these online visits feel more personal, even if the patient and doctor are not in the same place.
The journal Telehealth and Medicine Today (THMT) talks about “Digital Empathy 2.0.” This idea means using AI to write messages that are careful, understand context, and feel personal. This goes beyond simple automatic answers and tries to use kind language that addresses what the patient is feeling.
Administrators thinking about using AI for telehealth should pick systems that support this level of empathy. Including doctors’ input when making and reviewing AI messages helps keep communication correct, relevant, and personalized for each patient.
Healthcare administrators and IT staff in the US should consider how AI fits into current work processes and electronic health record (EHR) systems. UC San Diego Health’s study was one of the first to test generative AI inside Epic Systems’ EHR, a popular tool in US medical offices.
AI drafts replies to patient messages right inside the doctor’s workflow. This reduces the mental work needed to write the first response. Doctors still spend time editing and personalizing the message, so total reply time doesn’t go down much. But AI helps because doctors don’t have to start writing from nothing, which can be very tiring over time.
AI automation can help handle many regular tasks tied to patient messages. This includes sorting messages, giving quick drafts for common questions, and adding patient history into replies.
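The workflow described above — sorting incoming messages, drafting replies for common questions, and attaching the AI-disclosure note mentioned earlier — can be sketched in simplified form. This is a minimal illustration, not Epic's actual integration or the UCSD system: the keyword triage, the `fake_model` stand-in for a generative model, and the disclosure wording are all assumptions made for the example.

```python
# Illustrative sketch only: a simplified patient-message pipeline of the
# kind described above (triage, AI drafting, physician review, disclosure).
# All names here are hypothetical; real EHR integrations use
# vendor-specific APIs that are not shown.
from dataclasses import dataclass


@dataclass
class PatientMessage:
    patient_name: str
    text: str


# Hypothetical keyword triage: route common requests to routine topics,
# everything else to a general bucket.
ROUTINE_TOPICS = {"refill": "medication refill", "appointment": "scheduling"}


def triage(message: PatientMessage) -> str:
    """Classify a message into a routine topic or 'general'."""
    lowered = message.text.lower()
    for keyword, topic in ROUTINE_TOPICS.items():
        if keyword in lowered:
            return topic
    return "general"


def draft_reply(message: PatientMessage, generate) -> str:
    """Build a first draft for the physician to edit.

    `generate` stands in for a call to a generative model; how that model
    is invoked is an assumption, not part of the source material.
    """
    topic = triage(message)
    body = generate(message.text, topic)
    disclosure = ("This message was drafted with the help of AI and "
                  "reviewed and edited by your care team.")
    return f"Dear {message.patient_name},\n\n{body}\n\n{disclosure}"


# Stand-in "model" so the sketch runs without any external service.
def fake_model(text: str, topic: str) -> str:
    return f"Thank you for reaching out about your {topic} question."


msg = PatientMessage("Alex", "Can I get a refill of my medication?")
print(draft_reply(msg, fake_model))
```

The key design point mirrors the article: the AI produces only a draft, and the disclosure line makes the AI's role visible to the patient, with the physician remaining responsible for the final message.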
Lowering doctor burnout is important. The pandemic made work harder because of more patient demands and the need for quick answers. AI help with message drafting lets doctors spend more time on harder medical decisions and direct patient care by taking care of routine communication.
When used carefully, AI can improve how well doctors and patients communicate. It can keep doctors more engaged and help staff stay healthy. These things are key to good, patient-centered care in the US.
Empathy in patient communication remains highly valued in the United States. Research from UC San Diego Health shows that AI-generated replies do not shorten response times, but they do produce longer, more detailed messages that patients find helpful and caring.
Patients who feel their worries are carefully answered are more likely to stay involved in their care and follow medical advice. This helps improve health. Better communication with AI can help meet the high expectations of US patients, who want quick and easy access to their providers.
The use of AI-written patient replies is a big change in how healthcare communicates. For US providers facing more messages, AI offers a practical way to help doctors while keeping the empathy needed for patient happiness. Learning how to balance technology with kind care will help medical leaders manage this new area well.
The study focuses on the use of generative AI to draft compassionate replies to patient messages within Epic Systems electronic health records, aiming to enhance physician-patient communication.
The study found that while AI-generated replies did not reduce physician response time, they did lower the cognitive burden on doctors by providing empathetic drafts that physicians could edit.
The senior author is Christopher Longhurst, MD, who is also the executive director of the Joan and Irwin Jacobs Center for Health Innovation.
It evaluated the quality of communication and the cognitive load on physicians, suggesting that AI can help mitigate burnout by facilitating more thoughtful responses.
AI is seen as a collaborative tool because it assists physicians by generating drafts that incorporate empathy, allowing doctors to respond more effectively to patient queries.
The COVID-19 pandemic led to an unprecedented rise in digital communications between patients and providers, creating a demand for timely responses which many physicians struggle to meet.
Generative AI helps by drafting longer, empathetic responses to patient messages, which can enhance the quality of communication while reducing the initial writing workload for physicians.
A greater response length typically indicates better quality of communication, as physicians can provide more comprehensive and empathetic replies to patients.
The study suggests a potential paradigm shift in healthcare communication, highlighting the need for further analysis on how AI-generated empathy impacts patient satisfaction.
UC San Diego Health, alongside the Jacobs Center for Health Innovation, has been testing generative AI models since May 2023 to explore safe and effective applications in healthcare.