In recent years, the volume of patient messages sent to doctors through Electronic Health Record (EHR) portals has grown sharply. Doctors at NYU Langone Health, for example, saw a more than 30% annual increase in messages through the EPIC In Basket system, and some report receiving over 150 messages a day. This workload extends well beyond normal office hours. At the same time, nearly half of doctors report burnout: persistent exhaustion that reduces their ability to do their job well. Heavy message volume not only adds after-hours work but also leaves doctors less time to spend with patients directly.
Managing patient messages well matters to medical offices, practice owners, and IT staff, who want to protect doctors from burnout while keeping patients satisfied. Answering messages by hand consumes substantial time and energy, so practices need new ways to help doctors respond without losing a caring, empathetic tone.
AI, especially models like GPT-4, is being used to help write answers to patient messages in EHRs. Researchers at NYU Grossman School of Medicine studied an AI system inside the EPIC EHR. This AI wrote draft replies to patient questions using only private patient data to keep things safe and correct. Sixteen primary care doctors reviewed 344 pairs of message drafts. One in each pair was written by AI, the other by a human doctor. The doctors did not know which was which.
The study found that AI answers matched human answers in accuracy, completeness, and relevance. The AI messages scored 9.5% higher in tone and were rated 125% more empathetic than human replies. AI also used positive, affiliative language 62% more often than humans. This kind of language can build patient trust and encourage involvement in care.
This shows AI can help doctors without losing the human feelings of concern and care. It can reduce how much work doctors have and make the patient experience better.
Despite these benefits, the NYU study flagged one problem: AI messages are more complex. They were 38% longer and 31% harder to read than human messages. Human answers were written at about a sixth-grade reading level, while AI answers came in at about an eighth-grade level.
Harder language can make it tougher for patients to understand, especially those who find medical language difficult. Health literacy is how well a person can read and understand health information and make decisions. If the language is too hard, patients might not follow up properly, take medicine the wrong way, or miss important care steps. This can hurt their health.
Many patient populations in the U.S. include older adults, people with less formal education, and people with limited English proficiency; these groups can find complicated medical messages especially difficult to understand. Plain-language messaging helps in several ways:
Promotes Clear Understanding: Patients can better understand their health and treatments.
Enhances Patient Engagement: Clear language helps patients take part in their care.
Reduces Miscommunication Risks: Prevents mistakes with medicine, appointments, or follow-up instructions.
Addresses Health Literacy Gaps: Makes messages easier for all patients to read.
Supports Health Equity: Ensures no patient is left behind because of hard language.
Healthcare leaders recognize that even accurate, empathetic AI messages will not help patients if those messages are unclear. Clear communication comes first.
Training AI Models for Plain Language Use: AI can be taught to write health messages at a sixth-grade reading level or below. Using real patient instructions and teaching materials helps AI write simple and correct responses.
Implementing Controlled Vocabulary and Readability Checks: AI systems can check messages automatically to find hard words or long sentences and suggest easier ways to say them before sending.
Customizing AI Based on Patient Profile: AI can adjust how difficult the language is based on the patient’s age, language skills, and preferences. This keeps messages just right—not too easy or too hard.
Human Review as a Safety Net: Doctors or trained staff should review AI drafts before sending, both to confirm accuracy and to simplify the language further where needed.
Using Feedback Loops for Continuous Improvement: Healthcare teams can ask patients if messages are clear and use this feedback to make AI responses better over time.
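The automated readability checks described above can be sketched with a standard formula. The example below is a minimal illustration, assuming the Flesch-Kincaid grade-level formula and a naive vowel-group syllable counter; a production system would use a dedicated readability library and a sixth-grade threshold matching the human baseline in the NYU study.

```python
import re

def count_syllables(word: str) -> int:
    """Naive syllable estimate: count groups of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

def flag_if_too_complex(draft: str, max_grade: float = 6.0) -> bool:
    """Return True when a draft exceeds the target reading level
    and should be simplified before sending."""
    return flesch_kincaid_grade(draft) > max_grade

# Hypothetical sample drafts for illustration:
simple = "Take one pill each day. Call us if you feel sick."
complex_msg = ("Administer the prescribed medication once daily and contact "
               "the clinic immediately should adverse symptoms materialize.")
```

A message flagged by `flag_if_too_complex` could be routed back to the model with a simplification instruction, or surfaced to the reviewing clinician with the offending long sentences highlighted.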
AI can also help automate front-office tasks like answering phones, scheduling appointments, and handling patient questions. Simbo AI is a company that uses AI to handle routine phone calls from patients efficiently. When AI helps with writing messages and answering calls, it can reduce work and wait times.
These tools can check patient requests, answer common questions, and schedule visits without needing a human to do every step. This helps front-office staff focus on more complex tasks and patients.
Integrating AI into systems like EPIC means patient questions can be managed through phones, messages, and emails all at once. AI helps write, send, and remind patients about their care.
Medical administrators who adopt AI report better efficiency, happier patients, and lower costs from reduced overtime.
Data Privacy and Security: AI tools must follow HIPAA rules to keep patient data safe. Using AI in secure systems reduces risks.
Ethics and Transparency: Patients should know when AI helps write messages to keep trust.
Provider Acceptance: Doctors need to review AI messages to keep good medical judgment.
Training and Support: Staff need training to understand AI tools and their limits.
Ongoing Evaluation: AI messages should be checked regularly for clarity, patient satisfaction, and accuracy.
The NYU study shows AI messaging can help healthcare but needs more work. Making AI language easier to read should be a top goal. Medical offices and technology developers in the U.S. need to work together to improve AI so it helps all patients.
Using AI with doctor approval in digital messages could become normal soon. This will help lower doctor burnout and improve patient care. Also, more use of phone automation and digital AI tools like Simbo AI will change how medical offices run their work.
For healthcare leaders, using AI in ways that focus on patient understanding and careful planning is a good way to meet today’s communication needs and provide patient-centered care.
This article explains the challenges of using AI to write health messages, how it affects patient involvement, and simple solutions for medical offices in the United States. By making AI language easier and using automation smartly, doctors and staff can better help patients and reduce their own workload.
The AI tool addresses the significant after-hours burden on physicians caused by a 30% annual increase in electronic health record (EHR) messages, particularly In Basket communications. The long hours spent managing these patient inquiries contribute to physician burnout.
The AI uses generative artificial intelligence, specifically GPT-4, to draft responses to patient messages within the EHR system. It produces human-like, context-sensitive replies that incorporate patient-specific data to accurately and empathetically address patient concerns.
The study found no statistical difference in accuracy, completeness, and relevance between AI and human responses. AI responses scored higher in empathy and tone, being 125% more empathetic and 62% more positive and affiliative than human replies.
AI responses were 38% longer and 31% more likely to use complex language, writing at an eighth-grade level compared to the human sixth-grade level, indicating the need for further training to simplify AI language.
Using private patient information, rather than general internet data, allows the AI tool to generate more relevant and accurate responses tailored to individual patients, better reflecting real-world use and enhancing communication quality.
By efficiently drafting empathetic and accurate responses to patient queries, AI can significantly reduce physicians’ after-hours workload, decreasing message volume they must handle directly and potentially alleviating burnout.
Human providers should review and approve AI-generated drafts before sending to ensure clinical accuracy, appropriateness, and adherence to medical standards, maintaining patient safety and trust.
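This review-and-approve step can be sketched as a simple release gate: drafts are held in a queue and only reach the patient once a clinician has approved (and optionally edited) them. The class and field names below are hypothetical illustrations, not part of any real EHR API.

```python
from dataclasses import dataclass

@dataclass
class DraftReply:
    """A hypothetical AI-generated draft awaiting clinician review."""
    patient_id: str
    body: str
    approved: bool = False

class ReviewQueue:
    """Hold AI drafts until a clinician approves them for release."""

    def __init__(self):
        self.pending = []  # drafts awaiting review
        self.sent = []     # approved drafts released to patients

    def submit(self, draft):
        """Queue a new AI draft; nothing goes out automatically."""
        self.pending.append(draft)

    def approve(self, draft, edited_body=None):
        """Clinician signs off, optionally simplifying the wording first."""
        if edited_body is not None:
            draft.body = edited_body
        draft.approved = True
        self.pending.remove(draft)
        self.sent.append(draft)  # only approved drafts are ever released
```

The key design point is that `sent` is only ever populated by `approve`, so an unreviewed draft cannot reach a patient.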
AI responses were rated as more understandable and empathetic, with a more positive and affiliative tone that fosters a sense of partnership and hopefulness between patients and providers.
The AI tool is built on generative AI technology, specifically a private instance of GPT-4, which generates next-word predictions to create coherent, contextually relevant responses within the EPIC EHR system.
Researchers recommend further training to simplify the AI's language, additional studies on the impact of using private patient data, and broader adoption of physician-reviewed AI drafts to improve provider efficiency and patient experience in EHR communications.