Healthcare conversational agents are automated systems that interact with patients and healthcare providers in natural language. They differ from older automated systems in that they aim to approximate human conversation. Their goal is to deliver personalized advice, promote healthy behaviors, and support the management of conditions such as diabetes, asthma, cancer, mental health disorders, and COVID-19.
A 2023 scoping review examined 23 studies on conversational agents for personalized healthcare. The agents drew on several techniques: rule-based models, retrieval-based content delivery, AI models, and affective computing, which recognizes and responds to user emotions. Although these systems proved useful in many settings, their ability to adapt and personalize conversations remains limited, pointing to clear areas for improvement.
Generative AI refers to AI tools that create new content such as text, images, or audio. Large language models (LLMs), such as OpenAI’s ChatGPT, are the most prominent example: they generate human-sounding text conditioned on the input and context they receive.
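As a concrete illustration, the sketch below sends a patient question to an LLM through the OpenAI Python SDK. It is a minimal sketch only: the model name, system prompt, and question are illustrative choices, not recommendations from the studies discussed here, and a real deployment would add the safety checks described later in this article.

```python
# Minimal sketch: asking an LLM for patient-friendly wording.
# Assumes the official OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY
# in the environment; the model and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a healthcare assistant. Explain "
                                      "concepts plainly and advise seeing a "
                                      "clinician for any diagnosis."},
        {"role": "user", "content": "What does an HbA1c of 7.5% mean?"},
    ],
)
print(response.choices[0].message.content)
```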
Using generative AI in healthcare conversational agents has several possible benefits:
- More natural, flexible conversations, because responses are generated dynamically rather than drawn from fixed scripts.
- Deeper personalization, since replies are conditioned on each patient's input and context.
- Better user engagement through human-like, context-aware dialogue.
One recent study examined how generative AI is used in assistive technologies, including healthcare, and found that these systems help clinicians and care workers by handling repetitive tasks, supporting decisions, and improving patient communication.
But there are important concerns to weigh:
- Explainability: staff need to understand why an agent gave a particular answer before they can trust it.
- Bias: models trained on data that underrepresents some patient groups can produce unfair or incorrect recommendations.
- Safety and reliability: generated answers must be verified, with human oversight, before they influence care.
Future work should focus on standards for measuring how well AI explains its decisions and how trustworthy it is, and systems should include human checks to keep them safe.
Healthcare conversational agents must do more than deliver facts; they also need to understand and react to human emotion. Affective computing, technology that detects and responds to emotions in speech or text, makes medical conversations kinder and more useful.
Affective computing helps agents pick up on changes in mood or signs of distress, so the agent can respond in ways that convey care, improving patient engagement and adherence to medical advice. For example, some conversational agents built with specialized training methods have shown good results in mental health counseling: they provide relevant emotional support and avoid incorrect or irrelevant responses.
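To make this concrete, the sketch below shows one simple form of affect-aware behavior: scoring an incoming message for negative sentiment and escalating strongly negative messages to a human. It is an illustrative sketch, not how any of the cited systems work; it assumes Hugging Face's transformers library with its default sentiment model, and the threshold and escalation hook are placeholders.

```python
# Minimal sketch of affect-aware routing: flag messages that read as distressed
# before composing a reply. Uses the transformers sentiment-analysis pipeline
# with its default model; the 0.9 threshold and routing labels are illustrative.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")

def triage_message(text: str) -> str:
    result = sentiment(text)[0]  # e.g. {"label": "NEGATIVE", "score": 0.98}
    if result["label"] == "NEGATIVE" and result["score"] > 0.9:
        return "escalate_to_human"  # strongly negative: route for human, empathetic follow-up
    return "standard_reply"

print(triage_message("I've been feeling hopeless about managing my diabetes."))
```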
Because patient experience matters, conversational agents that respond to emotion could help healthcare providers deliver compassionate care without adding to staff workload.
AI agents are changing how physicians work by assisting with complex tasks such as diagnosis and documentation. For example, Microsoft’s AI Diagnostic Orchestrator reached 85.5% accuracy on difficult diagnostic cases, far above the roughly 20% reported for the physicians it was benchmarked against.
Beyond diagnosis, conversational agents built on generative AI can reduce physicians' documentation burden by drafting records and follow-up messages automatically. At Kaiser Permanente, AI scribes saved more than 15,000 hours of physician time across over 2.5 million patient visits in just over a year. With less paperwork, medical staff can spend more time with patients, and physicians face less burnout, a growing problem amid staff shortages and rising patient demand.
For medical practice administrators and IT staff, AI can also help with front-office phone tasks. Simbo AI, for example, builds phone automation tools based on conversational AI.
Handling phone calls in healthcare consumes substantial time and resources. Automated voice agents that understand patient questions, schedule appointments, or route callers to the right department can handle call volumes equivalent to 100 full-time workers. These systems streamline operations and ensure patients get answers even outside office hours.
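A minimal sketch of the routing step is shown below. Keyword rules stand in for the speech recognition and intent models a real voice agent would use, and the intent names and departments are hypothetical.

```python
# Minimal sketch of front-office call routing: map a transcribed utterance to a
# destination. Keyword matching is an illustrative stand-in for a real intent
# classifier; the route table is hypothetical.
ROUTES = {
    "schedule": "scheduling_desk",
    "refill": "pharmacy_line",
    "billing": "billing_department",
}

def route_call(transcript: str) -> str:
    text = transcript.lower()
    for keyword, destination in ROUTES.items():
        if keyword in text:
            return destination
    return "front_desk"  # default: hand the caller to a person

print(route_call("Hi, I'd like to schedule a follow-up appointment."))
```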
Using AI for front-office work fits the goals of many U.S. medical practices: improving administrative efficiency while maintaining good patient service. AI agents can triage calls, collect patient details, and schedule follow-ups, which reduces wait times and lightens staff workloads.
The growing use of AI marks a shift from simple automation to intelligent systems that manage complex, changing workflows. For practice leaders and IT managers in the U.S., this means adopting AI that can understand, reason, and act within clinical settings.
Key to successful adoption are modular design and secure data exchange. AI agents must work with Electronic Health Record (EHR) systems such as Epic or Cerner through standard interfaces like HL7 and FHIR, which lets them safely access patient data and update records in real time. That, in turn, enables smarter patient triage, order entry, and documentation without slowing daily work.
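The sketch below illustrates what standards-based access looks like at the code level: a read against a FHIR R4 REST endpoint. It queries the public HAPI FHIR test server (synthetic data, no PHI); connecting to a production Epic or Cerner system would additionally require SMART on FHIR OAuth2 authorization, which is omitted here.

```python
# Minimal sketch of standards-based EHR access: search for a Patient resource
# over FHIR's REST API. Uses the public HAPI FHIR R4 test server (synthetic
# data); a real Epic or Cerner integration would add OAuth2 (SMART on FHIR).
import requests

FHIR_BASE = "https://hapi.fhir.org/baseR4"  # public test server, not PHI

def first_patient() -> dict:
    resp = requests.get(
        f"{FHIR_BASE}/Patient",
        params={"_count": 1},                      # one result is enough for the demo
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    bundle = resp.json()                           # FHIR searches return a Bundle
    return bundle["entry"][0]["resource"]

patient = first_patient()
print(patient["resourceType"], patient["id"])
```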
It is also essential to comply with healthcare regulations such as HIPAA and GDPR when AI handles protected patient data. Systems need role-based access controls, data encryption, and audit logging from the start.
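The sketch below shows the shape of two of those controls: a role-based permission check and an audit entry for every access attempt. The roles, actions, and logging destination are illustrative placeholders; real compliance work also covers encryption at rest and in transit, retention policies, and much more.

```python
# Minimal sketch of HIPAA-minded guardrails: role-based access checks plus an
# audit trail for every attempted read of patient data. Roles, actions, and
# the log destination are hypothetical placeholders.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

PERMISSIONS = {
    "physician": {"read_chart", "write_note"},
    "front_desk": {"read_schedule"},
}

def access_record(user: str, role: str, action: str, patient_id: str) -> bool:
    allowed = action in PERMISSIONS.get(role, set())
    audit_log.info(
        "%s user=%s role=%s action=%s patient=%s allowed=%s",
        datetime.now(timezone.utc).isoformat(), user, role, action, patient_id, allowed,
    )
    return allowed

print(access_record("dr_smith", "physician", "read_chart", "12345"))     # True
print(access_record("reception1", "front_desk", "read_chart", "12345"))  # False
```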
Companies like Lumeris have demonstrated multi-agent AI platforms that handle many hospital tasks, such as patient triage and appointment scheduling, while ensuring humans review AI outputs to catch mistakes. This human-in-the-loop design balances the benefits of automation with safety and trust.
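One simple way to express the human-in-the-loop pattern in code is a review queue: AI drafts accumulate, and nothing reaches a patient until a clinician approves it. The sketch below is illustrative only, not Lumeris's design; a production version would persist state, record reviewer identity, and support rejection and editing.

```python
# Minimal sketch of a human-in-the-loop gate: AI drafts enter a review queue
# and are released only after explicit approval. Data model and flow are
# illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class Draft:
    patient_id: str
    text: str
    approved: bool = False

@dataclass
class ReviewQueue:
    pending: list[Draft] = field(default_factory=list)

    def submit(self, draft: Draft) -> None:
        self.pending.append(draft)  # nothing reaches the patient from here

    def approve_next(self) -> Draft | None:
        if not self.pending:
            return None
        draft = self.pending.pop(0)
        draft.approved = True       # only now may it be sent
        return draft

queue = ReviewQueue()
queue.submit(Draft("12345", "Your lab results look stable; continue current dosing."))
released = queue.approve_next()
print(released.approved if released else "queue empty")
```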
Scaling AI also requires cloud infrastructure and standardized performance benchmarks so that results stay consistent. For U.S. medical practices, the AI tools most likely to succeed are those that fit easily into existing IT and demonstrate clear benefits.
Despite its promise, generative AI in healthcare conversational agents carries serious risks around trust and safety. Medical practices need to vet AI vendors carefully and ask for clear information on how these agents produce their answers.
Explainability is key to trust. Medical staff must understand why an AI made a particular recommendation or provided specific information in order to judge its reliability. Without that, clinicians may be reluctant to use the technology in daily practice.
Bias is another concern. If AI systems learn from data that does not represent all patient groups fairly, they may make unfair or incorrect recommendations, worsening healthcare inequities rather than reducing them.
U.S. researchers are working to identify sources of bias and ways to reduce them. AI that adapts to each patient's situation could address some of these problems, but thorough testing, ethical review, and human oversight are still needed before these tools are used widely.
Research in healthcare conversational agents suggests focusing on these areas:
- Holistic user modeling, so agents can adapt to diverse needs and contexts rather than following constrained dialogue structures.
- Safe integration of generative AI for dynamic, context-aware response generation.
- Affective computing that supports empathetic, emotionally aware dialogue.
- Standards for evaluating explainability and trustworthiness, with human oversight built in.
Healthcare conversational agents powered by generative AI have the potential to change patient interactions and support medical work in the United States. These systems are becoming more flexible, more context-aware, and better at human-like conversation. They can help manage disease, streamline front-office work, and reduce physician workload.
But challenges remain in making sure these tools operate safely, transparently, and fairly. U.S. medical practice leaders and IT managers should prioritize regulatory compliance, explainable AI output, and human oversight. Careful adoption of these technologies can improve care while managing staffing demands.
By watching research progress and working with trustworthy AI providers, healthcare groups can use conversational agents to meet patient needs, follow laws, and improve efficiency in a complex healthcare system.
Conversational agents (CAs) are automated systems designed to interact with users through human-like dialogue. They provide personalized healthcare interventions by delivering tailored advice, supporting self-management of diseases, and promoting healthy habits, thus improving health outcomes sustainably.
Healthcare CAs primarily assist patients dealing with diabetes, mental health issues, cancer, asthma, COVID-19, and other chronic conditions. They also focus on enhancing healthy behaviors to prevent disease onset or progression.
Key features include system flexibility in conversations, personalization of interaction based on user data, and affective characteristics such as recognizing and responding to user emotions to make interactions more engaging.
Development techniques include rule-based models (used in 7 studies), retrieval-based techniques for content delivery (11 studies), AI models (5 studies), and integration of affective computing (6 studies) to enhance personalization and emotional responsiveness.
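To clarify the distinction between the two most common techniques in the review, the sketch below contrasts a rule-based reply (a fixed rule that fires on a trigger) with a retrieval-based one (picking the closest canned answer from a small corpus). The tiny corpus and the word-overlap scoring are illustrative stand-ins for a real retriever, not how any reviewed system works.

```python
# Minimal sketch contrasting rule-based and retrieval-based replies.
# The corpus, the rule, and the overlap scoring are all illustrative.
CORPUS = {
    "How do I check my blood sugar?":
        "Wash your hands, insert a test strip, and apply a small drop of blood.",
    "What is a normal blood pressure?":
        "For many adults, under 120/80 mmHg is considered normal.",
}

def rule_based(user_text: str) -> str | None:
    if "emergency" in user_text.lower():
        return "If this is an emergency, call 911 now."  # fixed rule fires first
    return None

def retrieval_based(user_text: str) -> str:
    words = set(user_text.lower().split())
    # Pick the stored question sharing the most words with the user's text.
    best = max(CORPUS, key=lambda q: len(words & set(q.lower().split())))
    return CORPUS[best]

query = "How should I check my blood sugar at home?"
print(rule_based(query) or retrieval_based(query))
```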
Dialogue structures and personalization remain limited due to constrained adaptability to diverse user needs and contexts. Many systems still lack holistic user modeling and dynamic response generation, which restricts their ability to conduct truly human-like conversations.
Affective computing enables CAs to detect and respond to user emotions, improving engagement and adherence by providing empathetic, context-aware interactions that mimic human empathy and support user emotional needs during healthcare dialogues.
Generative AI can enable more natural, flexible, and context-aware conversations by producing human-like responses dynamically, supporting deeper personalization and better user engagement while addressing challenges related to safety and reliability.
A scoping review following the PRISMA Extension for Scoping Reviews was conducted, with systematic searches in Web of Science, PubMed, Scopus, and IEEE databases. Screening and characterization of relevant studies focused on personalized automated CAs within healthcare.
The research targets designers and developers of healthcare CAs, computational scientists, behavioral scientists, and biomedical engineers aiming to develop and improve personalized healthcare interventions using conversational agents.
Future research should integrate holistic user description methods and focus on safely implementing generative AI models and affective computing to unlock more adaptive, empathetic, and personalized healthcare conversations with users.