Conversational agents are computer programs that use natural language processing (NLP) to converse with users much as a person would. In healthcare, they take many forms: chatbots on websites, voice assistants on phone calls, virtual nursing or coaching apps, and systems that support patient triage and health monitoring.
A review by Madison Milne-Ives and colleagues at the University of Oxford examined 31 studies of conversational agents published since 2008. These included 14 chatbots (2 voice-based), 6 embodied conversational agents such as interactive voice systems and virtual patients, and question-answering and triage systems. Most studies reported positive or mixed results for effectiveness and usability.
Usability describes how easily people can use a system to achieve their goals. The review found that 27 of 30 studies reported good usability for conversational healthcare agents, and 26 of 31 studies found users were satisfied with these AI tools.
These results suggest that many patients and healthcare workers found conversational agents helpful and easy to access. The ability of natural language processing to interpret speech or text was central to this: systems that replied quickly and accurately made patient-provider communication smoother.
Still, some feedback revealed problems. Patients sometimes reported that the AI did not fully understand complicated medical questions or gave answers that sounded scripted, and healthcare staff struggled to fit AI agents into existing workflows and electronic medical records (EMRs). These issues produced mixed perceptions even though usability scores were mostly good.
From a design standpoint, conversational healthcare AI faces several challenges that affect adoption, including limits in language understanding, privacy and security requirements, and integration with existing clinical workflows.
Healthcare AI agents are used to support a range of tasks, including behavior change, treatment support, health monitoring, training, triage, and screening.
In the Milne-Ives review, 23 of 30 studies reported positive or mixed results for these tasks. Automating such activities frees healthcare staff to focus on complex cases that require human judgment. However, some user feedback pointed to limits in how well the agents performed these jobs, showing there is room to improve.
Medical practice leaders in the U.S. recognize that adopting AI conversational agents involves more than buying a chatbot: the real value lies in how well these tools fit daily work and support frontline staff.
U.S. medical practices operate under strict privacy laws such as the Health Insurance Portability and Accountability Act (HIPAA), which make data security essential. Trust in conversational agents depends heavily on transparent handling of patient data and clear communication about how that data is stored and used.
Reviews show that research should keep focusing on improving privacy and security in healthcare AI. U.S. providers must follow federal and state rules when choosing AI systems and putting them into practice.
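One practical expression of this privacy focus is masking obvious patient identifiers before conversation transcripts are stored or logged. The sketch below is a minimal, assumed illustration of that idea; the pattern names and regexes are simplified placeholders, and real HIPAA de-identification covers many more identifier categories than a few regex rules can catch.

```python
import re

# Illustrative sketch: mask a few obvious identifier formats before logging
# a transcript. The PATTERNS dict and its labels are hypothetical examples,
# not a complete or compliant de-identification method.
PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a bracketed placeholder label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

In practice a step like this would sit between the agent and any logging or analytics layer, so stored transcripts never contain the raw identifiers.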
Although current conversational agents show promise in healthcare, researchers and managers see room for improvement, including better study design and reporting quality, evaluation of cost-effectiveness, and stronger privacy and security protections.
These changes will help more U.S. medical practices use conversational AI, benefiting patients and healthcare staff.
Healthcare leaders in the U.S. should weigh the design and usability limitations of conversational AI agents against the benefits they offer. Understanding current issues, such as limits in language understanding, privacy requirements, and workflow fit, helps in selecting the right AI tools for a practice. Keeping up with new research and vendor updates helps ensure these investments improve patient care, staff efficiency, and overall quality.
Conversational healthcare AI agents support behavior change, treatment support, health monitoring, training, triage, and screening tasks. These tasks, when automated, can free clinicians for complex work and increase public access to healthcare services.
The review aimed to assess the effectiveness and usability of conversational agents in healthcare and identify user preferences and dislikes to guide future research and development.
The review searched PubMed, Medline (Ovid), EMBASE, CINAHL, Web of Science, and the Association for Computing Machinery Digital Library for articles since 2008.
Agents included 14 chatbots (2 voice), 6 embodied conversational agents (incorporating voice calls, virtual patients, speech screening), 1 contextual question-answering agent, and 1 voice recognition triage system.
Most studies (23/30) reported positive or mixed effectiveness, and usability and satisfaction metrics were strong in 27/30 and 26/31 studies respectively.
Qualitative feedback showed user perceptions were mixed, with specific limitations in usability and effectiveness highlighted, indicating room for improvement.
Future studies should improve design and reporting quality to better evaluate usefulness, address cost-effectiveness, and ensure privacy and security.
NLP enables unconstrained natural language conversations, allowing agents to understand and respond to user inputs in a human-like manner, critical for effective healthcare interaction.
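The interaction loop behind such agents can be sketched very roughly as mapping a free-text utterance to an intent and a canned response. The toy example below is an assumed illustration only; the intent names, keyword lists, and responses are hypothetical, and production systems use trained NLP models rather than keyword rules.

```python
# Toy sketch of intent matching in a healthcare chatbot. All intent names,
# keywords, and responses are hypothetical examples for illustration.
INTENTS = {
    "schedule_appointment": ["appointment", "schedule", "book a visit"],
    "refill_request": ["refill", "prescription", "medication"],
    "symptom_triage": ["pain", "fever", "symptom"],
}

RESPONSES = {
    "schedule_appointment": "I can help you book a visit. What day works for you?",
    "refill_request": "I can start a refill request. Which medication do you need?",
    "symptom_triage": "I'm sorry you're unwell. Can you describe your symptoms?",
    "fallback": "I didn't catch that. Could you rephrase, or ask for a staff member?",
}

def classify(utterance: str) -> str:
    """Return the first intent whose keywords appear in the utterance."""
    text = utterance.lower()
    for intent, keywords in INTENTS.items():
        if any(kw in text for kw in keywords):
            return intent
    return "fallback"

def reply(utterance: str) -> str:
    """Map an utterance to its intent's response."""
    return RESPONSES[classify(utterance)]
```

The fallback branch matters in healthcare settings: when the agent cannot interpret the input, handing off to a human avoids the scripted-sounding or off-target replies that users criticized in the review.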
The review was funded by the Sir David Cooksey Fellowship at the University of Oxford; though some authors worked for a voice AI company, they had no editorial influence on the paper.
Keywords include artificial intelligence, avatar, chatbot, conversational agent, digital health, intelligent assistant, speech recognition, virtual assistant, virtual coach, virtual nursing, and voice recognition software.