Empathy matters in healthcare. It helps patients feel heard and cared for; in face-to-face medical visits, it builds trust, improves communication, and can make patients more likely to follow medical advice. As AI spreads through healthcare, developers try to build empathy into chatbots to capture some of these benefits.
But recent research points to limits in how AI conveys empathy. In a 2024 study titled "Artificial empathy in healthcare chatbots: Does it feel authentic?", Lennart Seitz found that when chatbots express feelings the way humans do, users often perceive the display as fake. That perception lowers trust in the chatbot, even when it seems warm and kind.
Perceived authenticity means users feel the chatbot's responses are genuine rather than scripted. When a chatbot tries too hard to seem human, users notice, and trust drops.
By contrast, chatbots that give instrumental support (practical help such as sharing information, sending reminders, or giving advice) fit what users expect AI to do. This behavioral form of empathy, called "empathetic helping," avoids faked feelings and encourages engagement.
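To make the distinction concrete, here is a minimal Python sketch. Both reply strings are invented for illustration; they are not taken from Seitz's study or any real product.

```python
# Two empathy styles for the same patient message (invented examples).

# Experiential empathy: "feeling with" the patient. Per Seitz (2024),
# this style risks being perceived as inauthentic coming from a machine.
EXPERIENTIAL = "I'm so sorry to hear that. I can imagine how worried you must feel."

# Empathetic helping: briefly acknowledge, then offer concrete assistance.
INSTRUMENTAL = (
    "That sounds uncomfortable. I can book you an appointment today "
    "or send you our self-care checklist. Which would you prefer?"
)

def respond(style: str = "instrumental") -> str:
    """Return a reply in the chosen empathy style (toy example)."""
    return INSTRUMENTAL if style == "instrumental" else EXPERIENTIAL

print(respond())  # the instrumental reply aligns with users' tool schema
```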
Healthcare workers and IT managers in the United States face particular challenges when deploying AI chatbots: patients expect honest, clear, and reliable communication about their health.
Seitz's 2024 research shows that while some empathy makes a chatbot seem warmer, warmth alone does not build trust unless the chatbot also feels authentic. A chatbot that acts overly personal or pretends to have feelings can come across as fake, leaving users doubtful and less willing to use it.
That doubt has practical costs. Lower trust means fewer patients will use the chatbot, patients may leave dissatisfied, and medical staff lose the time savings automation was meant to deliver. In the U.S., where rules on patient privacy and data security are strict, chatbots must not behave in ways that seem deceptive or dishonest.
Medical offices should therefore choose chatbots, such as those from Simbo AI, with care. The chatbot should be neither cold nor overly emotional; it should give clear, helpful answers in a friendly tone.
Studies show that chatbots focused on instrumental support are perceived as more authentic: users see them as tools built to help, not as beings pretending to feel, which matches what people expect from AI.
Beyond simple chatbots, generative AI is beginning to offer more complex emotional support in healthcare. In 2025, Riccardo Volpato, Lisa DeBruine, and Simone Stumpf reviewed the trust issues generative AI raises when it provides emotional support in health contexts.
They describe trust as complicated: for AI to provide emotional support well, users must perceive it as predictable, transparent, and reliable. Because AI is not truly conscious, earning that trust in emotional support is hard.
The authors argue for balancing polite, empathetic wording with clear, useful help. That balance matters especially for U.S. healthcare chatbots, where patients and doctors expect strict privacy compliance and accurate advice.
AI chatbots cannot genuinely feel human emotions. Vendors and practice leaders should avoid chatbots that strain to act human and instead emphasize clear messages, helpful actions, and a polite tone to earn trust.
Trust also improves when patients know how the AI works, what it can do, and when a human is available. Chatbots should state their limits plainly and offer a path to a person.
Adding features such as appointment reminders, insurance assistance, symptom checks, or prescription alerts provides concrete support. This matches what users expect from technology and helps build trust.
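A hedged sketch of how such instrumental support might be routed is below. The intent names, reply templates, and fallback wording are all assumptions for illustration, not Simbo AI's actual design; note that the fallback also discloses the bot's limits and offers a human, as recommended above.

```python
# Hypothetical intent router for instrumental support. Intent names and
# templates are invented; this does not describe Simbo AI's actual API.

INSTRUMENTAL_REPLIES = {
    "appointment_reminder": "Your next appointment is {when}. Reply CHANGE to reschedule.",
    "insurance_help": "I can check your coverage. Who is your insurance provider?",
    "symptom_check": "I can run a basic symptom check, but I am not a clinician.",
    "rx_alert": "Your prescription {drug} is ready at {pharmacy}.",
}

def route(intent: str, **slots: str) -> str:
    """Answer known service intents; otherwise disclose limits and hand off."""
    template = INSTRUMENTAL_REPLIES.get(intent)
    if template is None:
        # Transparency plus a human fallback, per the trust guidance above.
        return ("I'm an automated assistant and can't help with that. "
                "Let me connect you with our front-office staff.")
    return template.format(**slots)

print(route("appointment_reminder", when="Tuesday at 3:00 PM"))
print(route("billing_dispute"))  # unknown intent -> disclosure and handoff
```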
Practice staff should review chatbot responses regularly to keep them professional and helpful; patient feedback can guide adjustments to tone and features that make users more comfortable.
Make sure the AI complies with HIPAA and other rules on patient data privacy and security. Trust breaks down quickly if data is mishandled.
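As one illustration of the principle, the sketch below masks common PHI-like patterns before a transcript is logged. This is a toy example built on assumptions: real HIPAA compliance also requires access controls, encryption, and business associate agreements, none of which a regex can provide.

```python
import re

# Toy PHI filter: mask common identifier patterns before logging a transcript.
PHI_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                    # US SSNs
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),      # phone numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"), "[EMAIL]"),      # email addresses
]

def redact(text: str) -> str:
    """Replace PHI-like substrings with placeholder tags before logging."""
    for pattern, tag in PHI_PATTERNS:
        text = pattern.sub(tag, text)
    return text

print(redact("Call me at 555-123-4567 or jane@example.com."))
# -> "Call me at [PHONE] or [EMAIL]."
```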
U.S. healthcare organizations need to adopt AI systems such as Simbo AI's tools without compromising patient care; phone automation and AI answering should fit smoothly into front-office workflows.
Simbo AI's automation can take routine calls, schedule appointments, and handle medication refills while staying clear and respectful, freeing staff to focus on more complex patient needs.
Administrative work in U.S. medical offices is heavy: answering calls, verifying insurance, arranging referrals. An AI answering service that offers practical help reduces wait times and errors, which lowers staff stress.
Simbo AI's tools can connect with patient portals, electronic health records, and other systems. This keeps messages consistent, allows quick updates on patient requests, and helps the office work as one team.
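One common integration pattern is a FHIR query against the EHR so an answering service can quote accurate appointment details. The sketch below is a generic illustration with a placeholder endpoint and token; it is not Simbo AI's actual integration.

```python
import requests  # third-party HTTP library (pip install requests)

FHIR_BASE = "https://ehr.example.com/fhir"  # placeholder FHIR R4 endpoint

def upcoming_appointments(patient_id: str, token: str) -> list:
    """Fetch booked Appointment resources for a patient via a FHIR search."""
    resp = requests.get(
        f"{FHIR_BASE}/Appointment",
        params={"patient": patient_id, "status": "booked"},
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    bundle = resp.json()  # a FHIR Bundle of search results
    return [entry["resource"] for entry in bundle.get("entry", [])]
```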
AI systems can record calls and chats, preserving an accurate record of patient communication and supporting compliance with federal and state healthcare laws.
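Below is a minimal sketch of what such a record might look like, assuming transcripts are already PHI-redacted upstream (for example, by a filter like the one shown earlier); the field names are invented for illustration.

```python
import json
import time
import uuid

def audit_record(session_id: str, role: str, message: str) -> str:
    """Serialize one chat turn as a timestamped JSON line for audit logs."""
    return json.dumps({
        "id": str(uuid.uuid4()),                                    # unique record id
        "session": session_id,
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),  # UTC timestamp
        "role": role,                                               # "patient" or "assistant"
        "text": message,                                            # assumed pre-redacted
    })

# Append one line per turn to an append-only log file.
with open("chat_audit.log", "a") as log:
    log.write(audit_record("sess-123", "patient", "I need a refill.") + "\n")
```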
As patient volumes grow, especially in cities and in areas with few doctors, automated AI lets offices handle more communication without a proportional increase in staff.
How users perceive authenticity in healthcare chatbots shapes how much they trust them, a point that matters for medical managers and IT teams in the United States. Research by Lennart Seitz and others shows that while empathy can make chatbots seem warmer, faked emotion lowers trust; chatbots should focus on practical, helpful support instead.
For offices using AI such as Simbo AI's phone automation and answering services, being transparent, setting the right expectations, and embedding AI in existing workflows can improve efficiency and patient care. Doing so helps healthcare workers use AI to support communication, lighten workloads, meet regulatory requirements, and keep the trust of patients and staff.
The main challenge is that experiential expressions of empathy may feel inauthentic to users, which can have unintended negative consequences, such as reducing trust and engagement with the chatbot.
Perceived authenticity is crucial; when chatbots display empathetic or sympathetic responses, their authenticity decreases, which suppresses the positive effect empathy usually has on trust and intentions to use the chatbot.
The studies compared empathetic (feeling with), sympathetic (feeling for), behavioral-empathetic (empathetic helping), and non-empathetic responses to evaluate their impact on perceived warmth, authenticity, and trust.
Instrumental support aligns better with users’ computer-like schema of chatbots, making it feel more authentic and avoiding the backfiring effects caused by inauthentic experiential empathy.
Empathy does not apply equally to human-bot interactions; unlike human-human interactions, where empathy enhances authenticity and trust, chatbot empathy can reduce perceived authenticity and trust.
Perceived warmth is users’ impression of friendliness and care. Any kind of empathy in chatbots increases perceived warmth, which generally supports trust but is moderated by authenticity perceptions.
Reduced perceived authenticity suppresses the positive effects of empathy on trust and usage intentions, potentially diminishing chatbot effectiveness in healthcare settings.
Two experimental studies with healthcare chatbots assessed how different empathetic responses influenced perceived warmth, authenticity, trust, and usage intentions, followed by a third study on human-human interactions for comparison.
Designers should avoid relying on experiential empathy expressions and instead focus on providing instrumental support to foster authenticity, trust, and effective user engagement with healthcare AI agents.
The research introduces ‘perceived authenticity’ as a distinct factor influencing the effectiveness of empathetic behaviors in chatbots, highlighting that human-like empathy may backfire without authentic perception.