Among many AI applications, healthcare AI agents—such as chatbots and automated phone answering systems—play a growing role in front-office functions.
These agents schedule appointments, provide health information, and answer patient questions, helping medical practices reduce staff workload, improve patient flow, and increase patient satisfaction.
However, designing AI agents for healthcare is not straightforward. A central question is how, and whether, AI should express empathy when interacting with patients.
Research led by Lennart Seitz and published by Elsevier in 2024 shows that healthcare AI agents programmed with experiential empathy (expressing emotions such as “feeling with” or “feeling for” a patient) often come across as fake, which can make patients trust the AI less and use it less.
In contrast, healthcare chatbots that offer instrumental support (practical help that conveys care without pretending to feel emotions) earn more trust and see better user engagement.
This article discusses designing healthcare AI agents around instrumental support rather than experiential empathy. It also examines how these findings apply to healthcare providers in the U.S., showing practical ways AI can fit into front-office work to improve efficiency and patient experience.
Empathy is a core part of human communication in healthcare. Doctors, nurses, and office staff who show empathy can calm anxious patients, encourage patients to follow medical advice, and improve health outcomes. When AI agents are introduced into healthcare, developers often try to replicate these human qualities through empathetic responses.
Seitz’s research shows that when healthcare chatbots try to express empathy by imitating human emotions (experiential empathy), users often perceive the responses as fake or even unhelpful, which lowers the chatbot’s perceived authenticity. When the AI seems less authentic, people trust it less and are less willing to use it. Since trust is central to healthcare communication, this loss of authenticity can undermine how well the AI works.
The research distinguishes three kinds of empathy expression in AI agents:
- Experiential (empathetic) responses, which express “feeling with” the patient
- Sympathetic responses, which express “feeling for” the patient
- Behavioral-empathetic responses, which express empathy through practical helping
The experiments showed that experiential and sympathetic empathy make the AI seem warmer and friendlier, but also less authentic. Behavioral-empathetic responses, which focus on practical help, preserve the AI’s perceived authenticity and build trust.
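To make the distinction concrete, here is a minimal sketch of how the three response styles might differ for the same patient message. The wording of each template is an illustrative assumption, not taken from the study.

```python
# Illustrative response templates for the three empathy styles described above.
# The patient message and the wording of each reply are hypothetical examples.

patient_message = "I'm really nervous about my procedure tomorrow."

responses = {
    # Experiential empathy ("feeling with"): the bot claims to share the feeling.
    # The research found this style is often perceived as inauthentic.
    "experiential": "I feel your anxiety too. I'm nervous along with you.",

    # Sympathetic empathy ("feeling for"): the bot expresses concern.
    "sympathetic": "I'm so sorry you feel this way. That must be really hard.",

    # Behavioral empathy / instrumental support: practical help, no feigned emotion.
    # This style preserved perceived authenticity and trust in the study.
    "behavioral": (
        "Here is what to expect tomorrow: arrive 30 minutes early and avoid "
        "eating after midnight. Would you like me to text you these "
        "instructions or connect you with a nurse?"
    ),
}

for style, reply in responses.items():
    print(f"{style}: {reply}")
```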
These findings matter for healthcare leaders and IT managers considering patient-facing AI assistants in the U.S. Patients generally want healthcare AI to be helpful, reliable, and clear; an AI that seems overly emotional or human-like can erode trust and discourage use.
Healthcare office managers and owners in the U.S. face many challenges: high patient volumes, the need for clear communication, and often small front-office teams. Well-designed AI agents that answer calls, handle common questions, or book appointments can ease that load.
According to the research on chatbot empathy, U.S. healthcare organizations should adopt AI systems that focus on instrumental support rather than trying to make the AI seem emotional. Focusing on practical help fits what patients expect from a computer agent. For example, an AI agent can:
- Book, confirm, or reschedule appointments
- Answer common questions about office hours, directions, or insurance
- Provide clear, factual health and visit-preparation information
- Route complex or sensitive requests to human staff
In these ways, the AI provides help that users perceive as authentic, trustworthy, and useful.
U.S. healthcare providers also need to account for cultural and demographic differences when designing AI agents, because empathy is expressed differently across groups. Instrumental support is easier to keep consistent and to personalize with data, which supports good experiences for diverse patient populations.
Clear, non-emotional communication also helps healthcare systems comply with rules such as HIPAA, which protect patient privacy and require clear communication. An AI that seems too human can confuse users about what the assistant actually is, leading to misunderstandings about medical advice or how data is shared.
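One simple design safeguard is to have the agent identify itself as an AI at the start of every interaction. Below is a minimal sketch of such a disclosure; the practice name and wording are hypothetical placeholders.

```python
# Hypothetical opening message that makes the agent's non-human nature explicit,
# so callers are never confused about what they are talking to.

PRACTICE_NAME = "Example Family Clinic"  # placeholder, not a real practice

def build_greeting(practice_name: str) -> str:
    """Return an opening message that discloses the agent is an AI."""
    return (
        f"Hello, you've reached {practice_name}. I'm an automated AI "
        "assistant. I can book appointments, answer common questions, or "
        "connect you with a staff member. How can I help?"
    )

print(build_greeting(PRACTICE_NAME))
```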
Instrumental support in healthcare AI means giving practical help and answers that meet patient needs without trying to act like a human with real feelings.
This means the AI focuses on:
- Answering questions accurately and in plain language
- Completing concrete tasks such as booking or confirming appointments
- Giving clear next steps and preparation instructions
- Escalating to human staff when a request is complex or sensitive
By doing this, AI agents offer care support that improves both how warm the AI seems and how authentic it appears. Patients see the AI as helpful and reliable, and are more willing to keep using it.
For hospital leaders, AI systems focused on practical support avoid the problems caused by inauthentic or misplaced empathetic replies, which can lengthen calls and frustrate patients.
The concept of “perceived authenticity” is central here: whether users feel the AI is genuine and honest in its communication. Seitz’s research measured perceived authenticity directly, and maintaining it supports trust and makes patients willing to use the AI again. Unlike human-to-human conversation, where empathy usually builds trust naturally, AI empathy must be designed carefully so it does not come across as fake.
Building AI around instrumental support also makes it easier to connect with clinical systems such as electronic health records (EHRs). For example, the AI can verify patient identities, look up appointments, or update insurance details quickly, freeing staff for more complex tasks.
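Here is a minimal sketch of what such an EHR-backed task might look like. The client class, its methods, and the field names are hypothetical placeholders; a real integration would use the vendor's actual API (for example, a FHIR interface) with proper authentication and audit logging.

```python
from dataclasses import dataclass

@dataclass
class Appointment:
    patient_id: str
    date: str
    provider: str

class MockEHRClient:
    """Stand-in for a real EHR client; all methods return canned data."""

    def verify_identity(self, name: str, dob: str) -> str | None:
        # A real system would match these against the patient record securely.
        return "patient-123" if name and dob else None

    def lookup_appointments(self, patient_id: str) -> list[Appointment]:
        return [Appointment(patient_id, "2025-07-01 09:30", "Dr. Lee")]

def handle_appointment_question(ehr: MockEHRClient, name: str, dob: str) -> str:
    """Instrumental support: verify the caller, then answer with concrete facts."""
    patient_id = ehr.verify_identity(name, dob)
    if patient_id is None:
        return "I couldn't verify your identity. Let me connect you with staff."
    appointments = ehr.lookup_appointments(patient_id)
    if not appointments:
        return "You have no upcoming appointments. Would you like to book one?"
    nxt = appointments[0]
    return f"Your next appointment is on {nxt.date} with {nxt.provider}."

print(handle_appointment_question(MockEHRClient(), "Jane Doe", "1980-01-01"))
```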
AI has significant potential to automate front-office tasks in U.S. healthcare organizations. Administrative jobs such as answering phone calls, triaging patient requests, and managing appointments consume a large share of staff time.
AI agents, like those from companies such as Simbo AI, focus on automating phone services and answering systems to handle these jobs well.
AI workflow automation can handle tasks such as:
- Answering routine phone calls around the clock
- Triaging and routing patient requests to the right staff member or workflow (a simple routing sketch follows this list)
- Scheduling, confirming, and rescheduling appointments
- Sending appointment reminders and follow-up messages
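As a simple illustration of the triage step, here is a sketch of keyword-based intent routing. The intents, keywords, and destination names are hypothetical; production systems would typically use a trained intent classifier rather than keyword matching.

```python
# Hypothetical keyword-based router for incoming patient requests.
# Real deployments would use a trained intent classifier; this sketch
# only illustrates the routing structure and the human-escalation default.

ROUTES = {
    "schedule": "appointment_workflow",  # booking and rescheduling
    "refill": "pharmacy_queue",          # prescription refill requests
    "billing": "billing_department",     # invoices and insurance questions
}

def route_request(message: str) -> str:
    """Map a patient message to a workflow; default to a human agent."""
    text = message.lower()
    for keyword, destination in ROUTES.items():
        if keyword in text:
            return destination
    # Anything unrecognized or sensitive goes to a person.
    return "human_staff"

print(route_request("I need to schedule a follow-up visit"))  # appointment_workflow
print(route_request("I'm in severe pain"))                    # human_staff
```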
Because U.S. healthcare rules, patient groups, and workflows differ across states, AI automation has to be flexible and meet local laws.
AI focused on instrumental support fits well in these settings. It does not try to replace human empathy but helps with office efficiency while making clear what the AI can and cannot do.
Using AI for front-office tasks lets medical practices lower administrative costs, improve patient satisfaction through faster responses, and free staff to spend more time on direct patient care.
Healthcare leaders, owners, and IT managers planning to adopt AI should consider these guidelines:
- Prioritize instrumental support (practical, task-focused help) over simulated emotional empathy
- Make clear to patients that they are interacting with an AI, not a person
- Integrate the AI with existing systems such as EHRs and scheduling tools
- Ensure all communication complies with privacy rules such as HIPAA
- Route complex, sensitive, or emotional situations to human staff
- Track trust, satisfaction, and usage to refine the AI over time
As AI technology improves and healthcare providers work to meet growing demand for responsive, efficient service, focusing AI on instrumental support is the clearer and more reliable path. Seitz’s 2024 study shows that emphasizing practical, behavior-based help instead of emotional imitation leads patients to see the AI as authentic and trustworthy.
Companies that make phone automation and answering systems—like Simbo AI—can offer healthcare practices AI tools that reduce front-office work, improve patient communication, and follow U.S. healthcare rules.
With good design and careful integration into workflows, healthcare AI agents can become valuable assistants for patient interactions.
Using AI in healthcare front offices will succeed if AI meets patient needs without pretending to be human.
Instrumental support offers a clear way to balance this and improve healthcare service in the United States.
This article gives healthcare administrators, owners, and IT managers in the U.S. guidance to choose and use AI agents that focus on practical, real interactions instead of copying complex human emotions.
Focusing on instrumental support keeps user trust and matches AI abilities with the real work done in healthcare front offices.
In summary, the key points from the research are:
- The main challenge is that experiential expressions of empathy may feel inauthentic to users, with unintended negative consequences such as reduced trust in and engagement with the chatbot.
- Perceived authenticity is crucial: when chatbots display empathetic or sympathetic responses, their authenticity decreases, which suppresses the positive effect empathy usually has on trust and intentions to use the chatbot.
- The studies compared empathetic (feeling with), sympathetic (feeling for), behavioral-empathetic (empathetic helping), and non-empathetic responses to evaluate their impact on perceived warmth, authenticity, and trust.
- Instrumental support aligns better with users’ computer-like schema of chatbots, making it feel more authentic and avoiding the backfiring effects caused by inauthentic experiential empathy.
- Empathy does not apply equally to human-bot interactions: unlike human-human interactions, where empathy enhances authenticity and trust, chatbot empathy can reduce perceived authenticity and trust.
- Perceived warmth is users’ impression of friendliness and care. Any kind of empathy in chatbots increases perceived warmth, which generally supports trust but is moderated by authenticity perceptions.
- Reduced perceived authenticity suppresses the positive effects of empathy on trust and usage intentions, potentially diminishing chatbot effectiveness in healthcare settings.
- Two experimental studies with healthcare chatbots assessed how different empathetic responses influenced perceived warmth, authenticity, trust, and usage intentions, followed by a third study on human-human interactions for comparison.
- Designers should avoid relying on experiential empathy expressions and instead focus on providing instrumental support to foster authenticity, trust, and effective user engagement with healthcare AI agents.
- The research introduces ‘perceived authenticity’ as a distinct factor influencing the effectiveness of empathetic behaviors in chatbots, highlighting that human-like empathy may backfire without authentic perception.