Crisis communication in healthcare means delivering urgent information to patients, staff, and other stakeholders during critical events such as medical emergencies, system outages, or unexpected service disruptions. At most hospitals and clinics, human staff answer the phones and handle this communication. During emergencies, however, those staff face a surge of calls and must make rapid decisions under pressure.
AI chatbots, especially those built on generative AI such as ChatGPT, have been introduced to answer phone calls automatically. These chatbots can deliver clear instructions or adapt their messages as the situation changes, which reduces delays and helps healthcare staff respond to patient questions more effectively.
A central finding of the study is the importance of perceived competence: how capable and reliable people believe the chatbot to be. This perception shapes how patients and other stakeholders judge the organization's handling of crisis communication.
The study examines two types of communication used by AI chatbots in crises:
- Instructing information: direct guidance on what actions to take.
- Adjusting information: updates or modifications to existing messages based on current circumstances.
The study found that when healthcare organizations fail to fulfill crisis-related requests, chatbots providing instructing information left people more satisfied and less inclined to blame the organization. This approach came across as more competent because it offered clear steps at a moment when other support fell short.
When requests succeeded, chatbots providing adjusting information earned higher satisfaction. These chatbots were seen as more competent at handling real-time changes and keeping communication clear during the crisis.
The study involved 709 participants across several experiments. It found that AI chatbots, especially when perceived as competent, can outperform human agents in some crisis-communication scenarios or work effectively alongside them.
The research suggests using chatbots for first-line responses and simple guidance, especially when call volumes are high, while routing harder or more sensitive cases to human staff.
Medical offices in the U.S. face a difficult job managing patient calls during emergencies. Front-office staff are quickly overwhelmed by call surges during flu season, public health alerts, or system outages. AI chatbots can help by triaging calls, answering immediately, and cutting patient wait times.
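As a rough illustration of that kind of triage, the sketch below routes calls between a chatbot and human staff. It is a minimal example under stated assumptions: the intent labels, the confidence threshold, and the premise that an upstream speech-to-text and intent model has already classified the call are all hypothetical, not part of the study.

```python
from dataclasses import dataclass

# Hypothetical intent labels; a real deployment would define these
# with clinical and front-office staff.
SIMPLE_INTENTS = {"office_hours", "directions", "refill_status", "closure_notice"}
SENSITIVE_INTENTS = {"chest_pain", "medication_error", "billing_dispute"}

@dataclass
class Call:
    caller_id: str
    intent: str        # assigned upstream by speech-to-text + intent model
    confidence: float  # classifier confidence in [0.0, 1.0]

def route_call(call: Call) -> str:
    """Chatbot handles simple, high-confidence requests; anything
    sensitive or uncertain escalates to a human."""
    if call.intent in SENSITIVE_INTENTS:
        return "human"    # sensitive cases always go to staff
    if call.intent in SIMPLE_INTENTS and call.confidence >= 0.85:
        return "chatbot"  # first-line automated guidance
    return "human"        # default to a person when unsure

print(route_call(Call("anon-1", "office_hours", 0.93)))  # -> chatbot
```

The key design choice is the default: when the classifier is unsure, the call goes to a person, so automation only absorbs the unambiguous volume.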
IT managers in healthcare should consider integrating AI with existing phone systems and electronic health records (EHRs). Done well, this can make front-office work both faster and more accurate.
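What such an integration might look like in miniature: the sketch below assumes a hypothetical phone platform that POSTs transcribed call events to a webhook, and a stubbed EHR lookup standing in for a real integration layer (for example, a FHIR API). None of the endpoint names or payload fields come from the study or any specific vendor.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def lookup_appointment(patient_ref: str) -> str:
    """Stub for an EHR query; a real system would call the practice's
    EHR integration layer here instead of returning a fixed string."""
    return "Your appointment on Friday is still scheduled."

@app.post("/call-event")  # hypothetical webhook the phone platform calls
def call_event():
    event = request.get_json()
    if event.get("intent") == "appointment_status":
        reply = lookup_appointment(event["patient_ref"])
    else:
        reply = "Let me transfer you to a staff member."
    return jsonify({"say": reply})  # phone platform speaks this back

if __name__ == "__main__":
    app.run()  # local demo only; a real deployment sits behind TLS and auth
```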
AI's role in crisis communication should extend beyond answering phones; medical practice leaders can also apply it to improve other operational tasks.
Using AI for phone automation in U.S. medical practices improves patient communication and conserves staff resources. IT managers must ensure these systems are transparent about how they work, protect patient privacy, comply with HIPAA, and stay current as new crisis scenarios emerge.
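One concrete privacy safeguard is to scrub obvious identifiers from call transcripts before they reach any external GenAI service. The sketch below is illustrative only: the regex patterns are simplistic placeholders, and real PHI de-identification requires a vetted tool and a compliance review, not a few regular expressions.

```python
import re

# Illustrative patterns only; genuine PHI de-identification needs a
# vetted library and a compliance review, not a handful of regexes.
PHI_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),             # SSN-like
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),  # phone-like
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),             # birth dates
]

def redact_phi(transcript: str) -> str:
    """Replace obvious identifiers before the text leaves the practice."""
    for pattern, token in PHI_PATTERNS:
        transcript = pattern.sub(token, transcript)
    return transcript

print(redact_phi("DOB 04/12/1987, call me at 555-867-5309"))
# -> "DOB [DATE], call me at [PHONE]"
```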
Based on the study by Xiao and Yu, medical practice teams can put these ideas into practice when adopting AI chatbots for crisis communication: deploy chatbots for first-line responses and simple guidance, escalate complex or sensitive cases to human staff, integrate the chatbot with phone systems and EHRs, and keep the system transparent, HIPAA-compliant, and up to date.
AI's role in healthcare front-office communication is changing quickly. The research by Yi Xiao and Shubin Yu shows how a chatbot's perceived competence shapes patient and stakeholder views during crises. Medical practice leaders and IT managers across the U.S. should adopt AI-driven crisis communication tools deliberately and proactively to improve response times, patient experience, and workflow.
Done well, these AI systems can share the communication load with human staff, reduce the blame organizations receive during failures, and increase satisfaction during successes. As healthcare emergencies continue to strain communication channels, pairing chatbots with human oversight may offer a balanced, effective way to manage front-office crisis communication in U.S. medical practices.
ChatGPT and GenAI-powered chatbots have demonstrated the ability to handle crisis-related questions in a timely and cost-efficient manner, potentially replacing humans in crisis communication under certain conditions.
When organizations fail to address crisis requests, chatbots providing instructing information result in higher stakeholder satisfaction and lower responsibility attribution, as they are perceived to be more competent in these scenarios.
Instructing information provides direct guidance on what actions to take, while adjusting information offers updates or modifies existing information based on current circumstances and request success or failure.
Stakeholders exhibit greater satisfaction and more positive attitudes toward high-competence chatbots, regardless of whether they provide instructing or adjusting information during public emergency crises.
Perceived competence mediates the relationship between chatbot communication style and stakeholder satisfaction and responsibility attribution, influencing how stakeholders evaluate chatbot performance.
Organizations should strategically combine chatbots and human agents based on context, leveraging chatbots for instructing information when requests fail and adjusting information when requests succeed to optimize communication effectiveness (see the sketch below).
Stakeholders tend to attribute lower responsibility to chatbots that provide appropriate instructing or adjusting information, particularly when the chatbot is perceived as highly competent, reducing blame on the organization.
When crisis-related requests succeed, chatbots providing adjusting information result in higher stakeholder satisfaction and lower responsibility attribution due to their perceived competence.
The study extends situational crisis communication theory to include nonhuman touchpoints like AI chatbots, enriching understanding of crisis communication dynamics with AI agents.
Improving crisis chatbot competence and tailoring communication style to situational context enhances stakeholder satisfaction and responsibility perception, guiding better chatbot design and deployment strategies.
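As referenced above, here is a minimal sketch of the study's decision rule: instructing information when a crisis-related request fails, adjusting information when it succeeds. Only the success/failure mapping comes from the findings; the message templates are hypothetical.

```python
from enum import Enum

class MessageType(Enum):
    INSTRUCTING = "instructing"  # direct guidance on actions to take
    ADJUSTING = "adjusting"      # updates reflecting current circumstances

def choose_message_type(request_succeeded: bool) -> MessageType:
    """Decision rule from the study: instructing information when a
    crisis request fails, adjusting information when it succeeds."""
    return MessageType.ADJUSTING if request_succeeded else MessageType.INSTRUCTING

# Hypothetical templates a practice might maintain for each message type.
TEMPLATES = {
    MessageType.INSTRUCTING: "We could not complete your request. Please do the following: {steps}",
    MessageType.ADJUSTING: "Your request was completed. Current status: {update}",
}

msg_type = choose_message_type(request_succeeded=False)
print(TEMPLATES[msg_type].format(
    steps="call 911 for emergencies; otherwise use the patient portal."))
```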