The Impact of Perceived Competence on Responsibility Attribution and Stakeholder Attitudes in AI-Mediated Crisis Communication Scenarios

Crisis communication in healthcare is the delivery of urgent information to patients, staff, and other stakeholders during critical events such as medical emergencies, system outages, or unexpected service disruptions. In most hospitals and clinics, human staff answer calls and manage this communication, but during a crisis those staff face high call volumes and must make rapid decisions.

AI chatbots, particularly those built on generative AI such as ChatGPT, have been introduced to answer phone calls automatically. They can deliver clear instructions or revise messages as a situation evolves, reducing delays and supporting healthcare staff in responding to patient questions.

Understanding Perceived Competence and Its Effects

A central finding of the study is the importance of perceived competence: how capable and reliable stakeholders believe the chatbot to be. This perception shapes how patients and others judge what happens during crisis communication.

  • Perceived competence directly influences stakeholder satisfaction: When stakeholders view a chatbot as capable and trustworthy, they are more satisfied with the communication, whether the chatbot delivers instructions or updates.
  • Perceived competence reshapes responsibility attribution: When communication or actions fail, people assign less blame to a chatbot they perceive as competent than they would to a human agent, and when things go well, a competent chatbot’s responses reduce the blame placed on the organization for mistakes or delays.

Differentiating Instructing vs. Adjusting Information

The study examines two types of information AI chatbots deliver during crises:

  • Instructing information: Direct guidance telling patients what to do immediately, for example, “Call the emergency team,” “Go to the hospital,” or “Stay on the line.”
  • Adjusting information: Updates that modify ongoing messages as new details emerge, such as rescheduled appointment times, notices of service changes, or explanations of delays.

The study found that when healthcare organizations failed to meet crisis-related requests, chatbots delivering instructing information produced higher stakeholder satisfaction and lower blame toward the organization. These chatbots appeared more competent because they offered clear next steps when other support fell short.

When requests succeeded, chatbots delivering adjusting information earned higher satisfaction; they were seen as more competent at handling real-time changes and keeping communication clear during the crisis.
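
One way to put this finding into practice is to let the outcome of a request drive the choice of message strategy. The sketch below is a minimal illustration of that rule, not the study’s implementation; the RequestOutcome type, the templates, and select_strategy are hypothetical names introduced here.

```python
from enum import Enum

class RequestOutcome(Enum):
    SUCCESS = "success"
    FAILURE = "failure"

# Hypothetical templates; real deployments would draw on
# clinically reviewed scripts.
INSTRUCTING_TEMPLATE = (
    "Please {action} now. If your situation worsens, call 911."
)
ADJUSTING_TEMPLATE = (
    "Update: {update}. We will notify you if anything changes."
)

def select_strategy(outcome: RequestOutcome) -> str:
    """Pick the strategy the study found most effective:
    failure -> instructing (clear next steps),
    success -> adjusting (real-time updates)."""
    if outcome is RequestOutcome.FAILURE:
        return INSTRUCTING_TEMPLATE
    return ADJUSTING_TEMPLATE

# Example: a rescheduling request that could not be fulfilled.
message = select_strategy(RequestOutcome.FAILURE).format(
    action="stay on the line for the next available scheduler"
)
print(message)
```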

AI Chatbots Compared to Human Agents in Healthcare Settings

The study involved 709 participants across several experiments. It found that AI chatbots, particularly when perceived as competent, can outperform or complement human agents in crisis communication.

  • Chatbots delivering instructing information outperformed human agents during failures. When hospitals or clinics could not resolve crisis-related requests right away (for example, appointment problems or ER wait times), chatbots that gave clear directions drew more favorable reactions.
  • Human agents remain stronger where emotional understanding and complex judgment are required, but pairing AI with human staff can improve overall performance and trust.

The research suggests assigning chatbots to first responses and simple guidance, especially when call volumes are high, while routing harder or more sensitive cases to human staff, as in the escalation sketch below.
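
One plausible shape for that handoff is a small escalation check run on each call. Everything here is illustrative only: the keyword lists, the max_bot_turns threshold, and the Call structure are assumptions, and a production system would use a validated intent classifier rather than keyword matching.

```python
from dataclasses import dataclass

# Hypothetical triggers for escalation to a human agent.
SENSITIVE_KEYWORDS = {"chest pain", "suicide", "overdose", "bleeding"}
EMOTION_KEYWORDS = {"scared", "angry", "crying", "desperate"}

@dataclass
class Call:
    transcript: str
    prior_turns: int  # how many bot turns have already occurred

def should_escalate(call: Call, max_bot_turns: int = 3) -> bool:
    """Return True when the call should be handed to a human agent."""
    text = call.transcript.lower()
    if any(k in text for k in SENSITIVE_KEYWORDS):
        return True  # clinically sensitive: always escalate
    if any(k in text for k in EMOTION_KEYWORDS):
        return True  # emotional distress: human empathy needed
    # A long back-and-forth suggests the bot is not resolving the issue.
    return call.prior_turns >= max_bot_turns

call = Call(transcript="I'm scared, my appointment was cancelled",
            prior_turns=1)
print("escalate" if should_escalate(call) else "bot continues")
```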

Implications for U.S. Medical Practice Administrators and Owners

Medical practices in the U.S. face a difficult task managing patient calls during emergencies. Front-office staff are quickly overwhelmed by call surges during flu season, public health alerts, or system outages. AI chatbots can help by triaging calls, responding immediately, and cutting patient wait times.

  • Improving Patient Satisfaction: Chatbots perceived as competent help keep patients calm during a crisis; clear instructions when things go wrong leave patients less frustrated and less likely to blame the practice.
  • Reducing Staff Burnout: AI answering services relieve front-office staff during peak periods, lowering error rates and freeing staff to focus on more complex patient needs.
  • Increasing Operational Continuity: AI can deliver important messages, such as changes in clinic hours or emergency protocols, quickly and consistently, reducing confusion.

Healthcare IT managers should consider integrating AI with existing phone systems and electronic health records to make front-office work faster and more accurate.

AI and Workflow Integration in Healthcare Communication

AI’s role in crisis communication should extend beyond answering phones. Medical practice leaders can also apply it to related workflow tasks:

  • Automated Call Routing: AI can classify caller questions and send urgent ones to human agents or clinicians, speeding up responses.
  • Appointment and Follow-up Coordination: When a crisis disrupts schedules, AI phone systems can update and communicate new appointment details without human intervention (see the sketch after this list).
  • Data-Driven Communication: Chatbots with access to patient records or local emergency data can tailor replies to individual health needs or local risks.
  • Feedback Collection and Analysis: Automated systems can gather patient feedback after calls, helping practices gauge how well crisis communication is working.
  • Cost Efficiency: By requiring fewer human operators, AI-supported systems lower labor costs and expand communication capacity during peak periods.
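
As a concrete example of the appointment coordination task above, the following sketch regenerates adjusting messages when a crisis forces a uniform schedule delay. The Appointment structure, the shift_appointments helper, and the message wording are all hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Appointment:
    patient_phone: str
    scheduled_at: datetime

def shift_appointments(appointments, delay_hours: int):
    """Apply a uniform crisis delay and yield one adjusting message
    per patient, ready to hand to the outbound-call system."""
    for appt in appointments:
        new_time = appt.scheduled_at + timedelta(hours=delay_hours)
        appt.scheduled_at = new_time
        yield appt.patient_phone, (
            f"Due to an unexpected disruption, your appointment has "
            f"moved to {new_time:%B %d at %I:%M %p}. "
            "Reply or call if this time does not work."
        )

appointments = [Appointment("+1-555-0100", datetime(2024, 3, 4, 9, 0))]
for phone, message in shift_appointments(appointments, delay_hours=24):
    print(phone, "->", message)
```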

Phone automation with AI can improve patient communication and conserve resources in U.S. medical practices. IT managers must ensure these systems are transparent about how they operate, protect patient privacy, comply with HIPAA, and stay current as crisis conditions change.

Strategic Recommendations from the Research

Based on the study by Xiao and Yu, medical practice teams deploying AI chatbots for crisis communication can consider the following:

  • Deploy chatbots to deliver instructing information during failure situations to preserve patient trust and reduce frustration.
  • Deploy chatbots to deliver adjusting information when requests succeed to maintain trust and transparency.
  • Continuously monitor chatbot performance, focusing on perceived competence, since it drives both patient satisfaction and blame attribution (a simple monitoring sketch follows this list).
  • Design hybrid workflows that combine AI and human support, routing complex or emotional cases to people.
  • Train staff to interpret chatbot messages and step in smoothly when needed.
  • Track patient feedback on AI interactions and refine chatbot replies over time.
  • Ensure all chatbot use complies with U.S. healthcare regulations and data protection laws.
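
To make the monitoring recommendation concrete, a practice might aggregate post-call ratings by request outcome and message strategy and check that the pattern the study reports (instructing performs better under failure, adjusting under success) holds in its own data. The record format and sample values below are a hypothetical illustration, not data from the study.

```python
from collections import defaultdict

# Hypothetical post-call survey records: (outcome, strategy, rating 1-5).
records = [
    ("failure", "instructing", 4), ("failure", "adjusting", 2),
    ("success", "adjusting", 5), ("success", "instructing", 3),
    ("failure", "instructing", 5), ("success", "adjusting", 4),
]

totals = defaultdict(list)
for outcome, strategy, rating in records:
    totals[(outcome, strategy)].append(rating)

# Mean satisfaction per (outcome, strategy) cell; a cell that
# contradicts the expected pattern warrants follow-up review.
for (outcome, strategy), ratings in sorted(totals.items()):
    mean = sum(ratings) / len(ratings)
    print(f"{outcome:7s} / {strategy:11s}: {mean:.2f} (n={len(ratings)})")
```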

Final Thoughts

AI’s role in healthcare front-office communication is evolving quickly. The research by Yi Xiao and Shubin Yu shows how a chatbot’s perceived competence shapes patient and stakeholder judgments during crises. Medical practice leaders and IT managers across the U.S. should adopt AI-driven crisis communication tools deliberately to improve response times, patient experience, and workflow.

Implemented well, these systems can share the communication load with human staff, reduce blame during failures, and raise satisfaction during successes. As healthcare emergencies continue to strain communication channels, pairing chatbots with human oversight offers a balanced and effective way to manage front-office crisis communication in U.S. medical practices.

Frequently Asked Questions

Can ChatGPT replace humans in crisis communication?

ChatGPT and GenAI-powered chatbots have demonstrated the ability to handle crisis-related questions in a timely and cost-efficient manner, potentially replacing humans in crisis communication under certain conditions.

How do chatbots perform when organizations fail to handle crisis-related requests?

When organizations fail to address crisis requests, chatbots providing instructing information result in higher stakeholder satisfaction and lower responsibility attribution, as they are perceived to be more competent in these scenarios.

What is the difference between instructing and adjusting information in crisis communication?

Instructing information provides direct guidance on what actions to take, while adjusting information offers updates or modifies existing information based on current circumstances and request success or failure.

How does stakeholder satisfaction vary with chatbot competence?

Stakeholders exhibit greater satisfaction and more positive attitudes toward high-competence chatbots, regardless of whether they provide instructing or adjusting information during public emergency crises.

What role does perceived competence play in crisis chatbot communication?

Perceived competence mediates the relationship between chatbot communication style and stakeholder satisfaction and responsibility attribution, influencing how stakeholders evaluate chatbot performance.

How should organizations integrate chatbots and human agents in crisis communication?

Organizations should strategically combine chatbots and human agents based on context, leveraging chatbots for instructing information when requests fail and adjusting information when requests succeed to optimize communication effectiveness.

What are the implications of AI-mediated crisis communication for stakeholders’ responsibility attribution?

Stakeholders tend to attribute lower responsibility to chatbots that provide appropriate instructing or adjusting information, particularly when the chatbot is perceived as highly competent, reducing blame on the organization.

Which communication strategy (instructing vs. adjusting) is better when requests succeed?

When crisis-related requests succeed, chatbots providing adjusting information result in higher stakeholder satisfaction and lower responsibility attribution due to their perceived competence.

What insights does this research provide to crisis communication theory?

The study extends situational crisis communication theory to include nonhuman touchpoints like AI chatbots, enriching understanding of crisis communication dynamics with AI agents.

What are the practical recommendations for crisis chatbot development?

Improving crisis chatbot competence and tailoring communication style to situational context enhances stakeholder satisfaction and responsibility perception, guiding better chatbot design and deployment strategies.