Trust is central to using AI in healthcare, where decisions directly affect patients and therefore demand careful thought. Rosemary Tufon from Kennesaw State University studied how people in the United States come to trust AI in healthcare. She found that trust depends more on how institutions act than on simply having humans check AI results.
Tufon’s study identified three institutional factors that make people trust AI systems: situational normality, structural assurance, and cognitive reputation.
These factors help patients and staff trust AI recommendations. Surprisingly, close monitoring of AI output by healthcare workers made little difference to trust. This suggests that strong institutional systems matter more than constant human review of AI results.
Healthcare leaders should communicate clear policies, protect data rigorously, keep reporting open, and establish ways to hold people accountable. These steps help patients and staff feel safe and confident with AI use.
Beyond trust, ethics and bias in AI are major concerns. AI learns from data, and how it is built shapes its results. Matthew G. Hanna and colleagues described three main ways bias can enter medical AI.
If these biases are not addressed, AI can give wrong advice or poor diagnoses, harming people who are already at risk. AI therefore needs continuous checking, from the time it is built until well after it is put into use.
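As a simple illustration of what such a check can look like in practice, the sketch below compares a model’s accuracy across patient subgroups and flags any group that lags well behind the overall rate. The column names, groups, and threshold are hypothetical placeholders, not details taken from the research discussed here.

```python
# Minimal sketch of a subgroup bias audit: compare model accuracy across
# patient groups and flag any group that falls well below the overall rate.
# Column names ("group", "label", "prediction") and the 5-point gap threshold
# are hypothetical placeholders for illustration only.
import pandas as pd

def audit_subgroup_accuracy(df: pd.DataFrame, gap_threshold: float = 0.05) -> pd.DataFrame:
    overall = (df["label"] == df["prediction"]).mean()
    rows = []
    for group, part in df.groupby("group"):
        acc = (part["label"] == part["prediction"]).mean()
        rows.append({
            "group": group,
            "n": len(part),
            "accuracy": round(acc, 3),
            "flagged": overall - acc > gap_threshold,  # group lags the overall rate
        })
    return pd.DataFrame(rows)

# Example: results for two demographic groups, one clearly underperforming.
data = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "label":      [1,   0,   1,   1,   0,   1],
    "prediction": [1,   0,   1,   0,   0,   0],
})
print(audit_subgroup_accuracy(data))
```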
Healthcare IT managers and leaders must demand thorough testing of AI tools before adopting them. Vendors should show evidence that they check for bias and follow ethical guidelines. AI decisions should be explainable enough that doctors and patients understand why the system recommends what it does; this preserves trust and accountability. AI systems also need ongoing checks to catch problems that develop over time, especially as clinical practices change.
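One modest way to run such ongoing checks is to compare recent performance against the accuracy measured when the tool was validated and raise a flag when the gap grows too large. The sketch below illustrates that idea; the baseline value, window size, and tolerance are invented for the example.

```python
# Minimal sketch of ongoing performance monitoring: compare a recent window of
# outcomes against the accuracy measured at validation time and flag drift.
# The baseline, window size, and tolerance are placeholders chosen for illustration.
from collections import deque

class DriftMonitor:
    def __init__(self, baseline_accuracy: float, window: int = 200, tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.recent = deque(maxlen=window)  # 1 = correct prediction, 0 = incorrect

    def record(self, correct: bool) -> None:
        self.recent.append(1 if correct else 0)

    def drifted(self) -> bool:
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough recent data to judge yet
        current = sum(self.recent) / len(self.recent)
        return self.baseline - current > self.tolerance

monitor = DriftMonitor(baseline_accuracy=0.92, window=100)
for outcome in [True] * 80 + [False] * 20:   # simulated stream of predictions
    monitor.record(outcome)
print("Review needed:", monitor.drifted())   # True: recent accuracy fell to 0.80
```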
AI governance refers to the rules and processes that keep AI working safely and in line with social values. Research shows many business leaders view explainability, ethics, bias, and trust as major challenges when adopting AI.
Good governance in healthcare means managing risks, being open about AI use, holding people accountable, and following privacy laws like HIPAA.
Hospital leaders, owners, compliance officers, and IT directors must set clear rules so AI is used in line with medical ethics and laws. Oversight groups that include doctors, ethicists, lawyers, and data experts should review AI to ensure it is safe and fair.
The U.S. has fewer AI regulations than the European Union, but hospitals should prepare for tighter controls. They can do this by keeping records of how AI is used, checking AI regularly, and folding AI risk into standard healthcare safety plans.
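A lightweight way to begin that record-keeping is a structured inventory of every AI tool in use. The sketch below shows one possible shape for such a record; the fields are illustrative suggestions, not requirements drawn from any regulation.

```python
# Minimal sketch of an AI inventory record for internal documentation.
# The fields are hypothetical examples of what an organization might track.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIToolRecord:
    name: str
    vendor: str
    intended_use: str
    data_sources: list[str]
    risk_level: str              # e.g., "low", "moderate", "high"
    last_bias_review: date
    human_override: bool         # can staff take control if something goes wrong?
    notes: list[str] = field(default_factory=list)

inventory = [
    AIToolRecord(
        name="Front-desk phone assistant",
        vendor="Example Vendor",
        intended_use="Answer calls, schedule appointments, basic intake questions",
        data_sources=["appointment system", "published clinic hours"],
        risk_level="moderate",
        last_bias_review=date(2024, 1, 15),
        human_override=True,
    ),
]
for record in inventory:
    print(record.name, "-", record.risk_level, "- last reviewed", record.last_bias_review)
```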
Healthcare leaders should ask AI suppliers for proof that their tools are safe. This includes independent tests, safety reviews, and ways humans can take control if something goes wrong.
AI can also improve how clinics run daily operations, an area that is sometimes overlooked. AI-driven automation can schedule appointments, handle patient calls, manage billing, and work with electronic health records (EHRs). These tools make better use of resources, smooth the patient experience, and reduce staff workload.
Predictive tools can estimate how many patients will arrive, how many staff members are needed, and what equipment should be ready. This helps avoid overbooking and keeps workloads balanced.
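As a toy example of this kind of prediction, the sketch below estimates expected patient volume from a moving average of recent daily visits and converts it into a staffing suggestion. The visit counts and the visits-per-staff ratio are made up for illustration.

```python
# Toy sketch of demand forecasting: estimate expected patient volume from a
# moving average of recent daily visits, then turn it into a staffing
# suggestion. The history and the 12-visits-per-staff ratio are invented.
import math

def forecast_visits(daily_visits: list[int], window: int = 7) -> float:
    recent = daily_visits[-window:]          # most recent days only
    return sum(recent) / len(recent)

def suggested_staff(expected_visits: float, visits_per_staff: int = 12) -> int:
    return math.ceil(expected_visits / visits_per_staff)

history = [52, 61, 58, 70, 66, 49, 55, 63, 68, 72, 59, 64, 71, 66]
expected = forecast_visits(history)
print(f"Expected visits: {expected:.0f}, suggested staff: {suggested_staff(expected)}")
```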
Simbo AI is one example of a company building AI phone systems for healthcare offices. Its system answers patient calls, schedules appointments, gives directions, and performs initial screening using natural language processing and machine learning. This helps ensure calls are not missed, wait times are shorter, and staff can focus on higher-value work.
To trust such automation, clinics need the same strong safeguards described earlier. Patients and staff need to know that answers are accurate, privacy is protected, and a real person can step in when needed.
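A common safeguard for keeping a person available is a simple escalation rule: hand the call to staff when the topic is sensitive or the system is unsure. The sketch below shows that idea in generic terms; the intent names and threshold are hypothetical and do not describe any particular vendor’s product.

```python
# Generic sketch of a human-escalation rule for an automated phone assistant:
# route the caller to staff when the detected intent is sensitive or when the
# system's confidence is low. Intent names and threshold are hypothetical.
SENSITIVE_INTENTS = {"medical_emergency", "medication_question", "billing_dispute"}
CONFIDENCE_THRESHOLD = 0.80

def route_call(intent: str, confidence: float) -> str:
    if intent in SENSITIVE_INTENTS:
        return "transfer_to_staff"           # a person should always handle these
    if confidence < CONFIDENCE_THRESHOLD:
        return "transfer_to_staff"           # the system is not sure what was asked
    return "handle_automatically"

print(route_call("schedule_appointment", 0.93))   # handle_automatically
print(route_call("medication_question", 0.97))    # transfer_to_staff
print(route_call("schedule_appointment", 0.41))   # transfer_to_staff
```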
AI automation also has to integrate smoothly with other healthcare systems such as EHRs. IT and clinical teams should work together so automation does not introduce errors, and regular training and feedback help staff trust these systems.
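Many EHRs expose a FHIR REST interface, and one way such an integration can look is sketched below: the automation posts a booked appointment to the EHR as a FHIR Appointment resource. The server URL and identifiers are placeholders, and a real integration would also need the vendor’s required authentication and error handling.

```python
# Hedged sketch of handing an AI-scheduled appointment to an EHR that exposes
# a FHIR REST API. The base URL and patient/practitioner IDs are placeholders;
# authentication and full error handling are omitted for brevity.
import requests

FHIR_BASE = "https://ehr.example.org/fhir"   # placeholder endpoint

appointment = {
    "resourceType": "Appointment",
    "status": "booked",
    "description": "Follow-up visit scheduled by phone assistant",
    "start": "2024-06-03T14:00:00-05:00",
    "end": "2024-06-03T14:30:00-05:00",
    "participant": [
        {"actor": {"reference": "Patient/12345"}, "status": "accepted"},
        {"actor": {"reference": "Practitioner/67890"}, "status": "accepted"},
    ],
}

response = requests.post(
    f"{FHIR_BASE}/Appointment",
    json=appointment,
    headers={"Content-Type": "application/fhir+json"},
    timeout=10,
)
response.raise_for_status()
print("Created appointment:", response.json().get("id"))
```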
Medical practice leaders and IT managers in the U.S. should take careful steps when adding AI tools. Key points to remember include communicating clear institutional safeguards, requiring vendors to show evidence of safety and bias testing, keeping AI decisions explainable, preserving ways for humans to take control, documenting and regularly checking AI use, integrating carefully with EHRs and other systems, and training staff so they can work with these tools confidently.
By using clear and strong institutional safeguards, healthcare organizations in the United States can build lasting trust in AI. Studies show that good systems are more helpful for trust than just continuous human checking. When combined with clear rules, handling of bias, and thoughtful use of automation, AI can help make healthcare safer, more efficient, and more focused on patients.
The research focuses on understanding the trust-building process in human-AI interactions within healthcare, particularly examining institutional trust factors and human oversight to explain users’ willingness to accept AI-driven healthcare recommendations.
Modeling trust is difficult due to disparities in how trust is conceptualized and measured, and because trust drivers extend beyond system performance to include nuanced factors like institutional accountability and human oversight.
Situational normality, structural assurance, and cognitive reputation are key institutional factors that enhance trusting beliefs in healthcare AI systems.
Contrary to expectations, healthcare professional oversight, as a human-in-the-loop factor, showed no significant impact on users’ trusting beliefs in AI recommendations.
Disease severity does not moderate the relationship between trusting beliefs and acceptance intention but has a direct influence on the willingness to accept AI healthcare recommendations.
The study employed a web survey of U.S. adults aged 18+, analyzing data using Partial Least Squares Structural Equation Modeling (PLS-SEM) to validate the trust model.
Strong institutional safeguards and assurances positively shape patient trust in AI technologies, highlighting the critical role of institutional trust in high-risk settings like healthcare.
The research challenges the HITL model by showing that perceived human oversight may not be essential for building trust or acceptance of AI healthcare recommendations.
Healthcare organizations should focus on creating and communicating reliable institutional safeguards and assurance mechanisms to foster patient trust in AI tools rather than relying solely on human oversight.
Trusting beliefs consistently impact individual intention to accept AI recommendations regardless of disease severity, underscoring trust as a universal driver of acceptance.