Artificial intelligence (AI) is now part of many industries, including healthcare. In the United States, hospitals and medical offices increasingly use AI to improve how they operate and how they care for patients. One area where AI helps is front-office work, such as phone systems that answer patient calls quickly. Companies such as Simbo AI have built AI-powered phone systems that streamline healthcare communication, reduce paperwork, and improve patient satisfaction.
Even with strong interest in AI tools like Simbo AI's, healthcare leaders and IT managers often find it hard to adopt AI fully. One reason lies in how researchers study AI acceptance. Most studies rely on self-reported data: they ask people about their thoughts, intentions, or feelings about using AI rather than observing how people actually use it. As a result, we know too little about how AI is used in practice, or whether people keep using it over time in healthcare.
This article looks at key points from a review by Sage Kelly, Sherrie-Anne Kaye, and Oscar Oviedo-Trespalacios in the journal Telematics and Informatics. They studied 60 articles from many industries to find out what affects people’s acceptance of AI. The article focuses on what these findings mean for healthcare leaders and IT staff in the United States. It also talks about why it is important to use research methods that watch real AI use in healthcare. Finally, it explains how AI tools like front-office phone systems fit into healthcare work.
The studies reviewed began with 7,912 articles, but only 60 were studied closely. These studies looked at many industries, but the results still matter for healthcare leaders who work to use AI for better office work and patient care.
The main framework used to study AI acceptance was the extended Technology Acceptance Model (TAM). This model predicts whether people intend to use AI based on factors such as:

- Perceived usefulness
- Performance expectancy
- Attitudes toward the technology
- Trust
- Effort expectancy (how easy the technology seems to use)
In healthcare, administrators and IT managers think about how AI can cut down manual work, lower wait times on calls, and make scheduling easier. These match with usefulness and performance expectations, which help people accept AI.
But many people still have doubts, often for cultural and emotional reasons. In some settings, people want human contact that AI cannot fully replace. This matters greatly in healthcare, where trust and relationships with patients are central.
One major weakness in most AI acceptance research is its heavy reliance on self-reported data, that is, asking people in surveys or interviews about their feelings and intentions toward AI. While this is informative, it has several problems in healthcare AI research:

- Stated intentions often differ from actual behavior once a system is in daily use.
- Responses can be biased by concerns such as job security or by people's pre-existing knowledge of AI.
- Surveys capture a single moment in time and say little about whether people keep using a tool over the long term.
Because of these issues, self-reported data is less useful for healthcare leaders and IT managers who need strong proof about AI’s impact and acceptance to make decisions.
Naturalistic research means observing and measuring how AI is used in real healthcare settings. Rather than depending only on surveys, it studies actual use. This kind of research gives a better picture of how AI tools, like Simbo AI's phone systems, perform every day. It examines things like:

- Key performance indicators (KPIs) such as call answer times and resolution rates
- System and user logs that show actual patterns of use
- How staff and patients interact with the tool over time, and whether they keep using it
The researchers Kelly, Kaye, and Oviedo-Trespalacios suggest using more naturalistic studies to check existing theories like TAM and better understand AI use in real situations. This is very important in healthcare because of patient safety, data privacy rules, and complex workflows.
For healthcare leaders in the U.S., using this approach means relying on key performance indicators (KPIs), user logs, and watching AI use closely rather than only using surveys from staff or patients.
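As a concrete illustration of the log-based approach above, the sketch below computes a few call-handling KPIs from usage logs. The record fields (`answered_in_s`, `resolved_by_ai`, `transferred_to_staff`) are hypothetical, not any vendor's actual schema; they stand in for whatever a real phone system exports.

```python
# Hypothetical call-log records; field names are illustrative only,
# not Simbo AI's actual log schema.
call_logs = [
    {"answered_in_s": 2.1, "resolved_by_ai": True,  "transferred_to_staff": False},
    {"answered_in_s": 1.8, "resolved_by_ai": False, "transferred_to_staff": True},
    {"answered_in_s": 2.5, "resolved_by_ai": True,  "transferred_to_staff": False},
]

def call_kpis(logs):
    """Compute simple naturalistic-usage KPIs from call logs."""
    total = len(logs)
    avg_answer = sum(c["answered_in_s"] for c in logs) / total
    ai_resolution_rate = sum(c["resolved_by_ai"] for c in logs) / total
    transfer_rate = sum(c["transferred_to_staff"] for c in logs) / total
    return {
        "avg_answer_seconds": round(avg_answer, 2),
        "ai_resolution_rate": round(ai_resolution_rate, 2),
        "staff_transfer_rate": round(transfer_rate, 2),
    }

print(call_kpis(call_logs))
# → {'avg_answer_seconds': 2.13, 'ai_resolution_rate': 0.67, 'staff_transfer_rate': 0.33}
```

Metrics like these, tracked before and after deployment, give the observational evidence that surveys alone cannot.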
Workflow automation is one key way AI improves healthcare management. Many hospitals and clinics in the U.S. get a large number of patient calls about appointments, referrals, bills, and questions. Slow phone answers or mistakes cause patient frustration and slow down office work.
Simbo AI focuses on AI-powered phone systems that answer calls automatically and route them effectively using language processing and voice recognition. This type of AI helps by:

- Answering patient calls quickly, reducing hold times and frustration
- Routing calls about appointments, referrals, and billing to the right place
- Cutting down manual front-office work so staff can focus on patients
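To make the routing idea above concrete, here is a minimal sketch of intent-based call routing. Simple keyword rules stand in for the language processing a production system would use; the queue names and keyword lists are illustrative assumptions, not Simbo AI's actual design.

```python
# Keyword rules standing in for real natural-language intent detection.
# Queue names and keywords are hypothetical examples.
ROUTES = {
    "appointment": ["appointment", "schedule", "reschedule", "cancel"],
    "billing": ["bill", "payment", "insurance", "charge"],
    "referral": ["referral", "specialist"],
}

def route_call(transcript: str) -> str:
    """Return a queue name for a transcribed caller request."""
    text = transcript.lower()
    for queue, keywords in ROUTES.items():
        if any(k in text for k in keywords):
            return queue
    return "front_desk"  # fall back to a human for anything unrecognized

print(route_call("I need to reschedule my appointment"))  # appointment
print(route_call("Question about my last bill"))          # billing
print(route_call("Can I speak to someone?"))              # front_desk
```

Note the fallback: unrecognized requests go to human staff, which reflects the mixed AI-plus-human approach the article recommends.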
Still, for these AI systems to be accepted, leaders must see them as useful, reliable, and easy to use. Trust is also important, so talking clearly about how patient data is handled helps build confidence. Cultural differences matter too—some groups or areas in the U.S. may want more human contact. This means mixing AI with human help may be best.
From the IT side, connecting Simbo AI technology means making sure it works with current electronic health record (EHR) systems and follows HIPAA privacy rules. This adds challenges, but it can be done with good planning.
In practice, hospital leaders and IT managers should consider these steps when deciding on AI use:

- Define clear KPIs, such as call wait times and resolution rates, before deployment and track them afterward.
- Monitor actual usage through system logs rather than relying only on staff and patient surveys.
- Verify integration with existing electronic health record (EHR) systems and compliance with HIPAA privacy rules.
- Communicate clearly how patient data is handled to build trust.
- Keep human staff available for callers who prefer human contact, mixing AI with human help.
Using these steps will help healthcare groups handle AI adoption better, for tools like Simbo AI’s phone system.
This article is based mainly on a review by Sage Kelly, Sherrie-Anne Kaye, and Oscar Oviedo-Trespalacios. It was published in the journal Telematics and Informatics by Elsevier. The review looked at 60 studies out of 7,912 articles from five databases. Their work shows what is missing and difficult in how we understand AI acceptance. It stresses using real-world research methods rather than just self-reports.
In short, healthcare leaders, practice owners, and IT managers in the United States should pay attention to how AI fits into daily work, the attitudes of staff and patients, and cultural differences. Using real-world research methods, along with focusing on useful, trustworthy, and easy-to-use AI systems like Simbo AI’s phone tools, will support meaningful AI use that helps deliver better healthcare.
What did the review focus on?
The review focused on user acceptance of artificial intelligence (AI) technology across multiple industries, investigating behavioral intention or willingness to use, buy, or try AI-based goods or services.

How many articles were included?
A total of 60 articles were included in the review after screening 7,912 articles from multiple databases.

Which theory was used most often to evaluate acceptance?
The extended Technology Acceptance Model (TAM) was the most frequently employed theory for evaluating user acceptance of AI technologies.

Which factors predicted acceptance?
Perceived usefulness, performance expectancy, attitudes, trust, and effort expectancy were significant positive predictors of behavioral intention, willingness, and use of AI.

Can cultural factors limit AI acceptance?
Yes. In some cultural situations, the intrinsic need for human contact could not be replaced or replicated by AI, regardless of its perceived usefulness or ease of use.

What gaps did the review identify?
There is a lack of systematic synthesis and definition of AI in studies, and most rely on self-reported data, limiting understanding of actual AI technology adoption.

What does the review recommend for future research?
Future studies should use naturalistic methods to validate theoretical models predicting AI adoption and examine biases, such as job security concerns and pre-existing knowledge, that influence user intentions.

How is acceptance defined?
Acceptance is defined as the behavioral intention or willingness to use, buy, or try an AI good or service.

Did the studies define AI for participants?
Only 22 of the 60 studies defined AI for their participants; 38 did not provide a definition.

Which industries did the acceptance factors apply to?
The factors applied across multiple industries; the article does not name particular sectors but implies broad applicability in personal, industrial, and social contexts.