Acceptance of AI technology refers to how willing users are to try, use, or adopt AI products and services. The concept matters increasingly as AI spreads across many fields, including healthcare. A review published in Telematics and Informatics examined 60 studies on how users accept AI across different industries, and its findings offer lessons for healthcare administrators considering AI.
Of the 7,912 articles initially screened, only 60 were relevant to AI acceptance research. This small number shows that the field is still narrow and developing. One key finding was that 31 of the 60 studies did not clearly explain what AI means, and 38 studies did not define AI for their participants. Without a shared definition, it is harder for users to trust or accept AI.
The Technology Acceptance Model (TAM) was the framework used most often in these studies to measure acceptance. In TAM, the key factors are perceived usefulness, performance expectancy, attitude, trust, and effort expectancy (how much effort users expect the technology to require). These factors shape whether users intend to use AI in their work or daily life.
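To make the TAM idea concrete, here is a minimal sketch of how survey constructs might be combined into a single behavioral-intention score. The construct names come from the review; the weights and respondent data are hypothetical, chosen only for illustration.

```python
# Illustrative TAM-style scoring: behavioral intention modeled as a
# weighted combination of survey constructs. The weights below are
# hypothetical; a real study would estimate them with regression.

TAM_WEIGHTS = {
    "perceived_usefulness": 0.35,   # hypothetical weight
    "performance_expectancy": 0.25,
    "attitude": 0.20,
    "trust": 0.15,
    "effort_expectancy": 0.05,      # effort users expect to invest
}

def behavioral_intention(scores: dict) -> float:
    """Combine 1-7 Likert construct scores into one intention score."""
    return sum(TAM_WEIGHTS[k] * scores[k] for k in TAM_WEIGHTS)

# Example respondent (1-7 Likert scale per construct)
respondent = {
    "perceived_usefulness": 6.0,
    "performance_expectancy": 5.5,
    "attitude": 6.0,
    "trust": 4.5,
    "effort_expectancy": 5.0,
}
print(round(behavioral_intention(respondent), 2))
```

The point is not the arithmetic but the structure: TAM treats intention as driven by a handful of measured beliefs, which is exactly what most of the 60 reviewed studies asked about in surveys.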
In the U.S., healthcare organizations using AI solutions such as Simbo AI’s phone automation tools can streamline communication and reduce administrative work. Understanding how staff and patients feel about AI is essential for such tools to work well.
One major problem in current research is over-reliance on self-reported data. Self-reports ask people, usually through surveys, how they feel or think about AI. This method has known limitations: people may give answers they think are expected of them, stated intentions often differ from actual behavior, and biases such as job security concerns or pre-existing knowledge can color responses.
Because of these problems, researchers suggest naturalistic methods, which means observing users in real settings without interfering. This reveals genuine interactions with AI and clarifies what encourages people to use it and what blocks them.
For example, in U.S. healthcare settings considering AI for patient calls, naturalistic studies can show whether staff actually find the AI easy to use and whether patients respond well, without anyone feeling pressured to give particular answers.
Naturalistic research collects data by observing users directly, reviewing usage logs, using automated tracking, and other methods that do not intrude on users. This yields real evidence about how people accept AI. For healthcare managers, these methods support decisions based on actual data rather than opinion.
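As a sketch of what unobtrusive usage logging might look like for an AI phone system, the snippet below records call events and computes how often the AI completed a call without handing it to a person. The event fields and the metric name are assumptions for illustration; a real deployment would log from the telephony platform itself.

```python
# Minimal sketch of naturalistic usage logging for an AI phone system.
# Fields and sample data are hypothetical, for illustration only.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CallEvent:
    timestamp: datetime
    caller_type: str        # "patient" or "staff"
    handled_by: str         # "ai" or "human"
    escalated: bool         # True if the AI handed the call to a person

def ai_containment_rate(events):
    """Share of AI-handled calls completed without human escalation."""
    ai_calls = [e for e in events if e.handled_by == "ai"]
    if not ai_calls:
        return 0.0
    return sum(not e.escalated for e in ai_calls) / len(ai_calls)

log = [
    CallEvent(datetime(2024, 5, 1, 9, 0), "patient", "ai", False),
    CallEvent(datetime(2024, 5, 1, 9, 5), "patient", "ai", True),
    CallEvent(datetime(2024, 5, 1, 9, 10), "patient", "human", False),
    CallEvent(datetime(2024, 5, 1, 9, 15), "patient", "ai", False),
]
print(ai_containment_rate(log))  # 2 of 3 AI calls completed unaided
```

Metrics like this come from observed behavior rather than survey answers, which is precisely the shift from self-report to naturalistic evidence the review recommends.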
Naturalistic and objective data offer several benefits: they capture actual behavior rather than stated intentions, they accumulate continuously without burdening users, and they can confirm or challenge what surveys report.
In U.S. healthcare, these methods are especially useful because clinics and hospitals are busy places. Observing how medical secretaries, IT staff, and patients use AI phone systems can reveal bottlenecks or acceptance problems that surveys miss.
The review also found culture affects how people accept AI. In places like the U.S., people value personal connection. AI cannot fully replace human contact, especially in healthcare where empathy and trust matter a lot.
Healthcare managers should know that some tasks, like showing care for patients or mental health help, may always need a real person.
For AI calling services like Simbo AI’s, a mixed approach works best: AI can answer simple questions, schedule appointments, or provide information, while complicated or sensitive issues go to human staff. Naturalistic research can help find the right mix of AI and people for a given practice and its patients.
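The mixed approach above can be sketched as a simple routing rule. Note that the intent labels and keyword lists here are invented for illustration; they are not Simbo AI's actual API or triage logic.

```python
# Hypothetical triage rules for a mixed AI/human front office.
# Routine intents go to the AI; sensitive language routes to a person.
SENSITIVE_KEYWORDS = {"pain", "emergency", "dispute", "complaint"}
AI_INTENTS = {"appointment", "hours", "directions", "refill status"}

def route_call(intent, transcript):
    """Return 'ai' for routine requests, 'human' for sensitive ones."""
    text = transcript.lower()
    if any(kw in text for kw in SENSITIVE_KEYWORDS):
        return "human"
    return "ai" if intent in AI_INTENTS else "human"

print(route_call("appointment", "I'd like to book a checkup"))   # ai
print(route_call("appointment", "I'm in pain and need help"))    # human
```

Defaulting unknown intents to a human reflects the article's point that empathy and trust matter in healthcare: when in doubt, the call should reach a person.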
AI affects front-office work in healthcare, where staff manage many calls, appointments, reminders, and questions about insurance. Tools like Simbo AI use AI to automate phone answering.
Effects of AI on workflow include fewer calls for staff to answer manually, faster handling of routine requests such as appointments and reminders, and more staff time freed for in-person patient needs.
Still, for AI systems to work well, staff must trust them and want to use them. Research on real-world use, combined with training and a supportive culture, helps build that trust. Naturalistic studies show how AI fits into workflows and reveal problems or needed improvements.
For instance, observing how front-desk workers use AI phone tools during busy and slow periods can show where AI helps and where humans are still needed, giving managers concrete grounds for improving processes.
Although ethics and regulation were not the review's main focus, they matter greatly for healthcare owners and managers. Protecting patient privacy, avoiding discrimination, and preserving human care are essential.
In the U.S., laws like HIPAA and FDA rules for certain AI devices set standards for data security and safety. AI tools for front-office work must follow these rules to build trust.
Research that goes beyond self-reports also helps verify that patient privacy is actually protected during automated calls and that the AI is free from bias that could harm patient care.
Based on these points, U.S. healthcare managers considering AI front-office tools should define clearly what the AI does and tell staff and patients, gather objective usage data rather than relying on surveys alone, keep humans available for sensitive or complex interactions, and ensure compliance with privacy rules such as HIPAA.
Artificial intelligence can help improve healthcare administration, for example through Simbo AI’s phone automation. But whether users accept AI depends on many factors, including usefulness, trust, and culture.
Moving beyond self-reported surveys to naturalistic and objective studies is necessary to understand how AI is actually used. For U.S. healthcare managers, these research methods support better decisions, smoother AI rollouts, and sustainable long-term use of AI in healthcare.
The review focused on user acceptance of artificial intelligence (AI) technology across multiple industries, investigating behavioral intention or willingness to use, buy, or try AI-based goods or services.
A total of 60 articles were included in the review after screening 7,912 articles from multiple databases.
The extended Technology Acceptance Model (TAM) was the most frequently employed theory for evaluating user acceptance of AI technologies.
Perceived usefulness, performance expectancy, attitudes, trust, and effort expectancy were significant positive predictors of behavioral intention, willingness, and use of AI.
Yes, in some cultural situations, the intrinsic need for human contact could not be replaced or replicated by AI, regardless of its perceived usefulness or ease of use.
There is a lack of systematic synthesis and definition of AI in studies, and most rely on self-reported data, limiting understanding of actual AI technology adoption.
Future studies should use naturalistic methods to validate theoretical models predicting AI adoption and examine biases such as job security concerns and pre-existing knowledge influencing user intentions.
Acceptance is defined as the behavioral intention or willingness to use, buy, or try an AI good or service.
Only 22 out of the 60 studies defined AI for their participants; 38 studies did not provide a definition.
The acceptance factors applied across multiple industries; the article does not name particular sectors but implies broad applicability in personal, industrial, and social contexts.