Bridging the Gap in Artificial Intelligence Adoption Research: The Need for Naturalistic Methods Beyond Self-Reported Data

Artificial intelligence (AI) is now part of many industries, including healthcare. In the United States, hospitals and medical offices increasingly use AI to improve how they work and care for patients. One area where AI helps is front-office tasks, like phone systems that answer patient calls quickly. Companies such as Simbo AI have created AI-powered phone systems that help make healthcare communication easier, reduce paperwork, and make patients happier.

Even with strong interest in AI tools such as Simbo AI's phone system, healthcare leaders and IT managers sometimes find it hard to adopt and use AI fully. One reason lies in how researchers study AI acceptance. Most studies rely on self-reported data: they ask people about their thoughts, plans, or feelings about using AI instead of observing how people actually use it. Because of this, we do not know enough about how AI is used in real life or how people stick with it over time in healthcare.

This article looks at key points from a review by Sage Kelly, Sherrie-Anne Kaye, and Oscar Oviedo-Trespalacios in the journal Telematics and Informatics. They studied 60 articles from many industries to find out what affects people’s acceptance of AI. The article focuses on what these findings mean for healthcare leaders and IT staff in the United States. It also talks about why it is important to use research methods that watch real AI use in healthcare. Finally, it explains how AI tools like front-office phone systems fit into healthcare work.

Understanding AI Acceptance: Key Factors in Healthcare Settings

The review began with 7,912 articles and narrowed them down to 60 for close study. These studies covered many industries, but the results still matter for healthcare leaders who work to use AI for better office work and patient care.

The main way to study AI acceptance was the extended Technology Acceptance Model (TAM). This model looks at whether people want to use AI based on:

  • Perceived Usefulness: How helpful AI is seen in getting tasks done.
  • Performance Expectancy: The hope that AI will improve job results or make the practice work better.
  • Attitudes Toward AI: Whether people feel good or bad about using AI.
  • Trust: Belief that AI will work well and keep patient data safe.
  • Effort Expectancy: How easy or hard it is to learn and use AI tools.

In healthcare, administrators and IT managers think about how AI can cut down manual work, lower wait times on calls, and make scheduling easier. These goals map onto perceived usefulness and performance expectancy, two factors that support AI acceptance.
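To make the survey-based measurement behind these factors concrete, here is a minimal, hypothetical sketch of how TAM-style responses are typically scored: Likert items (1-5) for each construct are averaged into a construct score. The items and numbers below are invented for illustration, not data from the review.

```python
# Hypothetical sketch: averaging Likert-scale items (1-5) into TAM construct scores.
# The item responses below are invented for illustration only.
from statistics import mean

responses = {
    "perceived_usefulness": [4, 5, 4],   # e.g., "The AI phone system helps me finish tasks faster"
    "performance_expectancy": [4, 4, 3],
    "attitudes": [3, 4, 4],
    "trust": [2, 3, 3],
    "effort_expectancy": [4, 4, 5],
}

# One score per construct: the mean of its items, rounded for reporting.
construct_scores = {name: round(mean(items), 2) for name, items in responses.items()}

for name, score in construct_scores.items():
    print(f"{name}: {score}")
```

Scores like these are exactly the kind of self-reported measure the review says dominates the field, which is why the sections below argue for pairing them with observed-use data.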

But many people still have doubts, rooted in cultural and emotional factors. In some settings, people want human contact, and AI cannot fully replace human interaction. This matters a lot in healthcare, where trust and relationships with patients are central.

Limitations in Current Research: Why Self-Reported Data Falls Short

One big problem in most AI acceptance research is its heavy reliance on self-reported data, meaning surveys or interviews that ask people about their feelings and plans to use AI. While this helps, it has several problems in healthcare AI research:

  • Subjective Bias: What people say they will do may not match what they really do with AI.
  • Social Desirability: People might give answers they think are right instead of what they really feel.
  • Limited Context: Surveys don’t show the real challenges that healthcare workers face when using AI in busy offices.
  • Overlooked Long-Term Use: Self-reports often only show short-term plans and miss long-term use and satisfaction.

Because of these issues, self-reported data is less useful for healthcare leaders and IT managers who need strong proof about AI’s impact and acceptance to make decisions.

The Case for Naturalistic Research Methods in Healthcare AI Adoption

Naturalistic research means observing and measuring how AI is used in real healthcare settings. Rather than depending only on surveys, it studies actual use. This kind of research gives a better picture of how AI tools, like Simbo AI's phone systems, work day to day. It looks at things like:

  • How staff and patients really use the AI systems.
  • Problems found when putting AI into use.
  • Effects on how well work gets done and patient satisfaction.
  • Trust or frustration as it develops in real time.
  • Cultural or workplace factors that affect acceptance.

The researchers Kelly, Kaye, and Oviedo-Trespalacios suggest using more naturalistic studies to check existing theories like TAM and better understand AI use in real situations. This is very important in healthcare because of patient safety, data privacy rules, and complex workflows.

For healthcare leaders in the U.S., using this approach means relying on key performance indicators (KPIs), user logs, and watching AI use closely rather than only using surveys from staff or patients.
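For example, the log-based metrics mentioned above can be computed directly from call records instead of survey answers. The sketch below is hypothetical: the log fields (`wait_seconds`, `resolved_by`, `after_hours`) are assumptions for illustration, not Simbo AI's actual schema.

```python
# Hypothetical sketch: deriving naturalistic KPIs from AI phone-system call logs.
# The log schema and values are assumptions for illustration only.
from statistics import mean

call_log = [
    {"wait_seconds": 4, "resolved_by": "ai",    "after_hours": False},
    {"wait_seconds": 9, "resolved_by": "human", "after_hours": False},
    {"wait_seconds": 3, "resolved_by": "ai",    "after_hours": True},
    {"wait_seconds": 6, "resolved_by": "ai",    "after_hours": False},
]

# Observed KPIs, not self-reported intentions:
avg_wait = mean(c["wait_seconds"] for c in call_log)                       # average hold time
ai_containment = sum(c["resolved_by"] == "ai" for c in call_log) / len(call_log)  # calls AI resolved alone
after_hours_share = sum(c["after_hours"] for c in call_log) / len(call_log)       # off-hours coverage

print(f"average wait: {avg_wait:.1f}s")
print(f"AI containment rate: {ai_containment:.0%}")
print(f"after-hours call share: {after_hours_share:.0%}")
```

Tracking metrics like these over months, rather than at one survey moment, is what lets leaders see the long-term use and satisfaction that self-reports tend to miss.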

AI and Workflow Automation in Healthcare: Enhancing Front-Office Efficiency

Workflow automation is one key way AI improves healthcare management. Many hospitals and clinics in the U.S. get a large number of patient calls about appointments, referrals, bills, and questions. Slow phone answers or mistakes cause patient frustration and slow down office work.

Simbo AI focuses on AI-powered phone systems that answer calls automatically and route them accurately using natural language processing and voice recognition. This type of AI helps by:

  • Reducing Wait Times: Patients get quick help without waiting on hold.
  • Improving Accuracy: AI understands patient requests and sends calls to the right place.
  • Freeing Staff Time: Staff can spend time on harder work instead of answering phones all day.
  • Supporting After-Hours Calls: AI handles calls when staff are not working.
  • Enhancing Patient Engagement: Quick replies help patients see the practice as easier to reach.
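The routing step described above can be illustrated with a deliberately simplified sketch. Production systems use trained language models rather than keyword matching, and the intents, departments, and fallback behavior below are invented for illustration, not Simbo AI's actual design.

```python
# Hypothetical sketch of intent-based call routing.
# Real systems use trained NLP models; this keyword lookup and the
# department names are invented stand-ins for illustration.

ROUTES = {
    "appointment": "scheduling desk",
    "refill": "pharmacy line",
    "bill": "billing office",
}

def route_call(transcript: str) -> str:
    """Match the caller's request to a department, else fall back to a human."""
    text = transcript.lower()
    for keyword, destination in ROUTES.items():
        if keyword in text:
            return destination
    return "human operator"  # hybrid fallback when no intent matches

print(route_call("I need to reschedule my appointment"))  # scheduling desk
print(route_call("Question about my bill"))               # billing office
print(route_call("Something else entirely"))              # human operator
```

Note the fallback to a human operator: it reflects the hybrid human-plus-AI arrangement the next paragraph describes, where callers who prefer human contact can still reach staff.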

Still, for these AI systems to be accepted, leaders must see them as useful, reliable, and easy to use. Trust is also important, so talking clearly about how patient data is handled helps build confidence. Cultural differences matter too—some groups or areas in the U.S. may want more human contact. This means mixing AI with human help may be best.

From the IT side, integrating Simbo AI's technology means making sure it works with current electronic health record (EHR) systems and follows HIPAA privacy rules. This adds challenges, but good planning can address them.

Practical Considerations for U.S. Healthcare Administrators and IT Managers

In real life, hospital leaders and IT managers should think about these points when deciding on AI use:

  • Assess Usefulness in Context: Check how AI helps in tasks like handling calls and scheduling. Look for real gains in speed or cost.
  • Consider Staff Attitudes: Talk with healthcare workers early to hear their worries, like fear of losing jobs or trouble with new technology. Address these openly.
  • Build Trust Through Transparency: Explain clearly how privacy and data safety are handled in AI systems.
  • Look at Effort Expectancy: Choose AI that is easy to learn and fits well with existing software and workflows.
  • Use Naturalistic Data for Decisions: Watch how AI is really used, keep getting feedback, and change plans based on what is seen, not just survey answers.
  • Respect Patient Preferences: Know that some patients may not want AI for certain tasks. Plan hybrid systems that allow switching to human help.
  • Include Cultural Sensitivity: Be aware of regional and demographic differences in the U.S. that affect how people accept AI.

Using these steps will help healthcare groups handle AI adoption better, for tools like Simbo AI’s phone system.

Summary of Research Authors and Source

This article is based mainly on a review by Sage Kelly, Sherrie-Anne Kaye, and Oscar Oviedo-Trespalacios. It was published in the journal Telematics and Informatics by Elsevier. The review looked at 60 studies out of 7,912 articles from five databases. Their work shows what is missing and difficult in how we understand AI acceptance. It stresses using real-world research methods rather than just self-reports.

In short, healthcare leaders, practice owners, and IT managers in the United States should pay attention to how AI fits into daily work, the attitudes of staff and patients, and cultural differences. Using real-world research methods, along with focusing on useful, trustworthy, and easy-to-use AI systems like Simbo AI’s phone tools, will support meaningful AI use that helps deliver better healthcare.

Frequently Asked Questions

What was the main focus of the systematic review in the article?

The review focused on user acceptance of artificial intelligence (AI) technology across multiple industries, investigating behavioral intention or willingness to use, buy, or try AI-based goods or services.

How many studies were included in the systematic review?

A total of 60 articles were included in the review after screening 7,912 articles from multiple databases.

What theory was most frequently used to assess user acceptance of AI technologies?

The extended Technology Acceptance Model (TAM) was the most frequently employed theory for evaluating user acceptance of AI technologies.

Which factors significantly positively influenced AI acceptance and use?

Perceived usefulness, performance expectancy, attitudes, trust, and effort expectancy were significant positive predictors of behavioral intention, willingness, and use of AI.

Did the review find any cultural limitations to AI acceptance?

Yes, in some cultural situations, the intrinsic need for human contact could not be replaced or replicated by AI, regardless of its perceived usefulness or ease of use.

What gap does the review identify in current AI acceptance research?

There is a lack of systematic synthesis and definition of AI in studies, and most rely on self-reported data, limiting understanding of actual AI technology adoption.

What does the article recommend for future research on AI acceptance?

Future studies should use naturalistic methods to validate theoretical models predicting AI adoption and examine biases such as job security concerns and pre-existing knowledge influencing user intentions.

How is acceptance of AI defined in the review?

Acceptance is defined as the behavioral intention or willingness to use, buy, or try an AI good or service.

How many studies defined AI for their participants?

Only 22 out of the 60 studies defined AI for their participants; 38 studies did not provide a definition.

What industries did the review find AI acceptance factors applied to?

The acceptance factors applied across multiple industries, though the article does not specify particular sectors but implies broad applicability in personal, industrial, and social contexts.