Many AI tools in healthcare show good accuracy in labs and research settings. For example, Google Health’s AI for spotting diabetic retinopathy—damage to the eye caused by diabetes—achieved over 90 percent accuracy in eye scan tests under controlled conditions. This was similar to what human specialists can do and made people hopeful the technology could help with faster screenings.
But when used in clinics in Thailand, the AI ran into problems. More than 20 percent of images were rejected because of bad lighting, poor image quality, or slow internet. This caused delays, upset nurses, and led to extra visits for patients. Nurses said that needing very high-quality images slowed down care instead of speeding it up.
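The rejection problem described above follows from a gating pattern: the system screens each image for quality before the model ever sees it. The sketch below is purely illustrative, not Google Health's actual pipeline; the threshold values and the simple brightness/contrast checks are invented to show why dim or blurry clinic captures get bounced back.

```python
# Illustrative sketch of a pre-upload image quality gate, the kind of check
# that rejected over 20 percent of images in the Thai clinics.
# Thresholds and checks are hypothetical, not the real system's.

def quality_gate(pixels, min_brightness=40, max_brightness=220, min_contrast=30):
    """Accept or reject a grayscale image (list of 0-255 values) before upload.

    Returns (accepted, reason). Real screening tools apply far more
    sophisticated checks; this only demonstrates the gating pattern.
    """
    if not pixels:
        return False, "empty image"
    mean = sum(pixels) / len(pixels)
    if mean < min_brightness:
        return False, "too dark (poor lighting)"
    if mean > max_brightness:
        return False, "overexposed"
    if max(pixels) - min(pixels) < min_contrast:
        return False, "low contrast (likely blurred)"
    return True, "ok"


# A dim capture from a poorly lit exam room is rejected before analysis,
# forcing the nurse to retake the photo or schedule a follow-up visit.
print(quality_gate([20, 25, 22, 30, 28]))    # rejected: too dark
print(quality_gate([90, 180, 60, 200, 40]))  # accepted
```

The design tension the nurses reported lives in those thresholds: set them strictly and the model sees only clean images, but many real clinic photos are turned away; relax them and accuracy in the field drops.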
These issues show a bigger problem: rules for AI in healthcare mostly look at how accurate the AI is technically. They do not focus on whether the AI improves patient care or fits well into daily clinic work. Experts like Emma Beede, a user experience researcher at Google Health, say it is important to understand how AI tools fit into real settings before using them widely. She said, “We have to understand how AI tools are going to work for people in context—especially in health care—before they’re widely deployed.”
Michael Abramoff, an eye doctor and AI researcher, said, “there is more to health care than algorithms.” He noted that human doctors often disagree and AI should take this into account. These points suggest that healthcare leaders should not rush to use AI without thinking about how it will really work and how people will use it.
Bringing AI into healthcare is not just about adding new technology. It means carefully understanding how clinics and hospitals already work. A study with four big U.S. healthcare systems showed that careful study of current workflows is key to adding new technology without causing problems.
Hospitals and clinics that tried to add Electronic Health Records (EHR) along with AI found that simply copying old paper processes into digital form isn't enough. For example, poor communication systems and excess paperwork were estimated to cost a single hospital between $12 million and $36 million a year in lost productivity. Clinics need to look closely at how they work now and adjust their processes to fit new digital tools before adding AI.
Michael Lazor, who led several projects like this, said that combining workflow analysis, staff training, and open communication helps reduce resistance and improves results over time.
The biggest gain AI offers is automating the simple, repetitive tasks that consume doctors' and staff members' time. AI-powered automation cuts down on mistakes, speeds up communication, and lets healthcare workers focus on harder patient care tasks.
For example, cloud-based Clinical Decision Support (CDS) tools like EvidencePoint run separately from the main EHR systems, which makes them easier to deploy and scale. These tools give doctors real-time data support and simplify decisions without interrupting normal work. Rapid rollout of such tools proved valuable during urgent periods like the COVID-19 pandemic, showing that AI can be added quickly.
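The decoupling just described can be sketched as a simple pattern: the EHR asks an external CDS service for advice and "fails open," so a slow or unavailable service never blocks normal clinical work. This is a hypothetical illustration of the architecture, not EvidencePoint's actual API; the function names, patient fields, and the A1c rule are all invented.

```python
# Sketch of a decoupled CDS integration (hypothetical names and fields):
# advice is fetched from a service that lives outside the EHR, and any
# failure returns an empty suggestion list so clinicians can proceed.

def fetch_cds_advice(patient, cds_service, timeout_s=2.0):
    """Return CDS suggestions, or an empty list if the service is slow or down."""
    try:
        return cds_service(patient, timeout_s)
    except Exception:
        return []  # fail open: normal clinical work is never blocked

def demo_service(patient, timeout_s):
    """Stand-in for an external CDS endpoint with one invented rule."""
    if patient.get("a1c", 0) > 6.5:
        return ["Consider diabetic retinopathy screening"]
    return []

def broken_service(patient, timeout_s):
    """Simulates an unreachable CDS service."""
    raise TimeoutError("CDS unreachable")

print(fetch_cds_advice({"a1c": 7.2}, demo_service))    # suggestion returned
print(fetch_cds_advice({"a1c": 7.2}, broken_service))  # [] — workflow continues
```

The fail-open choice is the key design point: decision support is additive, so its outages degrade gracefully instead of stopping care, which is what lets such tools be adopted quickly without disrupting existing workflows.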
Denver Health made a big change by turning off 120 systems and moving to one integrated Epic EHR platform. This helped share data better, cut login times for doctors from 30 seconds to 5 seconds, and improved workflows. AI also helped with doctor burnout by automating routine tasks and giving smarter alerts.
These examples from U.S. hospitals show how consolidating and automating systems can boost productivity by up to 40 percent while lowering costs.
Based on studies and expert advice, leaders and IT managers in U.S. clinics should study current workflows before adding new tools, involve clinical staff early, and integrate AI with the systems they already use.
The key to using AI well in healthcare is teamwork between people and technology. Curtis Langlotz from the Radiological Society of North America says AI should help doctors, not replace them. Good AI adds to human judgment by giving faster and more correct data to support decisions.
Hospitals like East Alabama Medical Center showed that involving staff early, support from leaders, and fitting AI tools into current workflows can improve patient care and make doctors happier.
Also, AI can help with staff shortages by making work more efficient. This is becoming important as U.S. healthcare sees more patients and more doctor burnout. Teaching future health workers how to use AI will become a normal part of training.
For medical leaders and IT managers wanting to add AI to their clinics, workflow automation should be a main focus. Practical AI uses include booking appointments, answering routine patient questions, handling front-office calls, and helping with paperwork.
Simbo AI is a company that automates front-office phone tasks. Using AI for phone triage and patient communication lets clinics save staff time for higher-value work like patient support and follow-up. By connecting with current management systems, AI answering services reduce missed calls, speed up responses, and improve patient experience.
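The triage logic behind such an answering service can be sketched as a routing decision: routine, confidently classified intents are handled automatically, while anything urgent or uncertain is escalated to a human. This is a generic illustration of the pattern, not Simbo AI's implementation; the intent labels, confidence threshold, and actions are all invented.

```python
# Hypothetical sketch of AI phone triage: handle routine caller intents
# automatically, escalate emergencies and low-confidence cases to staff.
# Intent names, actions, and the 0.8 threshold are invented for illustration.

ROUTINE_ACTIONS = {
    "appointment": "booked via scheduling system",
    "refill": "forwarded to pharmacy queue",
    "hours": "answered from clinic FAQ",
}

def triage_call(intent, confidence, threshold=0.8):
    """Return (handled_by_ai, action) for a caller intent and its confidence.

    Emergencies always go to staff; unknown or uncertain intents go to the
    front desk rather than risking a wrong automated response.
    """
    if intent == "emergency":
        return False, "transfer to clinical staff immediately"
    if confidence < threshold or intent not in ROUTINE_ACTIONS:
        return False, "transfer to front desk"
    return True, ROUTINE_ACTIONS[intent]

print(triage_call("appointment", 0.95))     # handled automatically
print(triage_call("billing dispute", 0.9))  # unknown intent → human
print(triage_call("refill", 0.5))           # low confidence → human
```

The escalation rule matters as much as the automation: sending uncertain calls to staff is what keeps the time savings from coming at the cost of patient trust.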
Good AI automation also cuts manual mistakes and lightens admin workload. This helps daily operations and lowers staff burnout, which is a growing problem in U.S. healthcare.
To get the best results, leaders should fit AI tools into existing workflows, involve staff early, and connect automation with the management systems already in place. By focusing on these points, U.S. medical practices can use AI without causing workflow problems, cut operating costs, and give patients better care.
Ensuring effective AI use in healthcare takes detailed knowledge of workflows, smart planning, and ongoing work between people and technology. U.S. clinics that develop full integration plans can use AI to improve efficiency, reduce burnout, and improve patient care while lowering interruptions.
AI technologies require approvals like FDA clearance in the U.S. or CE mark in Europe, but current standards mainly focus on accuracy rather than improving patient outcomes.
The study found that while Google’s AI was accurate in lab settings, it struggled in real-life environments, highlighting that context is crucial for effectiveness.
Google’s AI tool aimed to screen for diabetic retinopathy, drastically reducing the time needed for diagnosis from potentially weeks to minutes.
Challenges included high levels of image rejection due to quality issues and poor internet connectivity, leading to frustrations among nurses and patients.
Nurses experienced mixed feelings; while AI sped up some processes, it also led to unnecessary follow-up appointments when images were rejected.
Experts like Hamid Tizhoosh highlighted the importance of cautious deployment and warned against a rush in announcing AI tools without healthcare expertise.
Existing rules set by regulatory bodies do not require AI systems to demonstrate an improvement in patient outcomes, which experts argue should change.
While the AI had the potential to enhance efficiency, it also disrupted workflow by requiring high-quality inputs that were often not met in real-world conditions.
If AI is tailored properly, it can significantly enhance the capabilities of skilled healthcare professionals and improve patient experiences.
The potential for backlash exists if AI tools fail, as poor experiences with AI could undermine trust and acceptance among healthcare professionals and patients.