Artificial intelligence (AI) is becoming an important part of healthcare systems in the United States, with the potential to improve patient care, reduce paperwork, and make clinical work easier. Most people in healthcare already know this. The harder part is turning AI from research into tools that doctors and nurses can use every day. That challenge is known as the “last mile” problem.
The last mile problem refers to the difficulty of taking AI outputs and putting them to use in everyday clinical settings. Even accurate AI tools may go unused if they do not fit the way healthcare workers actually do their jobs, and if a tool is hard to understand or cannot explain its results, clinicians may not trust it. This article examines these problems, draws on research and practical ideas for healthcare managers in the U.S., and looks at how AI can automate work and improve efficiency.
The healthcare AI lifecycle involves many steps beyond building and validating a model. It includes identifying needs, designing solutions, putting them into use, evaluating how well they work, and fixing problems continuously. The last mile problem arises mostly when AI outputs must fit into real clinical workflows and when users such as doctors, nurses, and staff must trust and adopt them.
Research by Thomas Kannampallil, PhD, points to a central problem: “researched models are rarely implemented; implemented models are rarely researched.” In other words, many AI tools never make the move from idea into routine practice. Several factors contribute to this gap.
The National Academy of Medicine’s (NAM) Healthcare AI Code of Conduct treats the last mile problem as a systems challenge, not just a technical one. The group calls for collaboration, clear rules, and accountability to make sure AI tools actually support clinical goals.
User acceptance is central to solving the last mile problem. Adopting AI is not only about the technology working well but also about users being ready and willing to use it. Moustafa Abdelwanis and colleagues studied healthcare workers’ views and found human barriers such as poor training, fear of change, and worry about added workload. These barriers keep AI from being used fully and effectively.
Good user acceptance therefore depends on addressing those barriers: adequate training, clear communication about how workflows will change, and evidence that the AI reduces rather than adds to the workload.
Leaders play an important role here. New positions such as Chief Health AI Officer, now found in some U.S. health systems, help guide AI strategy, ethics, training, and the ongoing evaluation of AI programs. This, in turn, helps build user acceptance.
Healthcare organizations face many practical challenges when adding AI. Connecting AI with systems such as electronic health records (EHRs), appointment schedulers, and hospital software requires strong IT support. Health data is complex, privacy obligations are serious, and regulations are strict, all of which add difficulty.
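As a rough illustration of what that integration work looks like, the sketch below pulls booked appointments from a scheduling system that exposes a FHIR REST interface. The base URL, access token, and date are placeholders; real integrations involve vendor-specific authorization, error handling, and data mapping.

```python
import requests

# Hypothetical endpoint and token -- real values come from the
# organization's EHR vendor and its authorization server.
FHIR_BASE_URL = "https://ehr.example.org/fhir"
ACCESS_TOKEN = "replace-with-a-real-token"

def booked_appointments(date: str) -> list[dict]:
    """Fetch booked Appointment resources for a given day (YYYY-MM-DD)."""
    response = requests.get(
        f"{FHIR_BASE_URL}/Appointment",
        params={"date": date, "status": "booked"},
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Accept": "application/fhir+json",
        },
        timeout=30,
    )
    response.raise_for_status()
    bundle = response.json()
    # A FHIR search returns a Bundle; each entry wraps one resource.
    return [entry["resource"] for entry in bundle.get("entry", [])]

if __name__ == "__main__":
    for appt in booked_appointments("2025-01-15"):
        print(appt.get("id"), appt.get("start"))
```

Even a small integration like this depends on the IT support, privacy safeguards, and data governance described above.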
Philip R.O. Payne, PhD, Chief Health AI Officer at Washington University Medicine and BJC Healthcare, argues that strong governance and modernized technology infrastructure are needed for safe AI use. Without sound data management and routine quality checks, AI can fail or behave inconsistently, which erodes trust.
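The kind of routine quality check that point implies can be quite simple. The sketch below flags missing or implausible values in a daily feed of model inputs; the field names and thresholds are illustrative assumptions, not drawn from any particular system.

```python
import pandas as pd

# Hypothetical quality rules for a daily feed of model inputs; field
# names and thresholds are illustrative only.
REQUIRED_FIELDS = ["patient_id", "encounter_date", "age", "lab_value"]
MAX_MISSING_RATE = 0.05        # flag a field if >5% of rows are missing
VALID_AGE_RANGE = (0, 120)

def quality_report(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable issues found in the input data."""
    issues = []
    for field in REQUIRED_FIELDS:
        if field not in df.columns:
            issues.append(f"missing column: {field}")
            continue
        missing_rate = df[field].isna().mean()
        if missing_rate > MAX_MISSING_RATE:
            issues.append(f"{field}: {missing_rate:.1%} missing values")
    if "age" in df.columns:
        out_of_range = (~df["age"].dropna().between(*VALID_AGE_RANGE)).sum()
        if out_of_range:
            issues.append(f"age: {out_of_range} values outside {VALID_AGE_RANGE}")
    return issues
```

Running a report like this before data reaches a model is one concrete way governance and quality checks show up in daily operations.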
In radiology, Panagiotis Korfiatis, PhD, and colleagues show that AI deployment must balance operational, clinical, and regulatory demands. Hospitals must choose between buying vendor-supported AI solutions, which may offer limited flexibility, and building their own tools, which require strong internal quality control to remain safe and effective.
Using AI well usually takes multiple steps: identifying a clinical need, designing and validating a solution, integrating it with existing systems, training users, monitoring performance, and refining the tool over time.
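One lightweight way to keep those steps visible is to track each project against a shared checklist. The sketch below is a minimal, hypothetical example; the step names simply mirror the lifecycle described above.

```python
from dataclasses import dataclass, field

# Illustrative only: a minimal way to track where each AI project sits
# in the deployment lifecycle described above.
LIFECYCLE_STEPS = [
    "identify clinical need",
    "design and validate model",
    "integrate with existing systems",
    "train users",
    "monitor performance",
    "refine and maintain",
]

@dataclass
class AIProject:
    name: str
    completed: set = field(default_factory=set)

    def complete(self, step: str) -> None:
        if step not in LIFECYCLE_STEPS:
            raise ValueError(f"unknown step: {step}")
        self.completed.add(step)

    def next_step(self):
        """Return the earliest lifecycle step not yet completed."""
        for step in LIFECYCLE_STEPS:
            if step not in self.completed:
                return step
        return None

project = AIProject("sepsis early-warning tool")
project.complete("identify clinical need")
print(project.next_step())  # -> "design and validate model"
```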
AI-driven automation is a practical way to address the last mile problem and improve workflows in U.S. healthcare. For example, AI phone systems such as those from Simbo AI automate patient calls, schedule appointments, and answer common questions without needing a person on the line.
This kind of automation can reduce the paperwork and phone burden in medical offices by answering routine calls, scheduling appointments, handling frequently asked questions, and passing more complex requests to staff.
For these AI systems to work, they must fit the specific ways each office operates, including its scheduling rules, patient population, and communication style.
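To make that concrete, the sketch below shows how office-specific settings might drive an automated call handler. The configuration, intents, and messages are hypothetical and do not represent Simbo AI’s actual product or API.

```python
# A hypothetical sketch of office-specific configuration for an automated
# phone assistant. Settings, intents, and messages are illustrative only.
OFFICE_CONFIG = {
    "office_hours": {"start": 8, "end": 17},        # local clock hours
    "new_patient_slots": ["Tue 09:00", "Thu 14:00"],
    "after_hours_message": "For emergencies, call 911. Otherwise leave a message.",
}

def handle_call(intent: str, hour: int, config: dict) -> str:
    """Route a caller's request using the practice's own rules."""
    in_hours = config["office_hours"]["start"] <= hour < config["office_hours"]["end"]
    if not in_hours:
        return config["after_hours_message"]
    if intent == "schedule_new_patient":
        slots = ", ".join(config["new_patient_slots"])
        return f"Our next new-patient openings are: {slots}."
    if intent == "prescription_refill":
        return "I can send a refill request to your care team now."
    # Anything the assistant cannot handle goes to a person.
    return "Let me transfer you to our front desk staff."

print(handle_call("schedule_new_patient", hour=10, config=OFFICE_CONFIG))
```

The point of the configuration object is that the same automation logic can behave differently for each practice without rewriting the system.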
Ethics and regulatory compliance are central to using AI in healthcare. The NAM Healthcare AI Code of Conduct states that AI use must be fair, transparent, and accountable.
Healthcare organizations must make sure AI tools do not introduce bias, must protect patient privacy, and must comply with laws such as HIPAA. Hospitals also need to demonstrate clear AI governance to meet these requirements and show that AI is being used responsibly.
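One common, minimal form of bias check is to compare a model’s accuracy across patient groups before deployment. The sketch below shows that idea with toy data; the column names and grouping variable are assumptions, and real fairness audits are considerably broader than a single metric.

```python
import pandas as pd

def accuracy_by_group(results: pd.DataFrame, group_col: str) -> pd.Series:
    """Compare model accuracy across patient groups.

    Expects columns `y_true`, `y_pred`, and a grouping column such as a
    self-reported demographic category (column names are illustrative).
    """
    correct = results["y_true"] == results["y_pred"]
    return correct.groupby(results[group_col]).mean()

# Toy data to show the shape of the check, not real patient results.
results = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1],
    "y_pred": [1, 0, 0, 1, 0, 0],
    "group":  ["A", "A", "A", "B", "B", "B"],
})
print(accuracy_by_group(results, "group"))
# A large gap between groups is a signal to investigate before deployment.
```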
Workforce training is another key part of making AI work in healthcare. It should cover not only how to use AI tools but also the ethical issues involved and how workflows change when AI is introduced.
More U.S. healthcare organizations now appoint leaders to handle these needs. A Chief Health AI Officer, for example, helps set AI strategy, oversee ethical implementation, keep projects aligned with clinical goals, promote AI literacy, and manage the AI lifecycle from development through evaluation.
This kind of leadership addresses several problems at once and makes AI adoption more sustainable.
Radiology is a frequent proving ground for clinical AI because it relies heavily on imaging and complex diagnoses. Research from Mayo Clinic shows that successful AI use in radiology requires clear plans that align clinical needs, IT infrastructure, and regulatory requirements.
Practical concerns such as workflow interruptions and workload must also be managed, which helps avoid burnout and supports diagnostic accuracy. These lessons carry over to other clinical areas that want to use AI well.
For healthcare managers, owners, and IT staff in the U.S., solving the last mile problem is what makes AI genuinely useful. AI must do more than work technically; it has to fit human needs, organizational culture, regulations, and ethics, operate naturally within daily tasks, and be accepted by its users.
Automation of tasks such as phone answering shows how AI can serve both patients and staff while improving efficiency. Careful planning, leadership, and ongoing training are what move AI from experiments to tools everyone trusts.
Using AI well takes teamwork and sustained effort. Healthcare organizations that plan carefully and fix problems early give themselves the best chance to improve patient care and operate more effectively with AI.
AI provides patient monitoring via wearables, enhances clinical decision support, accelerates precision medicine and drug discovery, innovates medical education, and improves operational efficiency by automating tasks like coding and scheduling.
Governance ensures safety, fairness, and accountability in AI deployment. It involves establishing policies and infrastructure that support ethical AI use, data management, and compliance with regulatory standards.
Challenges include developing strategic AI integration, modernizing infrastructure, training an AI-literate workforce, ensuring ethical behavior, and addressing workflow and sociotechnical complexities during implementation.
This leader guides AI strategy, oversees ethical implementation, ensures alignment with clinical goals, promotes AI literacy, and manages the AI lifecycle from development to evaluation in healthcare settings.
A code of conduct sets ethical principles and expected behaviors, fosters shared values, promotes accountability, and guides stakeholders to responsibly develop and use AI technologies in healthcare.
Biomedicine’s interdependent, nonlinear, and adaptive nature requires AI solutions to manage unpredictable outcomes and collaborate across multiple stakeholders and disciplines to be effective.
It refers to challenges in translating AI model outputs into real-world clinical workflows, addressing sociotechnical factors, user acceptance, and ensuring practical usability in healthcare environments.
It advances governance interoperability, defines stakeholder roles, promotes a systems approach over siloed models, and strives for equitable distribution of AI benefits in healthcare and biomedical science.
Three scenarios describe how healthcare data and AI models may evolve together: in the first, data growth outpaces model effectiveness; in the second, data growth and model effectiveness grow at comparable rates; in the third, model effectiveness grows faster than the available data, requiring new data sources for training.
Training clinicians and engineers in AI literacy ensures teams can effectively develop, implement, and manage AI tools, addressing technical and ethical challenges while maximizing AI’s positive impact on patient care.