Recent studies show that U.S. healthcare organizations are not all equally ready to use AI. A Microsoft Cloud report found that about 28% of healthcare organizations are in the “scaling” or “realizing” stages, meaning they have moved past experimenting with AI and are using it routinely to produce measurable results. By contrast, around 44% remain in the early “exploring” or “planning” phases, still learning what AI can do and laying plans for future use.
The pace of adoption is uneven. Some hospitals and clinics already use AI tools for diagnosis or operations, while many others lack the leadership, clear strategy, or governance needed to move forward. The gap depends on factors such as organization size, budget constraints, in-house AI expertise, and how difficult it is to integrate AI into existing systems.
Notably, 14% of healthcare organizations report that their AI projects show no clear benefits. This can stem from projects that are poorly aligned with clinical goals, difficulty managing AI systems after deployment, or an incomplete understanding of what AI can realistically do.
Compared with sectors such as financial services, where about 40% of organizations are actively using AI, healthcare is moving more slowly. This underscores the need for healthcare leaders to build adoption plans and evaluation processes suited to their own settings.
Being ready for AI is not just about buying the latest software. Carissa Eicholz, a Microsoft Cloud Marketing Director, says success with AI depends equally on strategy, organization, and culture, so healthcare organizations need to assess all three together rather than treating AI as a purchasing decision.
The Microsoft AI Readiness Wizard assesses these factors and rates an organization’s readiness across five stages: exploring, planning, implementing, scaling, and realizing. Each successive stage reflects broader use and deeper integration of AI.
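To make the idea of staged readiness concrete, here is a minimal sketch of how factor ratings could map to the five stages named above. The stage names come from the article; the factor names and scoring thresholds are purely hypothetical and are not Microsoft’s actual assessment method.

```python
from enum import Enum
from statistics import mean


class ReadinessStage(Enum):
    EXPLORING = 1
    PLANNING = 2
    IMPLEMENTING = 3
    SCALING = 4
    REALIZING = 5


def classify_readiness(factor_scores: dict[str, float]) -> ReadinessStage:
    """Map factor ratings (0.0-1.0) to a readiness stage.

    The factors and thresholds below are illustrative placeholders,
    not Microsoft's actual assessment criteria.
    """
    score = mean(factor_scores.values())
    if score < 0.2:
        return ReadinessStage.EXPLORING
    if score < 0.4:
        return ReadinessStage.PLANNING
    if score < 0.6:
        return ReadinessStage.IMPLEMENTING
    if score < 0.8:
        return ReadinessStage.SCALING
    return ReadinessStage.REALIZING


# Example: an organization strong on strategy but weak on governance.
stage = classify_readiness(
    {"strategy": 0.7, "organization": 0.5, "culture": 0.6, "governance": 0.3}
)
print(stage.name)  # IMPLEMENTING
```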
Large healthcare systems like the Department of Veterans Affairs (VA) illustrate how hard it can be to introduce advanced technology. The VA’s rollout of the Oracle Cerner EHR system led to patient safety problems, with software errors affecting how medications were managed.
Since 2018, the VA has spent almost $9.4 billion modernizing its EHR systems, including $5 billion on Oracle Cerner deployments at five medical centers. Despite this investment, 16 reports from the VA Office of Inspector General (OIG) have documented serious safety issues. Problems such as incorrect patient identifier transmissions affected about 250,000 veterans and complicated medication management. Hospitals had to increase pharmacy staffing by up to 60%, adding millions of dollars in cost and workload.
Lawmakers worry that these technology problems will slow the VA’s adoption of AI, even though the agency is already testing more than 40 AI applications. Charles Worthington, the VA’s chief technology and AI officer, stressed that AI must be integrated carefully so that it helps providers rather than adding to their burden.
The VA’s experience shows that stable core health IT systems are a prerequisite for advanced AI to work well, and it underlines the need for thorough testing, staff training, and gradual rollout of AI projects.
One of the most common first uses of AI in healthcare is automating front-office tasks. AI can handle phone systems, appointment scheduling, and routine patient communication, easing staff workload, reducing errors, and improving the patient experience.
Simbo AI is one company that applies AI to phone automation. Its AI answering service can handle common patient questions, take appointment requests, and provide information without a human on the line, which helps healthcare organizations in several ways, from easing front-office workload to improving response times for patients.
Adopting tools like this requires careful planning. Workflows must be redesigned so the AI fits in smoothly; for example, if the AI detects an urgent medical question, there must be a clear, fast path to escalate the call to live staff. A sketch of such escalation logic follows.
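As an illustration of that escalation requirement, here is a minimal sketch with hypothetical intent labels, routing rules, and a placeholder confidence threshold. This is not Simbo AI’s actual implementation, only one way the fail-safe pattern could look.

```python
from dataclasses import dataclass

# Hypothetical intent labels an AI answering service might assign to a call.
URGENT_INTENTS = {"chest_pain", "breathing_difficulty", "medication_overdose"}
SELF_SERVICE_INTENTS = {"appointment_request", "office_hours", "billing_question"}


@dataclass
class CallRouting:
    destination: str   # where the call goes next
    reason: str        # why, for audit logs


def route_call(intent: str, confidence: float) -> CallRouting:
    """Decide whether the AI keeps the call or hands it to live staff.

    The 0.8 confidence threshold is an illustrative placeholder; a real
    deployment would tune it against clinical review.
    """
    if intent in URGENT_INTENTS:
        # Urgent medical questions always go to a human, immediately.
        return CallRouting("live_staff_priority_queue", f"urgent intent: {intent}")
    if intent in SELF_SERVICE_INTENTS and confidence >= 0.8:
        return CallRouting("ai_self_service", f"routine intent: {intent}")
    # Unknown or low-confidence intents fail safe to a human.
    return CallRouting("live_staff_queue", "low confidence or unrecognized intent")


print(route_call("chest_pain", 0.95).destination)          # live_staff_priority_queue
print(route_call("appointment_request", 0.9).destination)  # ai_self_service
```

The key design choice is that uncertainty fails toward a human: the AI only keeps a call when the intent is both routine and recognized with high confidence.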
These systems also have to keep patient data safe. They must comply with privacy laws such as HIPAA and protect records with encryption and access controls.
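To make the encryption point concrete, here is a minimal sketch of encrypting a call transcript at rest using the open-source Python `cryptography` package. The transcript content is made up, and in a real HIPAA environment the key would come from a managed key service rather than being generated inline.

```python
from cryptography.fernet import Fernet

# In production the key would come from a key management service with
# access controls and rotation; generating it inline is for illustration only.
key = Fernet.generate_key()
cipher = Fernet(key)

# Hypothetical call transcript captured by an AI answering service.
transcript = "Patient Jane Doe requests a refill of lisinopril 10mg."

# Encrypt before writing anywhere persistent, so data at rest is protected.
encrypted = cipher.encrypt(transcript.encode("utf-8"))

# Decrypt only inside an access-controlled service when the record is needed.
decrypted = cipher.decrypt(encrypted).decode("utf-8")
assert decrypted == transcript
```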
For IT managers and administrators, moving to AI automation means redesigning workflows, defining escalation paths like the one sketched above, and putting the necessary privacy and security controls in place before launch.
By automating front-office work, healthcare organizations can improve their operations while preserving the personal touch patients expect.
Using AI in healthcare raises important ethical and risk questions. The National Institute of Standards and Technology (NIST) developed the AI Risk Management Framework (AI RMF) to help organizations manage the risks AI systems introduce.
The AI RMF emphasizes designing AI that is fair, transparent, and accountable, and it helps organizations spot potential problems early. The companion AI RMF Playbook offers practical steps for deploying AI safely, and the AI RMF Roadmap lays out future priorities for AI governance.
More recently, NIST released the Generative Artificial Intelligence Profile, which addresses risks specific to generative AI and offers risk management practices that organizations of different sizes can adapt.
Healthcare organizations using AI should draw on these resources to identify risks before deployment, document how each AI system is used, and monitor systems continuously once they are live; a sketch of what a lightweight risk register built around the framework’s core functions might look like follows.
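As one possible starting point, here is a minimal sketch of a risk register organized around the AI RMF’s four core functions (Govern, Map, Measure, Manage). The specific fields and the example entry are hypothetical, not prescribed by NIST.

```python
from dataclasses import dataclass, field
from enum import Enum


class RMFFunction(Enum):
    """The four core functions of the NIST AI RMF 1.0."""
    GOVERN = "govern"
    MAP = "map"
    MEASURE = "measure"
    MANAGE = "manage"


@dataclass
class RiskEntry:
    system: str                 # which AI system the risk belongs to
    description: str            # the risk in plain language
    function: RMFFunction       # which RMF function addresses it
    owner: str                  # who is accountable
    mitigations: list[str] = field(default_factory=list)
    status: str = "open"


# Hypothetical entry for an AI phone-answering system.
register = [
    RiskEntry(
        system="ai-answering-service",
        description="Urgent medical calls misclassified as routine",
        function=RMFFunction.MEASURE,
        owner="clinical-safety-officer",
        mitigations=["weekly audit of escalation logs",
                     "fail-safe routing of low-confidence calls to staff"],
    ),
]

for entry in register:
    print(f"[{entry.function.value}] {entry.system}: {entry.description} ({entry.status})")
```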
The VA’s problems with the Oracle Cerner system show how insufficient attention to these risks can lead to serious safety and legal consequences.
Healthcare leaders and IT staff should recognize that using AI well requires ongoing learning. Programs like Harvard Medical School’s “AI in Health Care: From Strategies to Implementation” offer targeted training for healthcare executives, clinicians, and developers.
The eight-week course covers how AI systems are built and used, from training and testing to deployment, with a focus on ethics, reliability, and real-world constraints. Participants work with risk models, wearable data, and bias checks, and complete projects building AI-based healthcare solutions tailored to an organization’s needs. A simple example of the kind of bias check involved appears below.
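As an illustration of a basic bias check of the kind such coursework covers (not drawn from the Harvard curriculum), the sketch below computes the demographic parity difference, the gap in a model’s positive-prediction rate between two patient subgroups. The predictions and group labels are synthetic.

```python
# A basic fairness check: compare a model's positive-prediction rate
# across patient subgroups. A large gap can signal bias worth investigating.
# The data below is synthetic and purely illustrative.

predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]        # model outputs (1 = flagged)
groups      = ["a", "a", "a", "b", "b", "b", "a", "b", "a", "b"]


def positive_rate(preds: list[int], grps: list[str], group: str) -> float:
    """Fraction of members of `group` that the model flagged positive."""
    members = [p for p, g in zip(preds, grps) if g == group]
    return sum(members) / len(members)


rate_a = positive_rate(predictions, groups, "a")
rate_b = positive_rate(predictions, groups, "b")

# Demographic parity difference: 0.0 means both groups are flagged
# at the same rate; larger magnitudes suggest disparate treatment.
print(f"group a rate: {rate_a:.2f}")   # 0.60
print(f"group b rate: {rate_b:.2f}")   # 0.40
print(f"parity difference: {abs(rate_a - rate_b):.2f}")   # 0.20
```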
These programs build not only technical understanding but also the skills to navigate regulation, clinical requirements, and operations. Healthcare organizations that invest in this kind of education are better prepared to select, deploy, and manage AI.
For medical practice leaders, clinic owners, and IT managers in the U.S., assessing AI readiness is an effort with many parts. Starting points include honestly gauging where the organization sits on the readiness spectrum, stabilizing core IT systems before layering AI on top, adopting a risk framework such as the NIST AI RMF, and investing in staff education.
The main aim should be to use AI not as a novelty but as a way to improve patient care, lighten the load on clinicians, and streamline administrative work, all within safety and legal requirements.
With careful planning and sound decisions, U.S. healthcare organizations can navigate the path to AI adoption and ensure these tools serve both providers and patients.
The VA’s Oracle Cerner experience yields several key points. Lawmakers expressed significant concerns about patient safety issues caused by software errors in the Oracle Cerner EHR, which have led to incorrect medication information and staffing increases at VA hospitals.
An error in Oracle Health’s software coding resulted in the incorrect transmission of VA Unique Identifier numbers, which could harm patient safety by affecting medication management.
Since April 2020, the VA OIG has published 16 reports concerning the Oracle Cerner EHR, with nine reports highlighting significant patient safety concerns.
Medical centers have had to increase their pharmacy staffing by 20% to 60% to address software bugs and backlog, resulting in millions of dollars in additional costs.
The coding error could affect as many as 250,000 veterans, exposing them to risks from contraindicated medications and allergy-related events.
VA leaders have asserted that they will only proceed with EHR system deployments at sites that are fully prepared, emphasizing ongoing efforts to enhance pharmacy functionality.
Charles Worthington mentioned the necessity of integrating AI solutions into workflows carefully to reduce the burden on healthcare providers rather than adding to their tasks.
Lawmakers voiced concerns that the current issues with the Oracle EHR system complicate the integration of AI, raising doubts about its successful implementation.
Lawmakers questioned whether the upcoming deployment of the EHR at the Captain James A. Lovell Federal Health Care Center should proceed without resolving critical pharmacy software issues.
The difficulties faced in the deployment of the Oracle Cerner EHR project raise concerns about the VA’s readiness to adopt other advanced technologies, including AI in health care.