Implementing AI in clinical settings involves many steps, from the initial idea through to full clinical use. The Department of Biomedical Informatics at Columbia University Vagelos College of Physicians and Surgeons hosted a workshop on the lifecycle of AI projects in healthcare, stressing that involving a broad range of stakeholders from the start is essential for AI tools to work well and remain in use.
Healthcare administrators and IT managers in the U.S. should understand that adopting AI involves more than acquiring new technology; it also means fitting AI into clinical workflows and existing operations. The AI project lifecycle spans idea generation, securing leadership support, addressing technical requirements, meeting regulatory obligations, keeping users engaged, and monitoring how the AI performs over time.
Involving leadership early helps resolve governance and legal questions about AI. In the U.S., healthcare organizations must ensure AI complies with federal and state laws on data privacy, patient consent, and cybersecurity. Responsible AI governance means establishing rules before, during, and after deployment, being transparent about how the AI works, and weighing risks such as physician liability.
The American Health Information Management Association (AHIMA) has urged healthcare organizations to address ethical and legal AI issues early, even before regulations are fully settled. Doing so helps avoid legal problems and preserves patient trust.
AI tools can automate many routine tasks in healthcare. The 2025 AHIMA Virtual AI Summit described AI as an “invisible workforce” handling jobs such as scheduling, billing, coding, and document review, which reduces manual work and errors.
For medical practice administrators and IT managers, AI supports better use of resources: staff can spend more time on patient care and less on paperwork. For example, ambient documentation tools can draft clinical notes from doctor-patient conversations, so clinicians do not have to write everything manually.
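As a rough illustration, the sketch below shows how an ambient documentation tool might assemble a draft SOAP note from a visit transcript. The VisitTranscript class, the prompt text, and the call_llm wrapper are hypothetical placeholders rather than any specific vendor's API; a real deployment would add speech-to-text, PHI safeguards, EHR integration, and mandatory clinician review.

```python
from dataclasses import dataclass


@dataclass
class VisitTranscript:
    """A transcribed doctor-patient conversation (already converted from audio)."""
    patient_id: str
    text: str


# Prompt template for turning a transcript into a draft SOAP note.
SOAP_PROMPT = (
    "You are a clinical documentation assistant. From the visit transcript below, "
    "draft a SOAP note (Subjective, Objective, Assessment, Plan) and flag anything "
    "uncertain for clinician review.\n\nTranscript:\n{transcript}"
)


def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for whatever licensed LLM service a practice uses."""
    raise NotImplementedError("Replace with your organization's approved vendor SDK call.")


def draft_soap_note(visit: VisitTranscript) -> str:
    """Return a draft note; a clinician must still review, edit, and sign it."""
    return call_llm(SOAP_PROMPT.format(transcript=visit.text))
```

The key design point is that the tool only drafts documentation; the clinician remains the author of record, which matches the oversight role described next.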
Health information professionals oversee these automated tasks to keep documentation accurate, complete, and compliant. They work with AI tools built on large language models that assist with record review, coding, and decision support.
Healthcare organizations planning AI adoption must keep training their staff. David Marc, PhD, CHDA, of AHIMA, noted that health information professionals need AI literacy to work effectively with AI systems, and that training builds staff confidence in using AI for documentation, coding, and administrative work.
Medical practice owners and managers in the U.S. should plan ongoing AI education so staff understand how AI affects billing, workflow efficiency, and regulatory compliance.
In U.S. medical offices, these AI tools can also improve patient safety and support more personalized care, which matters for managers balancing clinical quality with day-to-day operations.
AI models can produce unfair results if training data and model design are not monitored carefully. The workshop noted that technical teams need tools to audit AI for bias so that results are equitable; this is especially important in healthcare, where biased AI can harm patients.
Hospitals and clinics must also follow ethical guidance from groups such as the American Medical Association (AMA). The AMA holds that AI should assist physicians, not replace them, and calls for transparency about AI use and for policies that protect patient data and clarify physician responsibility.
Physicians must accept AI for it to work well in clinics. AMA research shows growing use among U.S. physicians: 66% in 2024, up from 38% in 2023. About 68% say AI helps their work, mainly by supporting clinical decisions, diagnostics, and administrative duties.
The idea of AI as a “co-pilot,” endorsed by the AMA, means AI handles repetitive work and improves information flow, letting physicians focus more on patient care.
Artificial intelligence has the potential to change how care is delivered in U.S. medical facilities, but success depends on careful planning, engagement with all stakeholders, and adherence to ethical and legal requirements.
Healthcare leaders should look beyond the technology itself to address the human, financial, and legal dimensions of adoption. Doing so helps AI support both better patient care and smoother practice operations. With physician adoption rising and training improving, U.S. healthcare can put AI to use more widely, leading to better patient outcomes and more efficient administrative work.
The workshop focused on guiding researchers through the process of implementing AI tools in clinical settings, covering the lifecycle of an AI project, the stakeholders involved, and strategies for successful deployment and sustainability.
The workshop targeted clinicians and scientists who are beginning AI projects and are looking to deploy them in clinical settings, with specific emphasis on those who need to finalize their deployment plans.
The workshop covered initial and ongoing engagement with executive leadership, technical stakeholder considerations, and strategies for engaging clinical end-users throughout the deployment process.
Engagement with executive leadership is essential for securing support, understanding financial implications, and addressing shared governance considerations, all of which are crucial for successful implementation.
Key considerations include compliance with legal standards and data privacy regulations, and ensuring that governance frameworks are established before, during, and after deployment.
Considerations for technical stakeholders included addressing computational needs, ensuring data availability, and implementing a bias auditing framework for fairness and model monitoring.
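As a rough illustration of what a bias audit might involve, the sketch below compares a model's alert rate and sensitivity across patient subgroups and flags large gaps for review. The record fields and the 0.8 ratio threshold are illustrative assumptions, not a standard prescribed by the workshop; production frameworks would also track these metrics over time as part of model monitoring.

```python
import math
from collections import defaultdict


def subgroup_rates(records):
    """records: iterable of dicts with 'group', 'label' (1 = event occurred),
    and 'pred' (1 = model alerted)."""
    stats = defaultdict(lambda: {"n": 0, "alerts": 0, "tp": 0, "pos": 0})
    for r in records:
        s = stats[r["group"]]
        s["n"] += 1
        s["alerts"] += r["pred"]
        s["pos"] += r["label"]
        s["tp"] += r["pred"] * r["label"]
    return {
        g: {
            "alert_rate": s["alerts"] / s["n"],
            "sensitivity": s["tp"] / s["pos"] if s["pos"] else math.nan,
        }
        for g, s in stats.items()
    }


def flag_disparities(rates, min_ratio=0.8):
    """Return subgroups whose sensitivity falls below min_ratio of the best subgroup."""
    valid = {g: r["sensitivity"] for g, r in rates.items() if not math.isnan(r["sensitivity"])}
    best = max(valid.values())
    return sorted(g for g, sens in valid.items() if sens < min_ratio * best)


# Tiny illustrative run: group "B" is flagged because none of its true cases were caught.
records = [
    {"group": "A", "label": 1, "pred": 1},
    {"group": "A", "label": 0, "pred": 0},
    {"group": "B", "label": 1, "pred": 0},
    {"group": "B", "label": 0, "pred": 0},
]
print(flag_disparities(subgroup_rates(records)))  # ['B']
```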
Effective engagement with clinical end-users can be achieved by identifying relevant workflows, providing training on the AI tool, and maintaining ongoing communication to sustain use after deployment.
Participants were advised on content and communication strategies necessary for building a network of clinician champions and end users who can advocate for the AI tool’s use.
The breakout groups focused on executive leader stakeholder engagement, technical stakeholder considerations, and engagement strategies with clinical end-users, each addressing unique aspects of AI project implementation.
Example use cases presented included implementing an early warning score for patient deterioration and a multi-site tool for precision breast cancer prevention, showcasing practical applications of AI in healthcare.
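To make the first use case concrete, here is a simplified, rule-based sketch of an early warning score computed from vital signs. The point bands are illustrative only, not the validated cutoffs of any published scoring system or of the tool presented at the workshop; a deployed version would use a validated scheme or a trained model, plus the governance and monitoring described above.

```python
def score_vital(value, bands):
    """bands: ascending (upper_bound_exclusive, points) pairs; the last band is open-ended."""
    for upper, points in bands:
        if value < upper:
            return points
    return bands[-1][1]


# Purely illustrative point bands -- NOT validated clinical cutoffs.
ILLUSTRATIVE_BANDS = {
    "respiratory_rate": [(9, 2), (15, 0), (21, 1), (30, 2), (float("inf"), 3)],
    "heart_rate": [(40, 2), (51, 1), (101, 0), (111, 1), (130, 2), (float("inf"), 3)],
    "systolic_bp": [(70, 3), (81, 2), (101, 1), (200, 0), (float("inf"), 2)],
}


def early_warning_score(vitals):
    """vitals: dict such as {'respiratory_rate': 18, 'heart_rate': 95, 'systolic_bp': 120}."""
    return sum(score_vital(vitals[name], bands) for name, bands in ILLUSTRATIVE_BANDS.items())


# Example: a total above an agreed threshold would trigger a clinical review.
print(early_warning_score({"respiratory_rate": 24, "heart_rate": 115, "systolic_bp": 95}))  # 5
```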