Clinical Decision Support (CDS) systems have long been used in healthcare to provide clinicians with evidence-based recommendations. Traditional CDS systems are static: they rely on fixed rules and data that change only when manually updated.
Adaptive CDS systems, by contrast, use artificial intelligence (AI) and machine learning. They learn continuously from new clinical data and adjust their recommendations accordingly, allowing more personalized support tailored to each patient's needs.
The American Medical Informatics Association (AMIA) defines Adaptive CDS as systems that change and improve over time by incorporating the latest clinical evidence and patient data. This marks a significant advance over older systems, enabling more flexible decision-making in patient care.
Although Adaptive CDS systems are seeing growing adoption, the regulatory framework in the United States has not kept pace. AMIA's report finds that existing laws do not address all of the risks that AI-driven systems introduce.
There are two main kinds of Adaptive CDS:

- Marketed Adaptive CDS, which is sold commercially to customers and is subject to FDA oversight
- Self-developed Adaptive CDS, which is built in-house by healthcare systems and falls outside current regulatory oversight
This lack of oversight for self-developed systems puts patients at risk. Dr. Joseph Kannry noted that without oversight, AI systems can introduce bias and safety problems. Bias occurs when an AI system's recommendations unfairly treat certain groups differently or produce incorrect results, which is dangerous because patient care depends on accurate and equitable advice.
Healthcare leaders and IT managers should understand that without clear rules and checks, decision quality can vary unpredictably, leading to inconsistent care for patients.
AMIA emphasizes that transparency in how Adaptive CDS works is essential. Transparency means clearly documenting how AI models are trained, including:

- the training datasets used
- the design of the model
- how the data were acquired
Without transparency, it is difficult to verify whether an AI system's advice is fair or accurate, making it hard for healthcare workers to trust these systems or know when to rely on them.
AMIA also calls for communication standards to guide how Adaptive CDS is used. These standards should include clear information about:

- the system's intended use
- its expected users
- operational guidance for evaluation and maintenance
This information helps healthcare teams use AI tools effectively while understanding their strengths and limitations.
To address these challenges, AMIA recommends creating new oversight structures for AI in healthcare. These might include:

- new bodies or departments to govern AI implementation and Adaptive CDS
- Centers of Excellence for testing and evaluation
Oversight at both the local and national levels can help ensure that AI tools are safe, effective, and implemented consistently across organizations.
This oversight would bring benefits such as:

- greater safety and effectiveness of Adaptive CDS
- consistent implementation through shared systems and controls
Patricia C. Dykes noted that the growth of health data and AI means the informatics community must find safe and effective ways to use Adaptive CDS.
Carolyn Petersen added that the unique opportunities AI presents require immediate safeguards to protect patients during routine care.
Beyond clinical recommendations, AI such as Adaptive CDS can support workflow automation in healthcare settings. This matters to practice managers and IT teams seeking to improve efficiency while maintaining quality of care.
AI automation can help in many ways. Healthcare leaders should recognize that AI is not only for patient care but also for streamlining operations: automation reduces human error, eases staff workload, and improves the service patients receive.
At the same time, these AI tools must comply with strong privacy and security requirements. Healthcare managers must ensure that patient data is securely accessed, stored, and transmitted.
Today's healthcare system faces competition for patient satisfaction alongside tight budgets, making AI adoption both an opportunity and a responsibility.
Medical practice administrators and IT managers should take deliberate steps to govern how AI is adopted and used. Doing so helps healthcare teams deploy AI responsibly, realizing its benefits while keeping patients safe.
AI, especially Adaptive Clinical Decision Support, is advancing rapidly in the U.S. healthcare system and can improve personalized care. However, new policies, sound oversight, and clear communication are needed to avoid problems such as bias and unsafe care.
The attention of healthcare managers and IT staff is essential. Combined with carefully designed AI workflow tools, their efforts will help healthcare organizations use AI safely and effectively in both clinical work and office operations.
The AMIA position paper focuses on the policy framework for adaptive clinical decision support (CDS) systems that utilize artificial intelligence (AI) applications in healthcare.
Adaptive CDS refers to clinical decision support systems that can learn and change their performance over time based on new clinical evidence and data, enabling personalized decision support.
Marketed Adaptive CDS is sold to customers and is subject to FDA oversight, while self-developed Adaptive CDS is created in-house by healthcare systems without regulatory oversight.
The existing policy landscape is inadequate, leaving patients exposed to algorithmic bias and safety issues due to gaps in federal jurisdiction.
Transparency in how Adaptive CDS is trained is crucial for accountability, requiring clear standards for training datasets, model design, and data acquisition.
The AMIA paper suggests establishing communication standards for the intended use, expected users, and operational guidance of Adaptive CDS to aid in evaluation and maintenance.
Oversight ensures that Adaptive CDS achieves safety and effectiveness by managing implementation through consistent systems and controls.
AMIA calls for the creation of new bodies or departments for governing AI implementation and Adaptive CDS, along with Centers of Excellence for testing and evaluation.
The rapid advancement of AI in healthcare necessitates urgent safeguards to ensure safe and effective use of machine learning applications.
AMIA seeks to position itself as the leading organization to execute the policy agenda for the safe and effective use of Adaptive CDS in the U.S. healthcare system.