Artificial intelligence in healthcare is mainly used to predict risk, assist with diagnosis, and support treatment decisions. AI tools do not replace healthcare workers; instead, they assist by processing large volumes of data quickly and spotting patterns that clinicians might otherwise miss.
An example of AI helping patient care is in managing sepsis. Sepsis is a dangerous condition caused by the body’s reaction to infection. Early detection and treatment are very important because sepsis can lead to organ failure or death if not treated quickly.
A study at UC San Diego Health tested a deep learning model called COMPOSER (COnformal Multidimensional Prediction Of SEpsis Risk). The model used electronic health record (EHR) data to predict sepsis risk as defined by the Sepsis-3 criteria. The study reported a 17% relative reduction in in-hospital sepsis mortality during the five months after deployment, along with a 10% increase in sepsis bundle compliance, such as giving antibiotics and fluids on time.
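As a quick sanity check on how such figures are computed, here is a short Python sketch of the relative-reduction arithmetic. The rates below are illustrative placeholders, not the study's actual data: the point is that a modest absolute drop can correspond to a 17% relative reduction.

```python
def relative_reduction(before_rate: float, after_rate: float) -> float:
    """Relative reduction between a baseline rate and a follow-up rate."""
    return (before_rate - after_rate) / before_rate

# Illustrative numbers only (not the study's actual counts): a drop from
# 10.0% to 8.3% in-hospital sepsis mortality is a 17% *relative* reduction,
# even though the absolute drop is only 1.7 percentage points.
print(round(relative_reduction(0.100, 0.083), 2))  # → 0.17
```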
These results suggest that integrating AI into clinical workflows can improve patient survival. The model delivered sepsis risk scores, along with the key clinical signs driving them, to nurses through a Best Practice Advisory (BPA) inside the EHR. This helped nurses and physicians communicate better and respond faster to at-risk patients. Nurses dismissed only 5.9% of the AI alerts, indicating strong acceptance of the system.
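The alert flow described above can be sketched as a simple threshold gate that only surfaces an advisory, together with the contributing clinical signs, when risk crosses a cutoff. Everything here (the 0.6 threshold, the field names, the example signs) is a hypothetical illustration, not the actual COMPOSER scoring or EHR BPA interface.

```python
from dataclasses import dataclass
from typing import Optional

ALERT_THRESHOLD = 0.6  # hypothetical cutoff; the deployed model's threshold differs


@dataclass
class SepsisAlert:
    patient_id: str
    risk_score: float
    contributing_signs: list  # shown to the nurse so the alert is explainable


def maybe_fire_alert(patient_id: str, risk_score: float,
                     contributing_signs: list) -> Optional[SepsisAlert]:
    """Surface a best-practice advisory only when risk crosses the threshold."""
    if risk_score >= ALERT_THRESHOLD:
        return SepsisAlert(patient_id, risk_score, contributing_signs)
    return None


alert = maybe_fire_alert("pt-001", 0.82, ["rising lactate", "tachycardia"])
print(alert is not None)  # → True
```

Showing the contributing signs alongside the score mirrors the study's design choice: an explainable alert is easier for nurses to trust or, when appropriate, to dismiss.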
However, the study also revealed limitations. The benefits were not uniform across hospitals; the variation stemmed from factors such as patient populations, clinical settings, and how well the AI fit into existing workflows.
Beyond sepsis prediction, AI is applied in eight key areas that improve clinical prediction and patient care:
A review examined 74 studies applying AI in these areas. Specialties such as oncology and radiology benefit substantially, as they depend on complex imaging and difficult diagnoses. Applying AI here improves patient care and makes healthcare delivery more efficient, which matters in the U.S., where hospitals face high patient volumes and costs.
Although AI tools can improve patient care, hospital leaders and IT managers in the U.S. should understand that AI requires more than initial deployment to deliver results.
AI helps not only with clinical prediction but also with automating operations, which can improve both patient care and patient experience. One example is AI-powered front-office phone automation, such as the systems offered by Simbo AI.
Medical practice managers, owners, and IT staff in the U.S. can use AI phone automation to:
U.S. medical offices often face staff shortages and rising patient needs. AI phone automation can help by letting doctors and nurses spend more time with patients, which can improve patient satisfaction and care.
When adopting AI systems such as Simbo AI's, a facility must verify that the AI integrates with existing IT infrastructure, that patient data privacy is protected, and that staff are trained to use the system effectively.
The COMPOSER study at UC San Diego Health offers lessons for U.S. healthcare organizations considering AI. The study used a three-step process:
This approach, carried out by a team of technical experts, physicians, nurses, and managers, helped the AI gain acceptance and deliver real benefits. The nurse-facing interface was designed to be transparent, so users understood why alerts fired and which clinical signs raised the risk score.
For U.S. healthcare leaders, taking a comprehensive approach like this is important. Adult-learning methods, continuous feedback, and ongoing support are needed to keep staff engaged with AI and to realize the greatest benefits for patient care.
Based on studies and experience, healthcare leaders in the U.S. should do the following when using AI tools:
In short, using AI in U.S. healthcare means more than adding new technology. It requires careful integration into both clinical care and administrative work. When done well, AI can help lower mortality, improve adherence to treatment protocols, and raise patients' quality of life. Success comes when technology and healthcare workers operate in close partnership, supported by sound systems and clear leadership.
Integrating AI aims to improve clinical outcomes by leveraging advanced algorithms to predict patient risks and enhance decision-making processes in healthcare settings.
Clinically relevant outcomes include mortality reduction, quality-of-life improvements, and compliance with treatment protocols, which can reflect the effectiveness of AI algorithms in real-world settings.
COMPOSER (COnformal Multidimensional Prediction Of SEpsis Risk) is a deep learning model developed to predict sepsis by utilizing routine clinical information from electronic health records.
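The conformal element of COMPOSER can be illustrated with a minimal abstention gate: if a new sample's nonconformity score falls outside the range established on held-out calibration data, the model declines to predict rather than guessing on unfamiliar inputs. This is a generic conformal-style sketch with synthetic scores, not the published model's actual procedure.

```python
import numpy as np


def conformal_gate(calibration_scores, new_score, alpha=0.05):
    """Abstain when the new sample's nonconformity score exceeds the
    (1 - alpha) quantile of held-out calibration scores; else predict."""
    threshold = np.quantile(calibration_scores, 1 - alpha)
    return "indeterminate" if new_score > threshold else "predict"


# Synthetic calibration scores standing in for held-out nonconformity values.
calibration = np.abs(np.random.default_rng(0).normal(size=1000))

print(conformal_gate(calibration, new_score=5.0))  # far outside calibration → "indeterminate"
print(conformal_gate(calibration, new_score=0.5))  # typical input → "predict"
```

Abstaining on out-of-distribution inputs is one way to reduce false alarms, which matters clinically given how sensitive nurses are to alert fatigue.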
The model was evaluated in a prospective before-and-after quasi-experimental study, tracking patient outcomes before and after its implementation in emergency departments.
The implementation led to a 17% relative reduction in in-hospital sepsis mortality and a 10% increase in sepsis bundle compliance during the study period.
Embedding AI tools into clinical workflows ensures that algorithms are effectively utilized by end-users, facilitating timely interventions and improving clinical outcomes.
AI algorithms may struggle due to diverse patient characteristics, evolving clinical practices, and the inherent unpredictability of human behavior, which can lead to performance degradation over time.
Continuous monitoring of data quality and model performance allows for timely interventions, such as model retraining, ensuring that AI tools remain effective as healthcare dynamics evolve.
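One common way to operationalize this monitoring is a drift statistic such as the Population Stability Index (PSI) computed between training-time and live feature distributions. The sketch below uses synthetic data, and the 0.2 alert threshold is a conventional rule of thumb taken as an assumption here, not a clinical standard.

```python
import numpy as np


def psi(expected, actual, bins=10):
    """Population Stability Index between a reference (training-time) and a
    live feature distribution. Rule of thumb: PSI > 0.2 suggests real drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid dividing by or taking log of zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))


rng = np.random.default_rng(1)
reference = rng.normal(0.0, 1.0, 5000)  # distribution the model was trained on
live = rng.normal(0.8, 1.0, 5000)       # live data has shifted upward

if psi(reference, live) > 0.2:
    print("drift detected: review data quality and consider retraining")
```

A check like this, run routinely on key input features, gives teams an early, quantitative trigger for the retraining interventions described above.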
Healthcare leaders should evaluate the costs vs. benefits of AI technologies, ensuring they justify the investment required for implementation, maintenance, and integration into existing workflows.
The ‘AI chasm’ refers to the gap between the development of AI models in controlled settings and their successful implementation in real-world clinical environments, highlighting challenges in translation and efficacy.