Hospital readmissions occur when patients return to the hospital soon after discharge, typically within 30 days. They are costly and often avoidable: one study estimates that readmissions generate about $26 billion in preventable expenses every year in the U.S. healthcare system. Readmission rates are widely used as a quality measure, but current methods for predicting them have significant limits, which makes these cases hard to reduce.
Current risk-prediction models usually rely on hospital data and discriminate well for mortality but considerably less well for readmission. Mortality models commonly achieve c-statistics above 0.8, indicating strong discrimination, while readmission models often hover around 0.60, only modestly better than chance.
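The c-statistic mentioned here is equivalent to the area under the ROC curve: the probability that a randomly chosen readmitted patient receives a higher predicted risk than a randomly chosen non-readmitted one. A minimal pure-Python sketch, with illustrative (made-up) risk scores and labels:

```python
def c_statistic(labels, scores):
    """C-statistic (AUC): probability that a positive case outranks
    a negative case; ties count as half a win."""
    pairs = wins = 0.0
    for li, si in zip(labels, scores):
        if li != 1:
            continue
        for lj, sj in zip(labels, scores):
            if lj != 0:
                continue
            pairs += 1
            if si > sj:
                wins += 1
            elif si == sj:
                wins += 0.5
    return wins / pairs

# Illustrative data: label 1 = readmitted within 30 days
labels = [1, 0, 1, 0, 0, 1, 0, 0]
scores = [0.62, 0.30, 0.55, 0.40, 0.20, 0.35, 0.50, 0.10]
print(round(c_statistic(labels, scores), 2))  # 0.87
```

A c-statistic of 0.5 means the model ranks patients no better than chance, which is why readmission models near 0.6 are considered weak.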
This gap exists because readmissions arise from many causes beyond a patient's condition at discharge. Factors such as post-discharge care, patient behavior, and social circumstances are often missing from the data hospitals collect.
Most traditional readmission models rely on static variables, such as the principal diagnosis and the comorbidities recorded at discharge. But patients with chronic illnesses like heart failure, diabetes, or lung disease have health status that evolves over time, and those changes affect their risk of returning to the hospital.
Recent research has found that readmissions among patients with chronic conditions often stem from problems other than the index diagnosis. In one heart failure study, only about one-third of readmissions were for heart failure itself; the other two-thirds were for other conditions. Looking only at the principal diagnosis therefore misses much of the true readmission risk.
Some studies now use trajectory-based deep learning models, which examine how a patient's medical history evolves over time rather than at a single point. The TADEL model from the University of Delaware was trained on five years of Medicare data and incorporated each patient's full health trajectory, including changes in health status and insurance coverage. It achieved better prediction scores than older methods.
Such dynamic models identify at-risk patients sooner, especially those with complex chronic illnesses, allowing clinicians to intervene earlier and more effectively.
Another emerging method in readmission research is multistate analysis. Hospitals usually treat readmission as a simple yes-or-no outcome within a fixed window, such as 30 days, but this discards details about patient trajectories that can affect outcomes.
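The conventional binary outcome is easy to compute directly from discharge and next-admission dates. A sketch, with illustrative dates:

```python
from datetime import date

def readmitted_within(discharge, next_admission, window_days=30):
    """Flag a readmission if the next admission falls within
    `window_days` of discharge (the usual binary outcome)."""
    if next_admission is None:
        return False
    return 0 <= (next_admission - discharge).days <= window_days

# Illustrative example: discharged June 1, readmitted June 25
print(readmitted_within(date(2023, 6, 1), date(2023, 6, 25)))  # True
print(readmitted_within(date(2023, 6, 1), date(2023, 7, 15)))  # False
```

The simplicity of this flag is exactly the limitation: a patient readmitted on day 24 for heart failure and one readmitted on day 24 for a fall both produce the same `True`.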
Multistate analysis models health events as transitions between states: admission, discharge, readmission for the same or a different cause, and follow-up visits. This captures more real-world detail than binary models.
It can distinguish whether a readmission was for the same chronic illness or something different, and it can identify patients who return repeatedly, giving care managers better information for planning individualized care.
Though more complex, the approach reveals patterns hospitals can use to decide where to allocate resources. Researchers suggest extending it beyond heart failure to other chronic illnesses to improve both care and prediction.
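One simple way to operationalize the multistate view is to tabulate transitions between care states from each patient's event sequence. The state names and trajectories below are illustrative, not taken from any cited study:

```python
from collections import Counter

def transition_counts(event_sequences):
    """Count transitions between care states (e.g. admission,
    discharge, readmission for the same or a different cause)."""
    counts = Counter()
    for seq in event_sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[(prev, nxt)] += 1
    return counts

# Illustrative patient trajectories
patients = [
    ["admit_hf", "discharge", "readmit_hf", "discharge"],
    ["admit_hf", "discharge", "readmit_other", "discharge"],
    ["admit_hf", "discharge", "followup"],
]
counts = transition_counts(patients)
print(counts[("discharge", "readmit_other")])  # 1
```

Full multistate models go further, fitting transition hazards over time, but even raw transition counts make visible which post-discharge paths dominate.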
One key factor in good prediction is the quality of hospital data. Models that use detailed comorbidity indices, such as the Elixhauser index, outperform simpler models. It is important, however, to calibrate these indices to the local hospital database: applying weights from external sources without adjustment can degrade predictions.
Hospitals should work to improve coding accuracy and document all patient conditions thoroughly, since high-quality coding improves model performance. Some studies caution that variables useful for predicting mortality may not transfer to readmission prediction. Collecting data on the social and behavioral factors that affect health is also important.
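As a sketch of how such an index works, an Elixhauser-style score is a weighted sum of binary comorbidity flags. The weights below are illustrative placeholders, not published values; the recalibration point above means refitting weights like these on the hospital's own data rather than importing them unchanged:

```python
# Illustrative (not published) weights for a comorbidity score;
# in practice these should be refit to the local database.
LOCAL_WEIGHTS = {
    "chf": 7,           # congestive heart failure
    "diabetes": 2,
    "renal_failure": 5,
    "copd": 3,
    "dementia": 4,      # the cited study adjusted for dementia
}

def comorbidity_score(flags, weights=LOCAL_WEIGHTS):
    """Weighted sum of binary comorbidity indicators."""
    return sum(weights[c] for c, present in flags.items() if present)

patient = {"chf": True, "diabetes": True, "renal_failure": False,
           "copd": False, "dementia": False}
print(comorbidity_score(patient))  # 9
```

The score then enters a risk model as a single covariate, which is why miscalibrated weights propagate directly into miscalibrated predictions.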
Artificial intelligence (AI) is playing a growing role in predicting readmissions and supporting hospital operations. Machine learning methods, particularly deep learning, are widely used to analyze electronic health records. Techniques such as recurrent neural networks, attention mechanisms, convolutional neural networks, and graph neural networks help uncover temporal patterns in patient data.
Although AI shows promise, simple logistic regression models often predict about as well. Machine learning may, however, offer better calibration and model explanations. Its chief advantage is handling large volumes of patient data that change over time, which static models miss.
AI also helps automate hospital administrative work, which matters for readmission reduction because it supports follow-up scheduling and patient communication. For example, Simbo AI builds phone-automation tools that improve patient contact and front-office efficiency.
Healthcare leaders can use AI phone systems to remind patients about medications, appointments, or symptoms to watch for. Automated calling reduces errors and frees staff for more complex tasks. Consistent communication of this kind helps patients follow discharge instructions and seek help sooner when problems appear.
Combining better prediction with automation can help hospitals reduce readmissions, improve patient care, and control costs.
Research from the U.S. and abroad continues to advance readmission prediction. Studies using Medicare data follow patients over many years to identify patterns of readmission and long-term disease management.
Universities such as Delaware, Texas A&M, and Arizona have contributed by developing new prediction methods and testing AI models on large datasets. Their work shows how national data can yield models that reflect real patient experiences across healthcare settings.
Hospitals should consider joining research collaborations or data-sharing efforts to gain access to better prediction tools, including early access to new methods suited to the U.S. healthcare system.
As prediction models improve, integrating their outputs into daily hospital workflows remains a challenge. Practice administrators and IT managers must ensure that risk information informs care transitions, discharge planning, and follow-up care.
Multidisciplinary care teams with real-time risk information can manage high-risk patients more proactively, and AI-driven alerts can flag patients who need extra support, such as home visits or medication reviews.
Automating routine patient outreach with AI phone systems also delivers consistent messaging without adding to staff workload, which aligns with healthcare goals of patient-centered care, cost control, and outcome measurement.
By focusing on chronic disease and patient history, improving comorbidity data, and applying multistate analysis, hospital managers and IT leaders can sharpen readmission predictions. Pairing these methods with AI communication and automation tools lets hospitals spot risks earlier and act to improve care and operations.
One underlying study aimed to derive robust casemix-adjustment models from English hospital administrative data to predict various patient outcomes, with a particular focus on readmissions and comorbidities.
The best-performing models showed high discrimination for mortality but lower for first readmission, revealing calibration issues and variability in care quality.
Calibrating comorbidity weights to the specific database used is crucial; using the Elixhauser index with adjustments for dementia was found to be most effective.
The predictive power of readmission models is generally low; nevertheless, 30 days serves as a reasonable modeling cutoff despite its limitations as a quality-improvement measure.
Predictions are affected by data quality, missing key variables, and variations in care delivery, resulting in lower c-statistics for readmission models compared to mortality.
The study found that machine learning methods did not significantly outperform logistic regression in prediction; however, they provided better calibration.
Incorporating interaction terms with age and specific comorbidities can improve model fit, especially for chronic conditions like heart failure.
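An interaction term of the kind described can be built by multiplying a (centered) age variable with a comorbidity indicator before model fitting. A hypothetical sketch; the variable names and mean age are illustrative:

```python
def add_age_interaction(rows, comorbidity="heart_failure", mean_age=70.0):
    """Append an age x comorbidity interaction feature to each row.
    Centering age at the sample mean keeps the main effects
    interpretable; names and the mean here are illustrative."""
    for row in rows:
        row[f"age_x_{comorbidity}"] = (row["age"] - mean_age) * row[comorbidity]
    return rows

rows = [{"age": 80, "heart_failure": 1},
        {"age": 60, "heart_failure": 0}]
rows = add_age_interaction(rows)
print(rows[0]["age_x_heart_failure"])  # 10.0
```

The interaction lets the effect of heart failure on readmission risk vary with age, rather than forcing a single coefficient across all ages.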
The study found that higher coding levels improve discrimination, but analysis was limited to combined data rather than segmented by coding levels.
Future research could extend methods to other chronic conditions and consider more sophisticated approaches like multistate analysis to identify patterns of hospital activity.
Findings have been shared through various channels, including published papers, conference presentations, and collaborations with healthcare charities to enhance public awareness.