Artificial intelligence (AI) has become an integral part of healthcare in the United States, promising better patient care, greater hospital efficiency, and improved clinical outcomes. Roughly 65% of U.S. hospitals now use AI-assisted predictive models to help manage inpatient care, identify high-risk patients, and handle scheduling. However, a significant gap, often called the “digital divide,” separates well-funded hospitals from smaller or rural ones in how they adopt and properly use AI tools. This divide shapes how AI is used and could lead to uneven patient safety and treatment across hospitals.
This article examines the challenges that under-resourced hospitals face when trying to adopt AI tools, the risks created by this divide, and the opportunities to improve AI use across all healthcare settings, especially hospitals with limited funding and staff. It also explains how AI-driven workflow automation can reduce work pressure and improve phone communication in medical offices.
Recent research from the University of Minnesota’s School of Public Health provides important national data on hospital AI use. A 2023 study of more than 2,400 acute-care hospitals found that about 65% use AI-assisted predictive models for purposes including predicting inpatient health trajectories, identifying high-risk outpatients, and facilitating appointment scheduling.
Although many hospitals use AI, the study raises concerns about how rigorously hospitals evaluate whether their AI models work well or are fair. Only 61% of hospitals tested their AI tools for accuracy, and fewer than half (44%) checked the models for bias.
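To make the accuracy and bias checks the study describes concrete, here is an illustrative sketch using entirely hypothetical data. It computes overall accuracy and then compares false-negative rates across patient subgroups, which is one common way a bias evaluation can surface disparities; the subgroup labels and records are invented for illustration, not drawn from the study.

```python
# Illustrative sketch (hypothetical data): checking a predictive model's
# overall accuracy and its error rates across patient subgroups -- the kind
# of bias evaluation fewer than half of hospitals reported performing.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the observed outcome."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

def false_negative_rate(y_true, y_pred):
    """Share of truly high-risk patients the model missed."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return 0.0
    missed = sum(1 for t, p in positives if p == 0)
    return missed / len(positives)

def bias_audit(records):
    """Group records by subgroup label and compare false-negative rates.

    records: list of (subgroup, y_true, y_pred) tuples.
    Returns a dict mapping subgroup -> FNR, so reviewers can spot gaps.
    """
    groups = {}
    for group, t, p in records:
        groups.setdefault(group, ([], []))
        groups[group][0].append(t)
        groups[group][1].append(p)
    return {g: false_negative_rate(ts, ps) for g, (ts, ps) in groups.items()}

# Hypothetical audit data: (subgroup, actually high-risk?, predicted high-risk?)
records = [
    ("urban", 1, 1), ("urban", 1, 1), ("urban", 0, 0), ("urban", 1, 1),
    ("rural", 1, 0), ("rural", 1, 1), ("rural", 0, 0), ("rural", 1, 0),
]
overall = accuracy([t for _, t, _ in records], [p for _, _, p in records])
rates = bias_audit(records)
# A large gap between subgroup false-negative rates (here rural vs. urban)
# is exactly the kind of disparity a bias evaluation is meant to surface.
```

Accuracy alone can look acceptable while one subgroup is served far worse, which is why the study treats accuracy testing and bias testing as separate questions.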
This lack of thorough evaluation is most common in critical-access, rural, and smaller hospitals with limited budgets and IT support. These hospitals often purchase off-the-shelf AI products designed for large, urban, or academic hospitals, which may not fit the patient populations or services of small or rural facilities. Better-funded hospitals and academic centers, by contrast, often build and adapt their own AI models; they have the funding and staff to test AI tools and tune them to their patients.
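One simple local check a smaller hospital could run before trusting a vendor model is a calibration comparison: does the model’s average predicted risk match the outcome rate actually observed in the local population? The sketch below uses hypothetical numbers and is an assumption-laden illustration, not a vendor’s validation procedure.

```python
# Minimal sketch (hypothetical numbers): a local calibration check on an
# off-the-shelf model. A large gap between mean predicted risk and the
# locally observed outcome rate suggests the model was tuned to a different
# patient population and needs local adjustment.

def calibration_gap(pred_probs, outcomes):
    """Absolute difference between mean predicted risk and observed rate."""
    mean_pred = sum(pred_probs) / len(pred_probs)
    observed = sum(outcomes) / len(outcomes)
    return abs(mean_pred - observed)

# Hypothetical local cohort: the vendor model predicts ~30% risk on average,
# but only 10% of local patients actually experienced the event.
preds = [0.4, 0.2, 0.3, 0.35, 0.25, 0.3, 0.3, 0.3, 0.25, 0.35]
events = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
gap = calibration_gap(preds, events)
# gap is about 0.2 here: the model substantially overestimates risk locally.
```

Real calibration audits would bin patients by predicted risk and examine each bin, but even this coarse check illustrates why a model built for one population cannot be assumed to fit another.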
Paige Nong, lead author of the University of Minnesota study, warns that this divide creates risks for equitable healthcare and patient safety. AI tools that are not properly tested and adjusted may produce inaccurate predictions or biased results, harming decisions and outcomes for patients in under-resourced hospitals.
The digital divide in AI adoption stems from several interrelated problems.
The risks of the digital divide are not only technical; they affect the quality of patient care. When hospitals deploy AI models that have not been checked for bias or tailored to their patient populations, problems can follow, from inaccurate predictions to skewed treatment recommendations.
Nong notes that under-resourced hospitals are especially vulnerable because they rely on off-the-shelf AI that cannot be adjusted locally. If leaders and policymakers ignore this divide, these hospitals may struggle to deliver safe and equitable care.
The University of Minnesota study and the Digital Technology Innovation Program suggest several ways to close the digital divide in AI use, including financial incentives, technical support, and stronger regulatory oversight.
The DTI Program’s leaders, Dr. Genevieve Melton-Meaux and Dr. Rubina F. Rizvi, support these recommendations and argue that teams of clinicians, IT experts, and administrators should guide digital health projects.
Beyond predictive models, AI can also automate front-office and administrative tasks, and this kind of workflow automation is especially valuable for hospitals with limited staff and budgets.
For under-resourced hospitals, automating routine tasks can substantially improve operations. With staff shortages and provider burnout on the rise, automation frees staff to focus on direct patient care and clinical decisions.
Companies such as Simbo AI focus on AI phone automation for medical offices and hospitals, offering solutions to common administrative problems in healthcare.
Medical practice leaders and IT managers can adopt these technologies to modernize operations without the large budgets needed to build custom AI models.
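As a feel for what front-office workflow automation involves, here is a deliberately simplified sketch that routes transcribed phone requests to the right queue so staff only handle calls that need a human. The keyword rules and queue names are hypothetical stand-ins for a vendor’s intent model; this does not represent Simbo AI’s actual product or API.

```python
# Hypothetical sketch of front-office phone workflow automation: route a
# transcribed caller request to a handling queue. Simple keyword rules stand
# in for a vendor's trained intent classifier.

ROUTES = {
    "scheduling": ("appointment", "schedule", "reschedule", "cancel"),
    "refills": ("refill", "prescription", "pharmacy"),
    "billing": ("bill", "invoice", "payment", "insurance"),
}

def route_call(transcript):
    """Return the queue for a transcript, or 'front_desk' for a human."""
    text = transcript.lower()
    for queue, keywords in ROUTES.items():
        if any(word in text for word in keywords):
            return queue
    return "front_desk"  # anything unrecognized goes to a person

print(route_call("I need to reschedule my appointment for Tuesday"))  # scheduling
print(route_call("Question about my last bill"))                      # billing
print(route_call("My chest hurts, who can I talk to?"))               # front_desk
```

The important design point is the fallback: requests the system cannot confidently classify, including anything clinical, default to a human, which is what makes this kind of automation workable in a safety-critical setting.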
Closing the digital divide in AI requires teamwork among hospital leaders, policymakers, technology developers, and healthcare workers, and there are useful steps they can take today.
Medical practice administrators, owners, and IT managers in under-resourced hospitals play a difficult but essential role: they must adopt AI carefully to improve patient care without undermining fairness. With sound planning, collaboration, and support, these hospitals can implement AI tools suited to their needs and help close the digital gap in U.S. healthcare.
By focusing on AI evaluation, workflow automation, and shared technology models, under-resourced hospitals can move toward safer, more effective, and more patient-centered care. Adopting AI phone automation, such as that offered by companies like Simbo AI, is one practical step for hospitals with limited budgets and staffing.
A study from the University of Minnesota analyzed the use of AI-assisted predictive models in U.S. hospitals, focusing on their adoption, use, evaluation capacity, and biases.
Approximately 65% of U.S. hospitals reported using AI-assisted predictive models for tasks like predicting inpatient health trajectories, identifying high-risk outpatients, and facilitating scheduling.
61% of hospitals reported evaluating their predictive models for accuracy.
Only 44% of hospitals conducted evaluations for bias in their AI-assisted predictive models.
Better-funded hospitals are more likely to evaluate their AI models for both accuracy and bias than under-resourced hospitals relying on externally developed models.
The study highlighted a digital divide between financially robust hospitals that can design and evaluate their models, and under-resourced hospitals that purchase off-the-shelf models.
The digital divide poses risks to equitable treatment and patient safety, as models may not be tailored to the unique needs of different patient populations.
Researchers emphasize the need for policies promoting fair AI use, which could include financial incentives, technical support, and enhanced regulatory oversight.
Paige Nong, the study’s lead author, points out that under-resourced hospitals face challenges evaluating AI models, which could compromise patient safety and equitable treatment.