In recent years, artificial intelligence (AI) has become an important area in healthcare, especially in the United States. Healthcare costs are rising, and more patients need care. Medical groups and health organizations want to improve care while cutting paperwork and costs. Building AI models on large amounts of varied healthcare data offers a practical approach: developed carefully, these models can support medical decisions, streamline operations, and reduce costs. This article looks at how healthcare providers, administrators, and IT staff in the U.S. can use AI trained on varied healthcare data to solve real clinical problems and improve workflows.
Multimodal healthcare data refers to the many types of medical information collected from patients and care settings: medical images like X-rays and MRIs, doctors’ notes and reports, structured records such as lab results and medication history, time-series data from monitors, and even genetic data. Combining and analyzing these different data types requires advanced AI systems that can understand complex information from many sources.
Foundation models—large AI models made to work with a wide range of medical data—have shown they can perform well across many healthcare tasks. These tasks include diagnosing illnesses, planning treatments for individuals, and making healthcare operations better. Because foundation models can handle different data types, they can better support the variety of work done in U.S. hospitals and clinics.
But using large multimodal data comes with challenges. Protecting data privacy, following laws, building good technology setups, and making sure algorithms are fair are important issues. Steps must be taken to make sure AI tools are clear, able to work with different health IT systems, and safe for patient privacy.
AI has a big opportunity to help with clinical decisions. For example, the Center for Healthcare Marketplace Innovation (CHMI) at the University of California, Berkeley, is building AI programs to assist doctors in emergency rooms. These tools help improve patient triage and diagnosis; one model helps doctors assess heart attack risk quickly and accurately, speeding up urgent care decisions.
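At its core, a clinical risk model of this kind maps patient features to a probability. The sketch below is purely illustrative, not CHMI's actual model: the feature names, weights, and bias are made up, and a real model would be trained and validated on clinical data.

```python
import math

def heart_attack_risk_score(features, weights, bias):
    """Toy logistic risk score: sigmoid of a weighted sum of inputs.
    Feature names and weights here are illustrative only, not a
    validated clinical model."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical inputs: troponin level, age in decades, chest-pain flag
weights = {"troponin": 2.0, "age_decade": 0.3, "chest_pain": 1.1}
patient = {"troponin": 1.4, "age_decade": 6.5, "chest_pain": 1.0}
risk = heart_attack_risk_score(patient, weights, bias=-5.0)
```

A deployed system would wrap a score like this with calibration checks, clinician-facing explanations, and thresholds tuned to the triage setting.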
CHMI’s work shows that it’s important to understand both technology and how the healthcare system works, including ethical issues. AI models should be built with clear knowledge of healthcare workflows to work well. The goal is to turn AI research from labs into tools doctors can use every day.
AI coaching tools are also being tested to help doctors make decisions. These tools give support in hard treatment cases, helping to lower mistakes and improve patient care. When used with the right training and workflow setup, AI can become a helpful partner in diagnosis and treatment.
In U.S. healthcare, a large part of spending goes to administrative work. About 15 to 30 percent of healthcare costs come from tasks like scheduling appointments, patient registration, billing, and answering phones. Using AI to automate these front-office and back-office tasks can save a lot of money. It also helps staff spend more time on patient care.
Companies like Simbo AI focus on AI-based phone automation and smart answering services. Their systems handle routine calls, book appointments, and answer patient questions. This lowers wait times and reduces the need for human receptionists. This kind of automation is useful in busy medical offices where managing patient communication and scheduling is a big challenge.
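The first step in automating routine calls is deciding where each request should go. Products like Simbo AI use far more sophisticated language models; the keyword-based router below is only a minimal sketch of the routing idea, with hypothetical intents and keywords.

```python
# Hypothetical keyword-to-intent map for front-office requests.
ROUTES = {
    "appointment": ("schedule", "book", "appointment", "reschedule"),
    "billing": ("bill", "invoice", "payment", "charge"),
    "prescription": ("refill", "prescription", "pharmacy"),
}

def route_request(message: str) -> str:
    """Return the first matching intent, or hand off to a human."""
    text = message.lower()
    for intent, keywords in ROUTES.items():
        if any(word in text for word in keywords):
            return intent
    return "front_desk"  # fall back to a human receptionist

routed = route_request("I need to reschedule my appointment")
```

In practice the fallback path matters most: anything the automation cannot classify confidently should reach a person rather than a wrong queue.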
Using AI for common front-office tasks can improve workflow efficiency and patient satisfaction while cutting costs linked to manual administrative work. This technology can also reduce errors and make patient data handling more accurate.
Keeping patient privacy safe is a major challenge when using large healthcare data sets. Laws like the Health Insurance Portability and Accountability Act (HIPAA) in the U.S. set strict rules on handling sensitive health information. This causes tension between needing data for AI research or clinical use and keeping patient details private.
To handle this, health organizations and AI researchers use methods such as data anonymization, encryption, and controlled access to stay within the law. Synthetic data is also used: artificially generated data that statistically resembles real patient information but contains no personal details. This lets AI be trained on large data sets without risking privacy.
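De-identification, in its simplest form, means removing direct identifiers from a record before it is used for analysis or training. The sketch below is a minimal illustration; the field list is made up and falls far short of a complete HIPAA Safe Harbor implementation, which covers eighteen identifier categories.

```python
# Illustrative list of direct identifiers to strip; a real HIPAA
# Safe Harbor pipeline covers many more categories (dates, geography,
# device IDs, etc.).
DIRECT_IDENTIFIERS = {"name", "ssn", "phone", "email", "address", "mrn"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

record = {
    "name": "Jane Doe", "mrn": "123456", "age": 58,
    "troponin": 0.4, "diagnosis": "NSTEMI",
}
clean = deidentify(record)
```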
Recent studies show that synthetic data generators using deep learning are now common in healthcare AI work. Synthetic data helps protect privacy and solves the problem of not having enough real data, especially for rare diseases. It can also make AI models fairer by including better examples from different patient groups and reducing bias in AI decisions.
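The core idea behind synthetic data can be shown with a toy generator: learn the statistics of a real cohort, then sample new records from them. Production systems use deep generative models rather than the per-column Gaussians assumed below; the cohort values here are invented for illustration.

```python
import random

def fit_columns(rows):
    """Fit a (mean, stddev) pair for each numeric column."""
    stats = {}
    for col in rows[0]:
        values = [r[col] for r in rows]
        mean = sum(values) / len(values)
        var = sum((v - mean) ** 2 for v in values) / len(values)
        stats[col] = (mean, var ** 0.5)
    return stats

def sample_synthetic(stats, n, seed=0):
    """Sample n new records from the fitted Gaussians."""
    rng = random.Random(seed)
    return [{col: rng.gauss(mu, sd) for col, (mu, sd) in stats.items()}
            for _ in range(n)]

cohort = [{"age": 61, "bmi": 27.0}, {"age": 55, "bmi": 31.5},
          {"age": 70, "bmi": 24.2}]
synthetic = sample_synthetic(fit_columns(cohort), n=100)
```

The generated records carry no link to any real patient while keeping the cohort's overall statistics, which is what makes synthetic data useful for training and for augmenting rare-disease examples.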
For AI to work well in U.S. medical settings, different healthcare IT systems must share and understand data easily. Interoperability means that electronic health records (EHR), imaging software, lab databases, and AI apps can exchange information clearly.
Foundation models need access to combined data from many platforms to work well in clinical settings. Limited interoperability can stop AI from giving full analyses or advice, making it less useful.
There are ongoing efforts to improve interoperability in U.S. healthcare by using standards like Fast Healthcare Interoperability Resources (FHIR). This promotes consistent data exchange. AI makers, policymakers, and healthcare providers must work together to follow these standards. This helps keep AI useful and safe for patients.
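FHIR works by defining standard JSON (or XML) resources that every conforming system can parse. Below is a minimal hand-built FHIR R4 Patient resource; the field names follow the published FHIR specification, while the values are made up.

```python
import json

# Minimal FHIR R4 Patient resource. Field names ("resourceType",
# "name", "birthDate", ...) come from the FHIR spec; values are fake.
patient = {
    "resourceType": "Patient",
    "id": "example-001",
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "gender": "female",
    "birthDate": "1967-03-14",
}

payload = json.dumps(patient)
```

Because every EHR, lab system, and AI application that speaks FHIR agrees on this shape, a model can consume patient data from many vendors without custom adapters for each one.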
Developing AI in healthcare involves more than computer science. The success of AI depends on teamwork between AI developers, clinicians, economists, policymakers, and health managers. For example, CHMI uses an approach that mixes behavioral economics and ethics with technical progress to build AI tools that match healthcare goals and patient needs.
Testing AI in real clinical settings through randomized trials is important. For example, CHMI’s heart attack diagnosis AI is being tested in real hospitals to see how well it works beyond lab accuracy. This helps find practical problems and make sure AI tools really improve outcomes and reduce costs.
Medical administrators and IT managers in the U.S. should stay informed about this research. They should take part in pilot projects and offer feedback to make AI tools fit their clinical and operational needs better.
Managing workflow in a medical practice is complicated. It includes patient appointments, clinical notes, billing, patient triage, and communication with patients and families. AI-driven automation can streamline many of these steps and reduce mistakes.
Adopting these AI workflow tools can substantially cut administrative costs. Since administrative work accounts for 15 to 30 percent of healthcare spending, the savings can be redirected to patient care and technology upgrades.
U.S. healthcare groups need to make sure their staff get training to use these AI tools. Also, new tools should fit current rules and policies. Cooperation among IT teams, managers, and clinical staff is very important when adopting AI.
Healthcare prices in the U.S. are high because clinical costs and admin work keep going up. Experts like Jonathan Kolstad from UC Berkeley’s CHMI say AI could cut administrative costs by up to $250 billion a year if used well.
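The cited figures can be sanity-checked with back-of-the-envelope arithmetic. The $4.5 trillion total-spending figure below is an assumption for illustration, not a number from this article; the 15-30 percent administrative share and the $250 billion target are the figures cited above.

```python
# Assumed annual U.S. health spending, USD (illustrative figure)
total_spending = 4.5e12
admin_share_low, admin_share_high = 0.15, 0.30

admin_low = total_spending * admin_share_low    # low-end admin spend
admin_high = total_spending * admin_share_high  # high-end admin spend

# What fraction of administrative spending would AI need to eliminate
# to reach the $250B savings cited above?
target_savings = 250e9
fraction_needed_low = target_savings / admin_high
fraction_needed_high = target_savings / admin_low
```

Under these assumptions, hitting $250 billion means trimming roughly 19 to 37 percent of administrative spending, which puts the claim in plausible rather than fanciful territory.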
Using AI models that integrate many healthcare data types can improve diagnostic accuracy, patient triage, and clinical workflows, letting medical groups provide better care at lower cost. Automating front-office tasks with AI saves money on manual work and makes services more reliable and accessible.
Healthcare providers, especially medium and large practices, should think about how AI fits their system and patients. Good AI use means balancing technology spending with clear goals. It’s also important to make sure AI tools are fair and follow healthcare rules.
AI brings opportunities for U.S. healthcare groups to improve care quality, boost efficiency, and lower costs. To realize these benefits, administrators and IT leaders should evaluate how AI fits their clinical and operational needs, protect patient privacy and comply with rules such as HIPAA, adopt interoperability standards like FHIR, train staff on new tools, participate in pilot projects and give feedback to developers, and monitor AI tools for fairness and accuracy.
By carefully managing these points, U.S. healthcare practices can use AI tools that meet their needs, improve patient care, and lower costs in a healthcare system that is always changing.
CHMI aims to translate cutting-edge AI and behavioral economics healthcare research into real-world advances that improve patient outcomes and reduce medical costs, acting as a force multiplier for technological innovation and economic insights in healthcare.
AI tools can enhance care quality by assisting in patient triage in emergency rooms, diagnosing diseases, coaching clinicians, and reducing administrative healthcare spending, thus allowing more time for patient care and potentially lowering costs by up to $250 billion annually.
Integrating expertise in healthcare economics, policy, clinical research, computing, and behavioral science is essential to develop equitable, ethical AI tools that effectively enhance healthcare delivery and patient outcomes.
Many AI models are developed without a deep understanding of healthcare system complexities and incentives, making it difficult to deploy algorithms that meaningfully change healthcare outcomes or costs in practice.
Ziad Obermeyer developed a machine learning algorithm to improve physicians’ diagnosis of heart attack probabilities in emergency rooms and is conducting randomized trials to test its real-world effectiveness beyond academic settings.
Securing multimodal, large-scale healthcare data through partnerships is critical for training effective AI, as research quality and impact depend heavily on the quantity, diversity, and security of accessible data.
By establishing an industry feedback platform, the center enables healthcare providers and stakeholders to communicate their practical problems and needs, guiding researchers to develop relevant, problem-driven AI healthcare solutions.
The center is piloting generative AI models designed to provide clinical coaching to medical professionals, helping improve decision-making and healthcare delivery through AI-assisted support.
Human decision-making insights inform how AI tools are designed and integrated, ensuring these technologies complement clinician judgment and patient behavior to create effective, accepted healthcare interventions.
By fostering interdisciplinary collaboration, providing data access, incorporating behavioral incentives, and partnering with healthcare systems, the center creates a ‘bench-to-product runway’ to translate AI research into practical healthcare solutions that benefit patients and systems.