Bias in AI systems can cause unfair results for patients, a problem that falls hardest on minority and underserved groups. A study by Ayokunle Osonuga and colleagues found that AI tools have roughly 17% lower accuracy for minority patients, which can widen existing health disparities.
Bias can enter a system in several ways. Matthew G. Hanna and his team note that data problems, design choices, and differences in clinical practice must all be addressed during AI development and deployment to preserve fairness and safety.
Healthcare leaders and IT managers need to identify these biases first. They should test AI tools across many patient populations and keep monitoring performance after the tools are in use.
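The kind of subgroup testing described above can be sketched in a few lines of Python. This is an illustrative example only: the group names, evaluation records, and 5-point disparity threshold are hypothetical, not values from the studies cited here.

```python
# Illustrative sketch of a subgroup accuracy audit (hypothetical data and
# threshold; not a clinical validation protocol).

def subgroup_accuracy(records):
    """Compute accuracy per demographic group from (group, prediction, label) records."""
    totals, correct = {}, {}
    for group, pred, label in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == label)
    return {g: correct[g] / totals[g] for g in totals}

def flag_disparities(accuracies, max_gap=0.05):
    """Flag groups whose accuracy trails the best-performing group by more than max_gap."""
    best = max(accuracies.values())
    return [g for g, acc in accuracies.items() if best - acc > max_gap]

# Hypothetical evaluation records: (group, model_prediction, true_label)
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 0, 1),
]

acc = subgroup_accuracy(records)   # {'group_a': 1.0, 'group_b': 0.5}
print(flag_disparities(acc))       # ['group_b']
```

In practice the same disaggregated comparison would be run on a representative validation set for each population the clinic serves, with thresholds set by clinical and regulatory judgment rather than a fixed constant.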
Cultural competence means delivering healthcare that fits patients’ social, cultural, and language needs. AI tools that lack it may misinterpret patient information or make poor recommendations.
Cultural competence can be built into AI tools in several ways. Dr. Kameron C. Black of Stanford University argues that it is key to building AI tools that reduce health disparities and respect patients’ backgrounds.
One barrier is the digital divide: about 29% of adults in rural U.S. areas lack access to AI healthcare tools. Without reliable internet or digital skills, many rural and low-income patients miss out on telemedicine and AI-assisted diagnosis.
Research by Osonuga and colleagues shows telemedicine can cut time to care by 40% in rural areas. Yet many communities lack the technology or training needed, and clinics serving them must address this gap to make AI fair and useful.
There are several ways to close the digital gap, and doing so is essential if AI is to reduce health disparities rather than deepen them.
AI tools need regular monitoring after deployment to maintain quality and fairness. Yet 85% of studies on AI and health equity follow patients for less than one year, leaving open questions about long-term effects.
Healthcare leaders should establish regular reviews of deployed AI tools. Dr. Matthew G. Hanna and his team highlight the need for thorough evaluation at every stage of an AI system’s lifecycle to keep it fair, safe, and transparent.
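A scheduled review of a deployed tool can be sketched as a simple comparison of current subgroup metrics against the baselines recorded at validation time. This is a hypothetical illustration: the function name, group labels, accuracy figures, and 3-point tolerance are assumptions, not a standard from the cited work.

```python
# Illustrative post-deployment review sketch (hypothetical names, metrics,
# and tolerance): alert when any subgroup's accuracy drifts below its
# validation-time baseline, or when a subgroup has no recent data at all.

def review_deployment(baseline, current, tolerance=0.03):
    """Return alerts for subgroups whose current accuracy fell more than
    `tolerance` below the baseline recorded at validation time."""
    alerts = []
    for group, base_acc in baseline.items():
        cur_acc = current.get(group)
        if cur_acc is None:
            alerts.append(f"{group}: no recent data -- cannot assess fairness")
        elif base_acc - cur_acc > tolerance:
            alerts.append(f"{group}: accuracy fell {base_acc - cur_acc:.2f} below baseline")
    return alerts

baseline = {"group_a": 0.91, "group_b": 0.89}   # from the validation study
current = {"group_a": 0.90, "group_b": 0.82}    # hypothetical quarterly metrics

print(review_deployment(baseline, current))     # ['group_b: accuracy fell 0.07 below baseline']
```

Run quarterly (or on whatever cadence governance requires), a check like this surfaces the kind of slow, group-specific degradation that short follow-up studies miss.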
AI can reduce administrative work and improve clinic operations. Dr. Kameron C. Black’s studies show AI can handle repetitive tasks, easing physicians’ workload and improving patient care.
Good AI integration can take several forms for medical leaders. By improving workflows with trusted AI, clinics can operate more efficiently and help reduce physician burnout, letting healthcare staff spend more time with patients and less on paperwork.
Ethics are central to building AI decision tools, and healthcare leaders must weigh them carefully. Matthew G. Hanna stresses that ethical review and regulatory compliance are essential for AI in healthcare; without them, AI might harm patients or exclude some groups.
Building AI tools that serve all U.S. patients requires a deliberate set of practices. By applying them, healthcare leaders can guide AI toward fairer, better care and help close health gaps.
For administrators, owners, and IT managers in U.S. healthcare, adopting culturally competent AI tools requires careful planning and oversight. Success means addressing bias early, being transparent, involving the community, and ensuring fair access for all groups.
Working with experts such as Simbo AI on office automation, and continuing to learn about ethical AI, helps create a healthcare environment where AI improves both clinic operations and patient equity. With careful use and ongoing monitoring, AI becomes a practical tool for delivering better, unbiased care to everyone.
Dr. Kameron C. Black is a first-generation Latino physician and clinical informatics fellow at Stanford. His research focuses on virtual care model innovation, agentic AI implementation in healthcare workflows, mitigating bias in clinical decision support tools, data-driven quality improvement, and AI applications in geriatric medicine. He also emphasizes health equity initiatives.
Dr. Black completed his DO at Rocky Vista University College of Osteopathic Medicine, an internal medicine residency at Oregon Health & Science University, and holds an MPH in community and behavioral health from the University of Colorado. He is currently in a clinical informatics fellowship at Stanford focused on healthcare AI agents and workflow automation.
Dr. Black researches the implementation of agentic AI tools that automate workflows, reduce administrative burdens, and enhance clinical decision support. His work aims to alleviate physician burnout by optimizing efficiency and reducing cognitive overload through intelligent healthcare AI systems embedded in clinical settings.
Dr. Black is Epic Systems Physician Builder certified and holds Cosmos Data Science & Super User certifications, including a Cosmos Researcher badge. These skills enable him to work effectively with electronic health records, data science, and AI tool development in clinical environments.
He has clinical experience across academic medical centers, safety-net Federally Qualified Health Center (FQHC) hospitals, and large integrated systems such as Kaiser Permanente, giving him a broad perspective on diverse healthcare workflows and challenges.
Dr. Black’s research has been published in journals such as Nature Scientific Data, JMIR, and Applied Clinical Informatics. He actively participates in professional organizations and conferences like the American Medical Informatics Association and contributes to symposiums on AI for learning health systems.
His MPH in community and behavioral health provides insight into health equity and population health, allowing him to develop AI systems that prioritize culturally competent care and reduce disparities in healthcare delivery.
Dr. Black received awards including the Leadership Education in Advancing Diversity scholar at Stanford, Residency Award for Excellence in Scholarship at OHSU, and 1st place in the MIT Hacking Medicine Digital Health hackathon, underscoring his leadership and innovative skills in healthcare AI.
He is an active member of the American Medical Informatics Association and the American College of Physicians and serves on committees for events like the AMIA annual symposium and public health abstract reviews, fostering the dissemination of AI research and best practices.
Dr. Black focuses on agentic AI systems that are transparent and minimize bias in clinical decision support. He advocates for culturally competent AI policies and strives to integrate AI responsibly into healthcare workflows to improve quality and reduce burnout while addressing equity concerns.