Healthcare disparities in the United States disproportionately impact Black, Latino, and other underserved populations. These disparities result from factors such as systemic racism, socioeconomic inequalities, and limited access to quality healthcare resources. AI technologies are increasingly involved in guiding diagnoses, treatment plans, and resource allocation. Depending on their design and use, AI systems can either reduce or increase these disparities.
Research by Obermeyer et al. has shown that widely used AI algorithms can produce racially skewed predictions. In one large commercial risk-prediction tool, past healthcare spending was used as a proxy for medical need; because less is historically spent on Black patients, the algorithm systematically underestimated how sick they were, and Black patients had to be considerably sicker than White patients to be flagged for the same additional care. If AI is not properly adjusted in this way, it can direct fewer resources or less intensive treatment to minority patients, reinforcing existing inequities.
A key reason for such disparities is the use of race as a variable in AI health algorithms. Although race has often been treated as a proxy for genetic or clinical differences, it is increasingly recognized as an unreliable and inappropriate marker because it conflates social circumstances with biology. Algorithms that include racial data without proper context can divert care and resources toward White patients, unintentionally sustaining systemic racism.
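The proxy problem can be illustrated with a toy example. All names and numbers below are invented for illustration: when a model is trained to predict spending rather than sickness, two equally sick patients receive different risk scores if one group has historically had less access to (and therefore less spending on) care.

```python
# Toy illustration of proxy bias: identical true need, unequal historical spending.
patients = [
    {"group": "A", "true_need": 5, "annual_cost": 5000},
    {"group": "B", "true_need": 5, "annual_cost": 3000},  # same need, less access, lower spend
]

def risk_score_from_cost(patient, cost_per_point=1000):
    # A cost-trained model effectively ranks patients by past spending,
    # not by how sick they actually are.
    return patient["annual_cost"] / cost_per_point

for p in patients:
    p["score"] = risk_score_from_cost(p)

# Group B's patient has the same true need but receives a lower score (3.0 vs 5.0),
# so a score-based cutoff would allocate extra care to group A first.
```

The fix described by Obermeyer et al. is not to adjust scores by race but to change the training label from cost to a direct measure of health need.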
Some professional organizations have started removing race from clinical decision tools. For example, the American Heart Association revised its Heart Failure Risk Score to exclude race. Similarly, race has been removed from estimated glomerular filtration rate (eGFR) calculations and Vaginal Birth After Cesarean (VBAC) tools. These changes aim to create fairer clinical assessments and AI applications free from race-based bias.
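As a concrete example of a race-free clinical calculation, the sketch below implements the 2021 CKD-EPI creatinine equation, which estimates GFR from serum creatinine, age, and sex only, with no race term. The function name and interface are our own; the constants are those of the published 2021 refit.

```python
def egfr_ckd_epi_2021(serum_creatinine_mg_dl: float, age_years: int, is_female: bool) -> float:
    """Estimate GFR (mL/min/1.73 m^2) with the 2021 race-free CKD-EPI creatinine equation."""
    # Sex-specific constants from the 2021 refit; note the absence of any race coefficient.
    kappa = 0.7 if is_female else 0.9
    alpha = -0.241 if is_female else -0.302
    scr_over_kappa = serum_creatinine_mg_dl / kappa
    egfr = (142
            * min(scr_over_kappa, 1.0) ** alpha
            * max(scr_over_kappa, 1.0) ** -1.200
            * 0.9938 ** age_years)
    if is_female:
        egfr *= 1.012
    return egfr
```

Higher creatinine and older age both lower the estimate, as expected clinically; the same inputs now yield the same estimate for every patient.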
Despite risks, AI has the potential to lower racial disparities in healthcare. Studies suggest that if AI systems are designed with fairness in mind, they can help physicians make more objective decisions that reduce unconscious bias. For example, AI-driven approaches to pain management can lessen unexplained racial differences if the data and algorithms are carefully reviewed and adjusted.
About 51% of Americans who see racial bias in healthcare believe AI might help reduce it. AI’s ability to analyze large, diverse datasets allows it to spot patterns that could be overlooked by clinicians and offer treatment plans less shaped by personal biases.
Experts like Robert Pearl argue that AI holds potential to improve health equity through supporting better physician decisions. Frameworks from researchers such as Irene Dankwa-Mullan integrate health equity and racial justice principles into AI development. This approach ensures AI tools are created to serve all racial groups fairly and help address systemic issues.
Practical steps to reduce AI bias include collecting more diverse healthcare data for training algorithms; avoiding race as a proxy in clinical decisions without evidence; embedding principles of racial justice and equity at every stage of AI design, testing, and deployment; and increasing diversity among AI development teams.
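As one sketch of what testing for bias can mean in practice, the following checks a simple equal-opportunity condition: among patients who truly needed extra care, did the model miss them at similar rates across groups? The record layout, function names, and 5% gap threshold are illustrative assumptions, not a standard.

```python
from collections import defaultdict

def false_negative_rates(records):
    """records: (group, predicted_high_need, actually_high_need) tuples.
    Returns each group's share of truly high-need patients the model missed."""
    missed = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        if actual:
            total[group] += 1
            if not predicted:
                missed[group] += 1
    return {g: missed[g] / total[g] for g in total}

def audit_equal_opportunity(records, max_gap=0.05):
    """Flag the model if missed-care rates differ across groups by more than max_gap."""
    rates = false_negative_rates(records)
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap <= max_gap
```

An audit like this belongs in every stage named above: run it on training data before deployment and on live predictions afterward, and treat a failing check as a signal to retrain or recalibrate.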
Public opinion about AI in healthcare in the United States is mixed. A Pew Research Center study found that 60% of Americans feel uncomfortable with AI being used for diagnosis and treatment decisions. Only 38% expect AI to improve health outcomes. Meanwhile, 57% worry AI might harm the patient-provider relationship, an important part of effective care.
Acceptance varies depending on the AI application. For example, 65% of U.S. adults say they would want AI assistance in skin cancer screening. However, only 31% support AI-guided pain management after surgery, an area where racial bias has commonly been observed. Additionally, 79% would not use AI chatbots for mental health support, showing resistance to AI in emotionally sensitive areas.
This cautious attitude suggests healthcare administrators and IT managers must carefully select how and where to implement AI. Transparency about AI’s functions, ongoing bias monitoring, and retaining strong human oversight can help build patient trust.
Healthcare practices increasingly use workflow automations to improve service quality, patient experience, and operational efficiency. AI-driven tools are especially useful in front-office tasks like phone automation, patient scheduling, and answering services.
For medical practice administrators and IT managers, integrating AI into front-office functions offers benefits such as faster call handling, more consistent scheduling, fewer missed patient inquiries, and reduced administrative workload.
These AI tools can enhance patient experience by offering reliable and impartial communication at the first contact point. Automating repetitive front-office tasks also frees clinical and administrative staff to focus on more detailed patient care, where human interaction remains important.
Beyond communication, AI workflow tools can support clinical decision-making by processing large datasets in unbiased ways. They can alert providers when clinical protocols vary based on demographics unrelated to medical need. This helps healthcare teams audit and improve processes toward more equitable treatment.
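A minimal sketch of such an alert, assuming encounter records that carry a demographic group, a clinical indication, and whether treatment was given. The group labels, indication strings, and 10% gap threshold are illustrative assumptions.

```python
from collections import defaultdict

def treatment_rate_by_group(encounters):
    """encounters: (demographic_group, clinical_indication, treated) tuples.
    Returns {indication: {group: treatment rate}} so comparisons stay
    within the same clinical indication."""
    treated = defaultdict(lambda: defaultdict(int))
    seen = defaultdict(lambda: defaultdict(int))
    for group, indication, was_treated in encounters:
        seen[indication][group] += 1
        if was_treated:
            treated[indication][group] += 1
    return {ind: {g: treated[ind][g] / n for g, n in groups.items()}
            for ind, groups in seen.items()}

def disparity_alerts(encounters, max_gap=0.10):
    """Yield (indication, gap) wherever same-indication treatment rates
    diverge across demographic groups by more than max_gap."""
    for indication, rates in treatment_rate_by_group(encounters).items():
        gap = max(rates.values()) - min(rates.values())
        if gap > max_gap:
            yield indication, round(gap, 3)
```

Because the comparison is stratified by indication, an alert points at demographic variation that medical need does not explain, which is exactly the pattern auditors want surfaced; a real deployment would add statistical testing so small samples do not trigger false alarms.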
Using AI to reduce racial bias in healthcare comes with challenges, including biased or unrepresentative training data, limited transparency into how algorithms reach conclusions, and patients' uneven trust in automated care.
Medical practice administrators, owners, and IT managers face a need for balanced AI use. Integrating AI tools in front-office workflows and clinical decision support requires ongoing oversight to detect and address racial bias.
Recommended best practices include being transparent with patients about where AI is used, auditing algorithms regularly for racial bias, training systems on diverse patient data, and keeping clinicians responsible for final care decisions.
Thoughtful AI use within healthcare administration can help reduce racial disparities while improving operational efficiency and patient satisfaction.
Artificial intelligence offers a possible way to reduce racial bias in healthcare treatment but requires careful use. Healthcare leaders and administrators in the United States must guide AI adoption with attention to fairness, transparency, and patient-centered care. By doing so, medical practices can improve the quality and fairness of care for all patients, regardless of background.
60% of Americans would feel uncomfortable if their healthcare provider relied on AI for diagnosing diseases and recommending treatments.
Only 38% believe AI will improve health outcomes, while 33% think it could lead to worse outcomes.
40% think AI would reduce mistakes in healthcare, while 27% believe it would increase them.
57% believe AI in healthcare would worsen the personal connection between patients and providers.
51% think that increased use of AI could reduce bias and unfair treatment based on race.
65% of U.S. adults would want AI for skin cancer screening, believing it would improve diagnosis accuracy.
Only 31% of Americans would want AI to guide their post-surgery pain management, while 67% would not.
40% of Americans would consider AI-driven robots for surgery, but 59% would prefer not to use them.
79% of U.S. adults would not want to use AI chatbots for mental health support.
Men and younger adults are generally more open to AI in healthcare, while women and older adults express more discomfort.