Healthcare providers in the United States often struggle to deliver equitable care to all patients. Marginalized groups frequently face barriers to good care, timely diagnosis, treatment, and management of chronic illnesses. Income, race, ethnicity, language barriers, geography, and disability can all delay access to needed healthcare.
These inequities not only affect individual health but also place extra pressure on healthcare systems and staff.
Artificial intelligence (AI), especially newer systems called large multi-modal models (LMMs), may help address some of these problems. AI can analyze complex data from many sources, such as health records, medical images, clinicians' notes, and social information, helping doctors and healthcare teams make better decisions.
But using AI well requires careful planning, ethical guidelines, and solid technical support so that it does not make existing problems worse.
AI can analyze patient information more deeply and faster than people can. This helps surface patterns tied to social and health disparities that might otherwise be missed. Researchers such as B Lee Green of Moffitt Cancer Center say AI can uncover links between health risks and race, ethnicity, gender, income, and environment in groups that are often overlooked.
AI can also help improve diagnosis and tailor treatments to a person's unique biological and social background. This personalized approach reduces reliance on one-size-fits-all care that serves many patients poorly.
For example, machine learning can identify high-risk pregnant women by analyzing language, ethnicity, and health records alongside symptoms, as described by physicians such as Pooja Mittal.
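As a rough illustration of this kind of risk stratification (not Mittal's actual model), the sketch below assumes a small tabular extract with hypothetical columns such as preferred_language, ethnicity, and a couple of clinical flags, and fits a simple logistic regression to flag pregnancies that may need extra outreach. The column names, toy data, and 0.5 threshold are all assumptions for illustration only.

```python
# Hypothetical sketch: flag high-risk pregnancies from demographic and clinical fields.
# Column names, toy data, and the 0.5 threshold are illustrative assumptions, not a real model.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Toy stand-in for an EHR extract; a real dataset would be far larger and carefully governed.
records = pd.DataFrame({
    "preferred_language": ["en", "es", "es", "en", "es", "en"],
    "ethnicity": ["white", "hispanic", "hispanic", "black", "hispanic", "white"],
    "prior_preterm_birth": [0, 1, 0, 1, 1, 0],
    "hypertension": [0, 1, 1, 1, 0, 0],
    "high_risk": [0, 1, 1, 1, 1, 0],  # label: needed extra care in a prior pregnancy
})

features = records.drop(columns="high_risk")
labels = records["high_risk"]

model = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"),
          ["preferred_language", "ethnicity"])],
        remainder="passthrough",  # keep the numeric clinical flags as-is
    )),
    ("classify", LogisticRegression(max_iter=1000)),
])
model.fit(features, labels)

# Score a new patient and route them to outreach if the estimated risk is high.
new_patient = pd.DataFrame([{
    "preferred_language": "es", "ethnicity": "hispanic",
    "prior_preterm_birth": 1, "hypertension": 0,
}])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Estimated risk: {risk:.2f} -> "
      f"{'refer to care coordinator' if risk > 0.5 else 'routine follow-up'}")
```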
However, AI is only as good as the data it learns from. If the data is biased or missing information about some groups, AI may produce wrong or unfair results, and some communities are poorly represented in health data. There is also a risk that clinicians will rely on AI too heavily even when it is wrong, which can widen health gaps.
To address this, the World Health Organization (WHO) has issued guidelines for using AI ethically in healthcare. They recommend including diverse groups in designing AI, checking for bias, and being transparent about how AI works to maintain trust.
Large multi-modal models are types of AI that can work with many kinds of data, like text, pictures, and videos, to give medical advice or help with office work.
The WHO lists five main uses for LMMs: diagnosis, patient guidance, clerical tasks, medical education, and drug development.
Used well, LMMs can improve healthcare access and quality, especially where specialists or staff are few. For example, AI can help spot patients who need urgent care or high-risk health issues that might be missed due to limited time or scattered information.
Social factors such as education, income, safe housing, and neighborhood conditions affect health but are often missing from medical records. AI can combine these social factors with health data to support fairer care.
Still, if AI is trained on data that leaves out or misrepresents groups, it can keep unfairness going.
Experts such as Rajkomar and groups like the AI Fairness Project work to identify and reduce bias in medical AI.
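One concrete way teams audit for this kind of bias is to compare error rates across patient subgroups. The sketch below is a generic illustration of that idea, not any specific group's method: it computes a screening model's false negative rate per group, since missed cases concentrated in one group are exactly the failure mode that widens disparities. The data and group names are fabricated.

```python
# Generic subgroup error audit: compare false negative rates across patient groups.
# The predictions, labels, and group names below are fabricated for illustration.
import pandas as pd

audit = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "B", "A"],
    "true_label": [1,   0,   1,   1,   1,   0,   1,   0],  # 1 = condition actually present
    "predicted":  [1,   0,   1,   0,   1,   0,   0,   0],  # 1 = model flagged the patient
})

def false_negative_rate(group_df: pd.DataFrame) -> float:
    """Share of true positives the model missed within one subgroup."""
    positives = group_df[group_df["true_label"] == 1]
    if positives.empty:
        return float("nan")
    return float((positives["predicted"] == 0).mean())

for name, group_df in audit.groupby("group"):
    print(f"group {name}: false negative rate = {false_negative_rate(group_df):.2f}")

# A large gap between groups (here group B is missed far more often than group A)
# is a signal to revisit the training data, features, or decision threshold.
```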
AI developers should work with many people—patients from different backgrounds, doctors, ethics experts, and community leaders—to make sure AI helps all groups and does not make gaps worse.
In U.S. healthcare, office staff and managers spend a lot of time on repetitive tasks like scheduling, paperwork, and managing many health records. This adds stress and leaves less time to care for patients.
AI tools, such as those for phone answering and appointment reminders, can take over simple tasks. This lets staff focus on harder jobs that need human care and judgment.
In clinics with fewer resources, especially those serving marginalized areas, automation helps ensure that patients don't miss appointments or calls. AI tools that support multiple languages and are culturally appropriate are important for fair care.
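As a minimal sketch of what such automation might look like, the snippet below selects an appointment reminder in the patient's preferred language from a small template table. The languages, template wording, and patient record are assumptions; a real deployment would connect to the clinic's scheduling system and an SMS or voice gateway, and would use professionally reviewed translations.

```python
# Minimal sketch of a multilingual appointment reminder. Templates and the patient
# record are illustrative assumptions, not a production scheduling integration.
from datetime import datetime

REMINDER_TEMPLATES = {
    "en": "Hi {name}, this is a reminder of your appointment on {when}. Reply 1 to confirm.",
    "es": "Hola {name}, le recordamos su cita el {when}. Responda 1 para confirmar.",
}

def build_reminder(patient: dict, appointment_time: datetime) -> str:
    """Return a reminder in the patient's preferred language, falling back to English."""
    template = REMINDER_TEMPLATES.get(patient.get("preferred_language", "en"),
                                      REMINDER_TEMPLATES["en"])
    return template.format(name=patient["name"],
                           when=appointment_time.strftime("%Y-%m-%d %H:%M"))

patient = {"name": "Maria", "preferred_language": "es"}
print(build_reminder(patient, datetime(2025, 3, 14, 9, 30)))
# A real deployment would hand this string to an SMS or voice gateway and log
# delivery, so staff can follow up with patients who could not be reached.
```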
Healthcare leaders such as Mohamed Jalloh suggest reinvesting the gains from AI into better technology and training. Improving internet access and health IT systems in low-income and rural areas is also essential to making AI work well everywhere.
AI will only work if people trust it and if it is used under sound rules.
Many minority communities are wary of new technology because of past negative experiences with healthcare.
Building trust means being clear about how AI works and how patient data is kept safe. It also requires education so that doctors and patients understand that AI supports care rather than replacing human judgment.
Traco Matthews says involving trusted community members and education helps reduce fear about AI.
Developers need to keep auditing AI data and to protect the privacy, autonomy, and dignity of all patients.
AI can make medical information easier to understand for patients with different backgrounds and reading skills.
For example, AI can translate medical terms into simpler language or different languages. This helps patients and doctors understand each other better.
Ezra N. S. Lockhart points out that such tools help patients follow treatment plans and improve health.
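A clinic could prototype this kind of plain-language rewriting with an off-the-shelf model. The sketch below assumes the Hugging Face transformers library and the google/flan-t5-base model purely as stand-ins; any output intended for real patients would still need clinical and translation review.

```python
# Prototype sketch: rewrite clinical instructions in plain language with a small
# open model. Model choice and prompt are assumptions; outputs need clinical review.
from transformers import pipeline

simplifier = pipeline("text2text-generation", model="google/flan-t5-base")

instructions = (
    "Take the antihypertensive medication twice daily and monitor for orthostatic "
    "hypotension; return if you experience syncope."
)

prompt = "Rewrite the following medical instructions in plain, simple English: " + instructions
result = simplifier(prompt, max_new_tokens=96)
print(result[0]["generated_text"])
```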
AI can also take notes during doctor visits, reducing paperwork and giving doctors more time to listen to patients.
To make sure AI helps and doesn’t hurt, governments must act.
They should fund ethical AI systems, set rules to check AI effects on different groups, and make sure all voices are heard in technology decisions.
Healthcare organizations should advocate for better technology and internet access in underserved areas.
Lessons from heart health technology show that careful planning is needed to avoid creating new inequities.
AI efforts should focus on lowering barriers for marginalized groups.
Administrators, owners, and IT managers in healthcare can shape how AI is used in clinics and offices.
They can help make AI fair by investing in technology, being open about AI use, and including many voices in decisions.
Used carefully, AI can be part of modern healthcare that is fair and personalized for all patients while cutting down on unnecessary paperwork and delays.
LMMs are a type of generative artificial intelligence technology capable of accepting diverse data inputs, like text and images, and generating varied outputs. They can mimic human communication and perform tasks not explicitly programmed.
LMMs can enhance healthcare through applications in diagnosis, patient guidance, clerical tasks, medical education, and drug development, thereby improving operational efficiency and patient outcomes.
Risks include the production of false or biased information, poor-quality training data, ‘automation bias’ in decision-making, and cybersecurity vulnerabilities that endanger patient data.
Governments should invest in public infrastructure for ethical AI use, ensure compliance with human rights standards, assign regulatory bodies for assessment, and conduct mandatory audits post-deployment.
Developers should include diverse stakeholders, including medical providers and patients, in the design process to address ethical concerns and ensure that LMMs perform accurate, well-defined tasks.
‘Automation bias’ refers to the tendency of healthcare professionals and patients to overlook errors made by AI systems, potentially leading to misdiagnoses or poor decision-making.
Transparency fosters trust among users and stakeholders, allowing for better oversight, ethical responsibility, and informed decision-making regarding the risks and benefits of LMMs.
Independent audits help ensure compliance with ethical and human rights standards by assessing LMMs post-release, publishing findings on their impact and effectiveness.
If properly developed and utilized, LMMs can provide tailored health solutions that improve access to care, particularly for marginalized populations, thus lowering health disparities.
LMMs must adhere to ethical principles that protect human dignity, autonomy, and privacy, ensuring that AI technologies contribute positively to patient care and public health.