A big problem in U.S. healthcare AI projects is the lack of diversity both in the teams creating AI systems and in the data used. The National Institutes of Health’s (NIH) AIM-AHEAD program works to increase diversity among AI and machine learning researchers and to improve health fairness. A recent AIM-AHEAD meeting, which drew more than 600 registrants and 374 attendees, showed how important different voices are in AI research. Without them, AI could make health inequalities worse.
Many AI systems use electronic health records (EHRs) to find patterns and guide clinical choices. But often, the data does not include enough details about things like race, ethnicity, income level, or neighborhood. When important data is missing, AI can accidentally continue unfair treatment and cause unequal health results for some groups.
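As a concrete illustration of this kind of gap, the sketch below counts how often demographic fields are missing or blank across a set of records. The field names and record layout are hypothetical assumptions for illustration, not any real EHR schema:

```python
from collections import Counter

# Hypothetical field names; real EHR schemas (e.g., FHIR) differ.
DEMOGRAPHIC_FIELDS = ["race", "ethnicity", "income_bracket", "zip_code"]

def audit_missing_demographics(records):
    """Count how often each demographic field is absent or blank.

    `records` is a list of dicts, one per patient record.
    Returns a Counter mapping field name -> number of records
    where that field is missing or empty.
    """
    missing = Counter()
    for record in records:
        for field in DEMOGRAPHIC_FIELDS:
            value = record.get(field)
            if value is None or str(value).strip() == "":
                missing[field] += 1
    return missing

# Made-up example records:
sample = [
    {"race": "Black", "ethnicity": "", "zip_code": "73034"},
    {"race": None, "ethnicity": "Hispanic", "income_bracket": "low"},
]
report = audit_missing_demographics(sample)
for field in DEMOGRAPHIC_FIELDS:
    print(field, report[field])
```

A report like this can flag which fields need to be enriched or corrected before any model is trained on the data.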
Participants at the AIM-AHEAD meeting stressed that teams making AI must reflect the communities they serve. This means including doctors, researchers, community health workers, minority-serving organizations, and people from less-resourced places. For example, Indigenous communities want their data handled carefully and fairly so it is not misused. Krystal Tsosie, a scholar of Indigenous genomic data, stressed how important it is to protect data in AI work.
Stakeholder engagement means including all important people in planning, building, and checking healthcare projects. In AI projects, this means inviting doctors, patients, IT staff, community leaders, and local officials. This wide involvement helps make sure AI tools are useful, fair, and trustworthy.
Nicole Redmond, a leader in the AIM-AHEAD program, said trust is essential for sharing data and using AI well in healthcare. Medical organizations that build trust with their communities get better data and more patient involvement. AI works better with data that reflects real patient needs and experiences.
Many experts say partnerships must respect culture, especially when working with groups that might only get healthcare through telemedicine. These partnerships need respect, openness, and steady communication.
Community involvement in healthcare AI improves how well health programs work. A study of health projects in under-resourced U.S. neighborhoods and comparable settings abroad found that local leaders and trusted elders make the biggest difference. When community members feel heard, they join and stay involved, which makes these programs last longer.
For medical managers and IT specialists in the U.S., AI is not just for research; it affects daily work. One key area is automated phone services. Companies like Simbo AI are helping here. AI phone systems can handle scheduling, patient questions, and basic triage. This helps staff focus on harder tasks.
These automations help lower wait times, improve communication, and cut costs. But the AI must understand the community it serves. Language differences, accessibility, and culture all need to be considered. If not, patients might feel left out or treated unfairly.
For example, AI phone systems should support multiple languages and offer clear, simple menu options. This helps non-English speakers, older patients who are less comfortable with technology, and people with disabilities. The AI should also let patients reach a human when needed, because human contact is still important in healthcare.
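These ideas can be sketched as simple routing logic. The languages, keypad mapping, and escalation phrases below are illustrative assumptions, not the actual behavior of any vendor’s product (including Simbo AI’s):

```python
# A minimal sketch of call-menu routing for an AI phone system.
# Supported languages and escalation phrases are assumptions for
# illustration; a real deployment would reflect its community's needs.

SUPPORTED_LANGUAGES = {"1": "English", "2": "Spanish", "3": "Vietnamese"}
ESCALATION_PHRASES = {"agent", "representative", "human", "operator"}

def choose_language(keypress):
    """Map a keypad press to a menu language, defaulting to English."""
    return SUPPORTED_LANGUAGES.get(keypress, "English")

def route_utterance(text):
    """Transfer to a human whenever the caller asks for one;
    otherwise let the automated flow continue."""
    words = set(text.lower().split())
    if words & ESCALATION_PHRASES:
        return "transfer_to_human"
    return "continue_automated"

print(choose_language("2"))
print(route_utterance("please connect me to an agent"))
```

The key design point is the unconditional human escape hatch: no matter where the caller is in the automated flow, asking for a person always routes out of the AI.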
When AI is used carefully for routine tasks, it can help make healthcare fairer and more focused on patients. Data from these phone systems also helps improve AI over time. But this must follow strict rules to protect patient privacy and keep trust.
Even with its benefits, healthcare AI faces problems with training, data quality, and infrastructure. Many doctors have no formal training in statistics, AI, or data science, which makes collaborating with AI developers hard. The AIM-AHEAD program recommends teaching AI concepts earlier in medical school and continuing to train practicing doctors.
Older healthcare data often has poor quality. Some important information like race, ethnicity, or social factors is missing or wrong. This makes AI less reliable. The NIH wants better data rules that allow sharing but protect privacy.
Infrastructure is not just tech but also the people and networks needed to make AI work well in healthcare. To fix health unfairness, investment is needed in tech and social systems that support community involvement and training.
Not having enough diverse AI researchers hurts health fairness goals. Talitha M. Washington said we need to create a more diverse AI workforce to avoid biased results. The AIM-AHEAD meeting said it is important to keep diversity going long after the first funding ends.
Trust is very important for AI to succeed in healthcare. It means believing that data is safe, AI is used ethically, AI decisions are open, and patient rights are respected. Nicole Redmond from AIM-AHEAD said trust is key for both getting people involved and having good data.
Some patients and communities, like many Indigenous groups, have a history of medical abuse, so they are often skeptical. Building trust takes ongoing communication, involving the community, and respecting local customs and knowledge. For example, “community-first” approaches include users and stakeholders from the start.
Without trust, AI in healthcare can be rejected or used wrongly. Places that work to build trust see better AI results and health outcomes. This is because they get more accurate data and stronger patient involvement.
Involve Diverse Stakeholders Early: Include doctors, staff, IT experts, and community reps from the start. Local health workers and minority groups can help address community needs.
Improve Data Quality: Work with EHR vendors and IT teams to make sure data covers social factors, race, and ethnicity. This gives AI better, fairer results.
Support AI Training: Provide AI lessons for healthcare staff. Make sure doctors and leaders know what AI can and cannot do.
Ensure Cultural Sensitivity and Accessibility: When using AI tools like phone answering systems, include support for many languages and easy access for all patients.
Maintain Transparency and Communication: Tell patients how AI is used in their care. Highlight privacy and ethical data use to build confidence.
Collaborate for Sustainability: Join partnerships with NIH and others that work to increase diversity in AI research for long-term gains.
AI and machine learning have great potential to improve healthcare in the U.S. But success depends a lot on involving different types of people and building trust. The NIH’s AIM-AHEAD program shows how important diversity and community involvement are for fair and useful AI. Healthcare leaders should focus on good data, cultural respect, and including stakeholders when using AI tools like automated phone systems. These steps help fix bias and create better technology for all patients.
By using these ideas, U.S. medical practices can create AI that respects their diverse patients, supporting fair health care and better community health over time.
The AIM-AHEAD program, launched by the NIH, seeks to advance health equity and researcher diversity in AI/ML by increasing the participation of underrepresented communities in the field. It focuses on partnerships, research, infrastructure, and training.
Stakeholder engagement is crucial as it ensures diverse voices are included, builds trust, and helps address biases. Involvement from various stakeholders enriches the research process and enhances the relevance of AI/ML applications to underrepresented communities.
Two major themes emerged: 1) research teams must represent diverse voices, ensuring the community served is included, and 2) a diverse team is essential for data sovereignty and protection, fostering trust.
A community-first approach fosters collaboration and addresses real-world needs by involving community members in building algorithms and understanding local health disparities, leading to more relevant and effective AI solutions.
Engagement should include involving minority-serving institutions, educating the public on AI/ML, and ensuring meaningful partnerships that are not extractive, which enhances trust and collaboration.
Training programs should introduce data concepts early in education, actively engage underrepresented groups, and provide support through mentorship. Cultural awareness and structural barriers must be addressed to ensure participation.
Infrastructure encompasses not just technology but also people and social networks. Investments in equitable, user-friendly infrastructure are needed to ensure proper data collection and analysis from all community health centers.
Data collection faces issues like missing social determinants of health, biases in data capturing, and unprepared older datasets. Solutions include enriching datasets and improving methods to address these gaps.
If AI/ML algorithms are developed without diverse input and consideration of underrepresented communities, they risk perpetuating biases that can exacerbate existing health disparities.
Trust is vital for data quality, security, and engagement. Building trust with communities enhances participation in research and ensures ethical handling of data while addressing privacy concerns effectively.