Recent projects across U.S. research institutions show a growing focus on using AI to reduce health disparities, particularly in under-resourced communities. For example, the George Washington University (GW) School of Medicine and Health Sciences and the University of Maryland Eastern Shore are collaborating on a project funded by an $839,000 NIH grant, part of the larger $1.9 million AIM-AHEAD initiative. The goal is to create AI tools that make risk predictions fairer and easier to interpret, with a focus on cardiometabolic disease, cancer, and behavioral health.
Dr. Qing Zeng leads the project, which involves community partners such as local health organizations and schools. This engagement helps ensure the AI tools meet the needs of diverse groups, including Black, Latino, LGBTQ+, and low-income populations. The approach aims to build trust in AI among the frontline health workers who serve these communities, and it stresses that technology should narrow health disparities, not widen them.
Healthcare leaders should follow this research closely. The AI tools will be tested in clinical use cases, with feedback from the health workers who serve these populations. Used carefully, such tools can give clinicians more accurate and equitable predictions, which may improve outcomes for patients who have historically struggled to access quality care.
AI shows promise, but its use in healthcare raises important ethical concerns. Studies and experts point to three main types of bias in healthcare AI: bias in the data used to train models, bias built into algorithm design, and bias in how tools are deployed in practice. If these biases are not addressed, AI may produce inaccurate or harmful recommendations, such as missing a diagnosis or allocating resources unfairly, which can worsen outcomes for vulnerable patients.
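One way to make the "missed diagnosis" concern concrete is to compare a model's false-negative rate, meaning how often it misses truly high-risk patients, across demographic groups. A minimal Python sketch with made-up groups and predictions (not real data or any specific model):

```python
# Hypothetical sketch: auditing a risk model's misses per patient group.
# Groups, labels, and predictions below are illustrative, not real data.
from collections import defaultdict

def false_negative_rate_by_group(records):
    """records: list of (group, actually_high_risk, predicted_high_risk)."""
    misses = defaultdict(int)     # truly high-risk patients the model missed
    positives = defaultdict(int)  # all truly high-risk patients per group
    for group, actual, predicted in records:
        if actual:
            positives[group] += 1
            if not predicted:
                misses[group] += 1
    return {g: misses[g] / positives[g] for g in positives}

# Illustrative data: the model misses more high-risk patients in group B.
records = [
    ("A", True, True), ("A", True, True), ("A", True, False), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", True, True), ("B", False, False),
]
rates = false_negative_rate_by_group(records)
```

A large gap between groups in this metric is exactly the kind of disparity an equity-focused evaluation would flag before deployment.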
Ethical issues extend beyond bias. Privacy is a major concern because AI systems rely on sensitive health data. Laws such as the EU's GDPR and the U.S. Genetic Information Nondiscrimination Act (GINA) offer protections, but gaps remain. Patients need clear information about how their data are collected, used, and shared, and they should be able to decline AI-based treatments if they choose.
Another concern is the loss of human connection in care. AI can assist with diagnoses and routine tasks, but it cannot provide the emotional support needed in areas such as childbirth, mental health, and pediatric care. Healthcare leaders should capture AI's benefits while preserving essential human contact.
Healthcare leaders must also consider how transparent AI systems are. Many AI tools operate as "black boxes," so even clinicians cannot see how they reach decisions. This can erode trust among both clinicians and patients, especially when AI-driven choices affect health outcomes.
Explainable AI aims to show how a model reaches its conclusions, which can surface potential biases or errors. When AI is transparent, care teams can verify its recommendations, make better-informed decisions, and be held accountable if something goes wrong.
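A simple form of explainability is an additive model whose score can be decomposed into per-feature contributions, so a clinician can see which factors drove a prediction. A hedged Python sketch with invented feature names and weights (not any real clinical model):

```python
import math

# Hypothetical sketch: a tiny logistic risk score whose prediction can be
# broken down into per-feature contributions. Weights are illustrative.
WEIGHTS = {"age_over_65": 0.8, "smoker": 1.1, "high_bp": 0.9}
BIAS = -2.0

def predict_with_explanation(features):
    # Each contribution is weight * feature value; their sum plus the bias
    # passes through a sigmoid to give a probability-like risk score.
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    logit = BIAS + sum(contributions.values())
    risk = 1 / (1 + math.exp(-logit))
    return risk, contributions

risk, why = predict_with_explanation({"age_over_65": 1, "smoker": 1, "high_bp": 0})
# 'why' shows which factors pushed the score up, so the output is auditable.
```

Real explainability tooling is more sophisticated, but the principle is the same: every output comes with an account of what produced it.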
Several U.S. organizations are pushing for rules that hold AI developers accountable for unfair or harmful behavior. Health providers should choose AI tools that explain their decisions clearly and keep audit records, so they can trace how patient data influenced each recommendation.
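The record-keeping piece can be as simple as wrapping every AI recommendation in an audit entry that captures the inputs, the model version, and the output. A minimal sketch, with hypothetical field names and a toy stand-in model:

```python
import datetime

# Hypothetical sketch: log each AI recommendation with its inputs so
# reviewers can reconstruct decisions later. Field names are illustrative.
audit_log = []

def recommend_with_audit(patient_inputs, model_fn, model_version="demo-0.1"):
    recommendation = model_fn(patient_inputs)
    audit_log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": patient_inputs,
        "recommendation": recommendation,
    })
    return recommendation

# Toy stand-in model: flag a referral when a risk score exceeds a threshold.
result = recommend_with_audit(
    {"risk_score": 0.72},
    lambda x: "refer" if x["risk_score"] > 0.5 else "routine",
)
```

In production such a log would live in secure, access-controlled storage, but even this shape makes "why did the system say that?" an answerable question.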
AI also raises broader questions of social fairness. Access to AI healthcare tools often favors well-resourced urban centers over rural or low-income areas, which could widen health disparities unless policies promote equitable distribution.
AI may also reshape healthcare jobs. Fields such as medical imaging, pathology, and nursing could be affected by AI and robotics. While automation can reduce errors and speed up work, job displacement could deepen economic hardship.
Healthcare leaders must balance AI adoption with plans to retrain workers and maintain morale. The White House recently allocated $140 million to support responsible AI development. Supporting workers through these transitions is essential to keeping the healthcare system functioning well.
One practical application of AI today is front-office automation. Companies such as Simbo AI use AI to handle front-desk phone services, improving patient access and streamlining administration by managing schedules, answering common questions, and routing calls without overburdening staff.
For healthcare managers and IT staff, AI phone automation means shorter wait times, fewer dropped calls, and better communication with patients, which can improve satisfaction and operational efficiency. These systems must still protect patient privacy, keep data secure, and let patients reach a person when needed.
Automating administrative tasks should also follow ethical principles such as fairness and respect for patients. AI should not create barriers for patients with limited technology skills, language differences, or access issues. Features such as multilingual support and an easy path to a live person help keep the experience equitable.
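Those equity safeguards can be built directly into call-routing logic: route to a human whenever the caller's language is unsupported or the caller asks for one. A hedged sketch with invented languages, keywords, and destination names (not Simbo AI's actual system):

```python
# Hypothetical sketch of equitable call routing for an AI front desk.
# Supported languages, keywords, and destinations are illustrative.
SUPPORTED_LANGUAGES = {"en", "es"}
HUMAN_KEYWORDS = {"operator", "person", "representative"}

def route_call(language, transcript):
    if language not in SUPPORTED_LANGUAGES:
        return "human"          # never strand a caller the AI cannot serve
    words = set(transcript.lower().split())
    if HUMAN_KEYWORDS & words:
        return "human"          # explicit escape hatch to a real person
    if "appointment" in words:
        return "scheduling_bot"
    return "faq_bot"
```

The key design choice is that the human fallback is the default for any case the automation cannot confidently handle, rather than an option buried behind menus.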
Automation also generates data that can improve future AI health tools. When front-office systems record patient interactions carefully and ethically, they create datasets that can improve care while keeping patient information private.
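"Carefully and ethically" usually means stripping or pseudonymizing identifiers before records feed any analytics. A minimal sketch using a keyed hash, with hypothetical field names; real de-identification must follow HIPAA and organizational policy, not this toy:

```python
import hashlib
import hmac

# Hypothetical sketch: pseudonymize identifiers in call records before
# they are used for analytics. Key and field names are illustrative.
SECRET_KEY = b"rotate-me"  # in practice, kept in a secrets manager, not code

def pseudonymize(patient_id):
    # Keyed hash: stable for linking records, not reversible without the key.
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

def deidentify_call_record(record):
    safe = dict(record)
    safe["patient_id"] = pseudonymize(record["patient_id"])
    safe.pop("phone_number", None)  # drop fields analytics never needs
    return safe

record = {"patient_id": "MRN-1234", "phone_number": "555-0100", "reason": "refill"}
clean = deidentify_call_record(record)
```

Keyed hashing (rather than a plain hash) matters here: without the key, an attacker cannot re-derive pseudonyms from known patient IDs.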
As AI expands in healthcare, medical administrators and IT staff carry an important responsibility: balancing AI's potential to improve care with ethics, transparency, and fairness. Done well, this can help AI benefit patients across diverse U.S. communities without deepening existing inequities.
The AI-FOR-U project aims to develop trustworthy AI tools that address health disparities in under-resourced communities by enhancing the fairness and explainability of risk-prediction models in healthcare.
The project is a collaboration between the George Washington University (GW) School of Medicine and Health Sciences and the University of Maryland Eastern Shore (UMES).
Qing Zeng, PhD, a professor of clinical research and leadership and director of GW’s Biomedical Informatics Center, is leading the project.
The AI tools will focus on cardiometabolic disease, oncology, and behavioral health, as selected by community partners.
The impact will be evaluated through clinical use cases and by measuring frontline workers’ trust in the AI tools.
The project received a two-year grant of $839,000 under a larger $1.9 million initiative to advance health equity using AI.
Community partners include various organizations serving diverse populations, such as Alexandria City Public Schools and Unity Healthcare.
The project aims to ensure that AI applications do not increase healthcare inequities, and to improve users' understanding of AI decision-making.
Community engagement is integral, with input from partners during focus groups, interviews, and surveys to guide tool development.
The project is part of the Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD), supported by the NIH.