AI systems in healthcare often use machine learning models to analyze large amounts of data and generate recommendations. These models depend heavily on their training data, and if that data does not represent the people the model serves, the result can be bias. Matthew G. Hanna and his team describe three main sources of bias in healthcare AI: the data used to train a model, the way the model is designed and built, and the way people interact with it once it is in use.
There is also temporal bias. This happens when AI uses old data that no longer matches new clinical practices, technology, or disease trends. Without regular updates, the AI can become less fair and less effective.
In the United States, Medicare Advantage Organizations (MAOs) must follow rules from the Centers for Medicare & Medicaid Services (CMS) when using AI. CMS released the MAO Final Rule and a FAQ memo in February 2024 to explain how AI can be used for Medicare coverage decisions.
CMS says AI can help make coverage decisions, but these decisions must focus on each patient’s specific situation. They should not be based only on large, general data sets. This helps AI support care that fits the individual, not a one-size-fits-all approach.
Patient privacy is very important. Under HIPAA rules, MAOs need to get patient permission before using protected health information (PHI) in AI. They must also use encryption, strong access controls, and data anonymization to keep patient data safe.
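As one illustration of what anonymization can look like in practice (not a method CMS prescribes), the short sketch below replaces direct identifiers with salted hashes before records reach an AI pipeline. The field names and the salt handling are assumptions for the example only; a real deployment would follow the organization's HIPAA de-identification policy.

```python
import hashlib
import os

# Hypothetical example: pseudonymize direct identifiers before records
# reach an AI pipeline. Field names and salt handling are placeholders.
SALT = os.environ.get("PHI_HASH_SALT", "change-me")  # keep the real salt secret

def pseudonymize(value: str) -> str:
    """Replace an identifier with a salted SHA-256 hash."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()

def strip_phi(record: dict) -> dict:
    """Return a copy of the record with direct identifiers pseudonymized."""
    cleaned = dict(record)
    for field in ("patient_name", "mrn", "phone", "email"):
        if field in cleaned:
            cleaned[field] = pseudonymize(str(cleaned[field]))
    return cleaned

record = {"patient_name": "Jane Doe", "mrn": "12345", "age": 67, "dx": "E11.9"}
print(strip_phi(record))
```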
CMS requires transparency. MAOs must explain clearly how AI affects decision-making. This includes sharing where the data comes from, the methods used, and any known biases. This helps build trust and lets people know AI’s role in healthcare.
CMS also requires regular audits and checks of AI systems to find and reduce bias. This helps ensure fairness and prevent discrimination, especially related to demographic factors, and aligns with the Affordable Care Act, which bans discrimination in healthcare.
AI should help clinicians, not replace their judgment. AI recommendations need to be evidence-based and validated in clinical settings so they fit naturally into healthcare providers' decision-making.
To reduce bias and support fairness, medical practices and healthcare leaders should apply strategies across the whole AI lifecycle, from development through everyday use:
Good data is the base of fair AI models. Medical administrators should make sure AI developers use data that matches the variety of patients they serve. This means including people of different ages, races, genders, economic backgrounds, and medical conditions.
If data is not representative, AI may work well only for some people and fail for others. For example, an AI tool trained mostly on images of light skin may not detect conditions well on darker skin. This can cause unfair health results.
In addition to including population diversity, data should come from many healthcare settings like city clinics, rural centers, large hospitals, and small practices. This helps capture different care methods and social factors.
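A simple way to check representativeness is to compare the makeup of the training data with the makeup of the population the practice serves. The sketch below shows one minimal version of such a check; the column name and the population shares are assumed for illustration.

```python
import pandas as pd

# Hypothetical training data; in practice this would come from the
# model developer's actual training set.
train = pd.DataFrame({
    "race_ethnicity": ["White", "White", "Black", "Hispanic", "White", "Asian"],
})

# Share of each group in the population the practice serves
# (assumed numbers, for illustration only).
served_population = {"White": 0.55, "Black": 0.20, "Hispanic": 0.18, "Asian": 0.07}

train_share = train["race_ethnicity"].value_counts(normalize=True)

print(f"{'group':<10}{'train':>8}{'served':>8}{'gap':>8}")
for group, served in served_population.items():
    in_train = train_share.get(group, 0.0)
    print(f"{group:<10}{in_train:>8.2f}{served:>8.2f}{in_train - served:>8.2f}")
```

Large gaps between the training share and the served share are a signal to ask the developer for more representative data.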
When building AI systems, developers need to review which features the model uses and how the AI is designed. They should avoid features that introduce bias, especially those too closely tied to race, income, or similar factors.
Many ethical AI guidelines suggest keeping information that can lead to unfair treatment based on race, ethnicity, or gender out of the AI's decision process.
Medical leaders can work with AI developers to ask for clear information about what data and features the AI uses. They should support ongoing improvements that focus on fairness.
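As a rough illustration of that kind of feature review, the sketch below sets aside columns that directly encode protected attributes, plus columns an equity review might flag as close proxies, before the data goes to model training. All column names here are hypothetical.

```python
import pandas as pd

# Hypothetical feature table assembled for model training.
features = pd.DataFrame({
    "age": [67, 54, 71],
    "a1c": [7.2, 6.1, 8.4],
    "race": ["White", "Black", "Hispanic"],      # protected attribute
    "zip_code": ["10001", "60612", "94110"],     # often a proxy for race/income
    "household_income": [42000, 31000, 55000],   # flagged in an equity review
})

# Columns excluded from model inputs. Protected attributes can still be
# kept separately for fairness audits, just not fed to the model.
EXCLUDED = ["race", "zip_code", "household_income"]

audit_columns = features[["race"]]           # set aside for later audits
model_inputs = features.drop(columns=EXCLUDED)

print(model_inputs.columns.tolist())  # ['age', 'a1c']
```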
After AI is in use, it must be checked regularly. Healthcare changes all the time, so audits help show whether the AI affects some groups differently or is losing accuracy.
Audits should compare clinical results across groups, check for false positives and false negatives, and review how decisions on coverage are made.
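A minimal sketch of such a per-group check is shown below. It computes false positive and false negative rates for each demographic group from an audit sample; the group labels, column names, and tiny sample are placeholders, and a real audit would use much larger data.

```python
import pandas as pd

# Hypothetical audit extract: model decisions vs. confirmed outcomes,
# with the demographic group kept only for auditing purposes.
audit = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "B"],
    "predicted": [1, 0, 1, 1, 0, 0, 1],   # model flagged the case
    "actual":    [1, 0, 0, 1, 1, 0, 0],   # clinically confirmed outcome
})

def rates(df: pd.DataFrame) -> pd.Series:
    """False positive and false negative rates for one group."""
    fp = ((df.predicted == 1) & (df.actual == 0)).sum()
    fn = ((df.predicted == 0) & (df.actual == 1)).sum()
    negatives = (df.actual == 0).sum()
    positives = (df.actual == 1).sum()
    return pd.Series({
        "false_positive_rate": fp / negatives if negatives else float("nan"),
        "false_negative_rate": fn / positives if positives else float("nan"),
    })

# Large gaps between groups are a signal to investigate and document.
print(audit.groupby("group")[["predicted", "actual"]].apply(rates))
```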
CMS rules ask MAOs to keep good records of these audits and fix any bias problems quickly.
AI is a tool to support doctors and nurses, not replace them. Healthcare leaders should have rules that make sure providers review AI suggestions carefully and use their judgment.
Training clinicians on how AI works and its limits helps them use it well rather than rely on it blindly. Ongoing conversation between clinicians and AI developers can improve AI designs too.
Medicine changes, so AI trained on old data can become less fair or useful. Regular updates with new data help lower temporal bias.
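One simple way to watch for this drift, sketched below with assumed numbers and thresholds, is to compare the model's recent accuracy on newly reviewed cases against the accuracy recorded at validation and flag it for retraining when the gap grows too large.

```python
# Minimal temporal-drift check: compare recent performance against the
# accuracy recorded when the model was validated. The baseline, threshold,
# and counts are assumptions for illustration.

BASELINE_ACCURACY = 0.91   # accuracy at initial clinical validation
MAX_DROP = 0.05            # tolerated drop before retraining is triggered

def needs_retraining(recent_correct: int, recent_total: int) -> bool:
    """Flag the model when recent accuracy falls too far below baseline."""
    recent_accuracy = recent_correct / recent_total
    return (BASELINE_ACCURACY - recent_accuracy) > MAX_DROP

# e.g. 830 of the last 1,000 reviewed cases were handled correctly
print(needs_retraining(830, 1000))  # True -> schedule an update with newer data
```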
Interaction bias can be reduced by watching how users respond to AI. Feedback can show whether the AI is being used incorrectly or misunderstood, and developers can then adjust the AI or give users better guidance.
AI-driven workflow automation, especially in front-office and administrative work, can help healthcare operations run more smoothly. Some companies focus on phone automation and AI answering services for medical offices.
Automating routine tasks helps staff by reducing their work and making sure patients get steady, on-time service. But medical leaders must make sure these AI systems are fair and follow privacy laws.
For example, when AI schedules appointments or handles patient questions by phone or chat, it must not treat some patient groups better than others. It should respect language and cultural differences and avoid unfair actions like cutting calls short or limiting appointments based on who the patient is.
AI automation can also help manage the health of whole groups by sorting calls or finding patients who need follow-up. This must be done fairly for all patients.
AI systems need to work well with existing healthcare IT and keep patient data secure under HIPAA. Using encryption, access controls, and anonymizing data are required when handling sensitive information through automated systems.
For Medicare Advantage and other insurance plans, automation can help by handling coverage questions and prior authorizations in a clear and fair way.
Healthcare administrators, practice owners, and IT managers are important for keeping AI fair. Working with legal and IT security teams helps make sure that patient data privacy and CMS rules are followed.
Using AI in healthcare is growing and has many benefits, but there is a risk of bias. By focusing on diverse data, fair design, clear processes, ongoing checks, and human review, healthcare in the U.S. can be fairer and better for all patients. When AI workflows are used carefully, they can also make operations more efficient while keeping care fair.
Healthcare leaders must stay aware of these risks and take steps to balance new technology with ethical care.
CMS released a FAQ Memo clarifying that while AI can assist in coverage determinations, MAOs must ensure compliance with relevant regulations, focusing on individual patient circumstances rather than solely large data sets.
MAOs must comply with HIPAA, including obtaining patient consent for using PHI and implementing robust data security measures like encryption, access controls, and data anonymization.
The rule mandates that MAOs disclose how AI algorithms influence clinical decisions, detailing data sources, methodologies, and potential biases to promote transparency.
CMS advises regular auditing and validation of AI algorithms, incorporating demographic variables to prevent biases and discrimination, ensuring fairness in healthcare delivery.
AI-supported systems should assist healthcare providers in clinical decisions while ensuring that these recommendations align with evidence-based practices and do not replace human expertise.
MAOs must follow CMS regulations related to AI in healthcare, including documentation and validation of AI algorithms for clinical effectiveness, ensuring compliance with billing and quality reporting requirements.
Coverage decisions need to be based on individual patient circumstances, utilizing specific patient data and clinical evaluations rather than broad data sets used by AI algorithms.
CMS is cautious about AI’s ability to alter coverage criteria over time and emphasizes that coverage denials must be based on static publicly available criteria.
Obtaining patient consent is vital in respecting patient privacy and complying with HIPAA regulations, ensuring that protected health information is handled appropriately.
Prior to implementation, MAOs must evaluate AI tools to ensure they do not perpetuate or introduce new biases, adhering to nondiscrimination requirements under the Affordable Care Act.