Artificial intelligence (AI) programs in healthcare use algorithms trained on large sets of data. These programs help with tasks like looking at scans, predicting disease risk, and managing patient communications. But if the data used is incomplete or biased, AI results can be unfair. This can hurt groups like racial and ethnic minorities, people with disabilities, older adults, and others protected by law.
AI discrimination in healthcare can arise from several sources, starting with the data a system is trained on, and its effects have already been documented.
For example, a 2019 study found that an AI tool used in the United States gave lower risk scores to Black patients than to white patients with similar health problems. The tool used past healthcare spending as a proxy for health need, and because less had historically been spent on Black patients' care, it underestimated how sick they were. As a result, Black patients were referred for fewer additional health services even when their needs were the same or greater. This shows how AI can reinforce existing disparities.
Other studies support this. For instance, research in dermatology found that AI trained mostly on images of lighter skin often misdiagnosed skin conditions on darker skin. This increased risks for those patients.
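To make these findings concrete, the sketch below shows one way a team might audit a risk-scoring tool for group differences among patients with similar health needs. It is a minimal illustration in Python, assuming a hypothetical patient table with group, chronic_conditions, and risk_score columns; it is not the method used in the cited studies.

```python
# Minimal sketch (illustrative only): compare average risk scores across groups
# within bands of similar health need. Column names are hypothetical.
import pandas as pd

def audit_risk_scores(df: pd.DataFrame) -> pd.DataFrame:
    """Mean risk score and patient count per group, within each need band."""
    df = df.copy()
    # Bucket patients by a rough proxy for health need (here, condition count).
    df["need_band"] = pd.cut(
        df["chronic_conditions"], bins=[0, 2, 5, 100],
        labels=["low", "medium", "high"], include_lowest=True,
    )
    # Within each need band, compare the average score each group receives.
    return (df.groupby(["need_band", "group"], observed=True)["risk_score"]
              .agg(["mean", "count"])
              .unstack("group"))

# Toy data: patients with similar needs but systematically different scores.
patients = pd.DataFrame({
    "group": ["A", "A", "B", "B", "A", "B"],
    "chronic_conditions": [4, 4, 4, 4, 1, 1],
    "risk_score": [0.72, 0.68, 0.55, 0.50, 0.20, 0.18],
})
print(audit_risk_scores(patients))
```

If one group consistently receives lower scores than another within the same need band, the tool deserves closer review before it is used to allocate services.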
The United States has started making rules to prevent discrimination by AI in healthcare. Two rules are especially important for healthcare providers.
The first is the nondiscrimination rule under Section 1557 of the Affordable Care Act: healthcare organizations must fix or stop using AI tools found to be biased, and the HHS Office for Civil Rights (OCR) can investigate complaints and enforce corrections.
The second concerns transparency. AI systems should be trained on diverse data, and the Office of the National Coordinator for Health Information Technology (ONC) asks developers to be open about the data they use. This helps create fairer AI.
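As one hypothetical illustration of that kind of openness, the short sketch below summarizes the demographic make-up of a training dataset so it can be reported alongside a model. The column names are assumptions for the example, not a format required by ONC.

```python
# Hypothetical sketch: report the share of training records in each demographic
# category so reviewers can see who the model was (and was not) trained on.
import pandas as pd

def composition_report(train: pd.DataFrame, columns: list[str]) -> dict[str, pd.Series]:
    """Share of training records in each category of each listed column."""
    return {col: train[col].value_counts(normalize=True, dropna=False)
            for col in columns}

# Toy training table with self-reported demographic fields (illustrative).
training_data = pd.DataFrame({
    "race_ethnicity": ["White", "Black", "White", "Hispanic", "White", "Asian"],
    "age_group": ["18-44", "45-64", "65+", "18-44", "45-64", "65+"],
    "sex": ["F", "M", "F", "F", "M", "M"],
})

report = composition_report(training_data, ["race_ethnicity", "age_group", "sex"])
for column, shares in report.items():
    print(f"\n{column}:\n{shares.round(2)}")
```

A report like this does not fix an unbalanced dataset by itself, but it makes gaps visible so they can be addressed.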
Beyond complying with the law, healthcare organizations face ethical questions about fairness and harm from AI bias. A study in Modern Pathology identified several types of bias that can affect AI models in medicine.
Ethical concerns center on fairness and safety: if an AI system gives wrong advice for some groups, it can cause harm and erode trust, and biased AI can wrongly delay or deny care.
To deal with these issues, healthcare organizations need to evaluate AI tools thoroughly and keep monitoring them after deployment. They should make sure AI serves all patients well, avoid perpetuating bias, and keep clinicians responsible for decisions.
Healthcare managers and IT teams can help reduce bias through practical steps such as testing AI tools for performance differences across patient groups, monitoring systems after they go live, training staff on appropriate use, and being transparent with patients about when AI is involved.
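As a minimal sketch of the testing and monitoring steps just described, the example below compares a model's false-negative and false-positive rates across patient groups and flags large gaps for review. The column names, the 10-percentage-point threshold, and the toy data are all illustrative assumptions rather than regulatory requirements.

```python
# Illustrative sketch: per-group error rates for a deployed prediction model.
# A large gap in false-negative rates means the tool misses true cases more
# often for one group than another, which warrants review.
import pandas as pd

def per_group_error_rates(df: pd.DataFrame) -> pd.DataFrame:
    """False-negative and false-positive rates for each patient group."""
    rows = {}
    for group, g in df.groupby("group"):
        fn = int(((g["y_true"] == 1) & (g["y_pred"] == 0)).sum())  # missed cases
        fp = int(((g["y_true"] == 0) & (g["y_pred"] == 1)).sum())  # false alarms
        pos = int((g["y_true"] == 1).sum())
        neg = int((g["y_true"] == 0).sum())
        rows[group] = {"fnr": fn / pos if pos else float("nan"),
                       "fpr": fp / neg if neg else float("nan"),
                       "n": len(g)}
    return pd.DataFrame.from_dict(rows, orient="index")

def needs_review(rates: pd.DataFrame, max_gap: float = 0.10) -> bool:
    """Flag the model if false-negative rates differ across groups by more than max_gap."""
    return (rates["fnr"].max() - rates["fnr"].min()) > max_gap

# Toy audit data: true outcomes and model predictions with a group label.
predictions = pd.DataFrame({
    "group":  ["A", "A", "A", "B", "B", "B"],
    "y_true": [1, 1, 0, 1, 1, 0],
    "y_pred": [1, 1, 0, 0, 1, 0],
})
rates = per_group_error_rates(predictions)
print(rates)
print("Review needed:", needs_review(rates))
```

Running a check like this on a schedule, and documenting the results, supports both the monitoring and the accountability steps above.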
One AI use case in healthcare is automating phone calls in front offices. These automated systems can help manage patient calls better. But they must be used carefully to avoid discrimination.
Because phone automation interacts directly with patients, it can affect healthcare fairness.
Medical offices using AI for phone automation therefore need to check these tools for bias. Doing so fits with California's AI advisory and with rules such as the federal Section 1557 regulation.
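One simple check, sketched below under hypothetical assumptions, is to compare how often automated calls are resolved without being escalated to staff across patient groups, for example by preferred language. The call-log fields are invented for illustration and would come from the phone system's own reporting in practice.

```python
# Hypothetical sketch: look for uneven outcomes in an AI phone system's call
# logs. Field names ("preferred_language", "resolved_by_ai") are invented.
import pandas as pd

def resolution_rates(calls: pd.DataFrame) -> pd.Series:
    """Share of calls the automated system resolved, per patient group."""
    return calls.groupby("preferred_language")["resolved_by_ai"].mean()

# Toy call log for illustration.
call_log = pd.DataFrame({
    "preferred_language": ["en", "en", "en", "es", "es", "es"],
    "resolved_by_ai":     [True, True, False, False, False, True],
})

print(resolution_rates(call_log))
```

Large gaps suggest some patients face longer waits or extra steps to get the same service, which is exactly the kind of disparity an office should investigate and document.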
As AI becomes more common in healthcare, leaders must manage these challenges to keep patient care fair. Understanding bias, the applicable laws, and the ethical stakes helps organizations run trustworthy AI systems.
Following California's advisory and the HHS rule means having clear policies, training staff, testing AI systems, and being transparent with patients. These steps help reduce discrimination and improve care and trust.
Combining strong governance with useful AI tools, such as phone automation, requires ongoing attention to fairness and inclusion. Acting now will help protect vulnerable groups and support equitable healthcare across the country.
California's advisory provides guidance to healthcare providers, insurers, and entities that develop or use AI, highlighting their obligations under California law, including consumer protection, anti-discrimination, and patient privacy laws.
The risks it identifies include noncompliance with laws prohibiting unfair business practices, practicing medicine without a license, discrimination against protected groups, and violations of patient privacy rights.
To manage these risks, entities should implement risk identification and mitigation processes, conduct due diligence and risk assessments, regularly test and validate AI systems, train staff, and be transparent with patients about AI usage.
California's consumer protection law prohibits unlawful and fraudulent practices, including the marketing of noncompliant AI systems, and making inaccurate claims with or about AI could amount to a deceptive practice that violates the law.
Only licensed human professionals can practice medicine, and they cannot delegate these duties to AI. AI can assist decision-making but cannot replace licensed medical professionals.
Discriminatory practices can occur if AI systems produce less accurate predictions for historically marginalized groups, limiting their access to healthcare even when the systems appear neutral on their face.
Healthcare entities must comply with laws like the Confidentiality of Medical Information Act, ensuring patient consent before disclosing medical information and avoiding manipulative user interfaces.
California is actively regulating AI with several enacted bills, while the federal government has adopted a hands-off approach, leading to potential inconsistencies in oversight.
Recent California bills include requirements for AI detection tools, patient disclosures when generative AI is used, and mandates for transparency about training data.
Examples of risky AI uses in healthcare include using generative AI to create misleading patient communications, making treatment decisions based on biased data, and double-booking appointments based on predictive modeling.