The Affordable Care Act (ACA) has changed healthcare in the United States, making it easier for people to access care and promoting non-discrimination in healthcare practices. Section 1557 is an important part of the ACA: it prohibits discrimination based on race, color, national origin, sex, age, or disability in health programs and activities that receive federal funding. As artificial intelligence (AI) tools become more common in healthcare, medical practice administrators, owners, and IT managers need to understand Section 1557’s implications.
Section 1557 applies to traditional healthcare providers and any organization that receives federal funding, including hospitals, health clinics, and insurers. The regulations under Section 1557 have been updated to reinforce and expand existing non-discrimination protections. Starting July 5, 2024, healthcare organizations must follow new mandates that promote fair treatment of all patients.
These new regulations acknowledge that AI tools and decision-support systems can unintentionally reinforce biases if not properly monitored. Ignoring these biases can lead to unequal access to treatment and create legal risks for healthcare organizations. Thus, healthcare providers must make sure that their AI implementations meet the non-discrimination standards set by Section 1557.
AI can improve healthcare delivery, making it more efficient and enhancing patient outcomes. The opportunities range from AI-driven diagnostics to automated administrative tasks. However, using AI also presents challenges, especially regarding data biases and ethical considerations.
Healthcare providers should understand that the data used to train AI decision-making systems shapes their outcomes. If that data carries biases, the AI’s conclusions may negatively impact marginalized groups, resulting in discriminatory practices that run counter to Section 1557’s goals. Therefore, medical practices must thoroughly evaluate their AI systems to avoid causing unintended harm to vulnerable communities.
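One way such an evaluation might look in practice is sketched below. This is a minimal, hypothetical example, not a regulatory requirement: the column name (`race_ethnicity`) and the 20% tolerance are assumptions chosen for illustration. It compares how demographic groups are represented in a model’s training data versus the patient population the tool will actually serve, and flags groups that appear substantially under-represented.

```python
import pandas as pd

# Hypothetical sketch: compare group representation in training data
# against the served patient population. The column name and the 20%
# tolerance are illustrative assumptions, not values from Section 1557.
def representation_gaps(training_df: pd.DataFrame,
                        population_df: pd.DataFrame,
                        group_col: str = "race_ethnicity",
                        tolerance: float = 0.20) -> pd.DataFrame:
    train_share = training_df[group_col].value_counts(normalize=True)
    pop_share = population_df[group_col].value_counts(normalize=True)
    report = pd.DataFrame({"training_share": train_share,
                           "population_share": pop_share}).fillna(0.0)
    # Flag groups whose training share falls short of their population
    # share by more than the chosen relative tolerance.
    report["under_represented"] = (
        report["training_share"] < report["population_share"] * (1 - tolerance)
    )
    return report.sort_values("population_share", ascending=False)
```

A report like this does not prove or disprove bias on its own, but it gives administrators and IT managers a concrete starting point for questions to put to vendors and data scientists.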
Healthcare organizations are required to take proactive measures with AI tools following the updates to Section 1557. By May 1, 2025, healthcare providers need to put strategies in place to identify and reduce discrimination risks linked to AI decision-support tools, which involves several key changes to how those tools are selected, monitored, and documented.
Healthcare providers also need to focus on language access and communication as part of their compliance strategies. New regulations require meaningful access to services for individuals with limited English proficiency (LEP), which in practice means providing qualified interpreter services and translated materials rather than relying on ad hoc arrangements.
One of the main challenges with AI in healthcare is the risk of biased outcomes from algorithms used in decision-support tools. The updated regulations under Section 1557 require healthcare providers to identify, assess, and address discrimination risks associated with AI.
Healthcare organizations must regularly analyze compliance gaps to confirm their AI tools follow non-discrimination standards. They should check their algorithms for fairness and transparency and adjust or replace biased systems as needed.
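A minimal sketch of such a fairness check is shown below. It assumes you can export the tool’s predictions alongside observed outcomes and a demographic attribute; the specific metrics (per-group selection rate and false negative rate) and the 0.8 disparity threshold are common illustrative choices, not figures taken from Section 1557.

```python
import pandas as pd

# Hypothetical fairness audit of a decision-support tool's output.
# Assumes a table with a binary prediction, an observed outcome, and a
# demographic group column; thresholds are illustrative, not regulatory.
def fairness_audit(df: pd.DataFrame,
                   group_col: str = "group",
                   pred_col: str = "prediction",
                   outcome_col: str = "outcome",
                   disparity_threshold: float = 0.8) -> pd.DataFrame:
    rows = []
    for group, g in df.groupby(group_col):
        selection_rate = g[pred_col].mean()  # share flagged by the tool
        positives = g[g[outcome_col] == 1]
        # False negative rate: actual positives the tool failed to flag.
        fnr = 1 - positives[pred_col].mean() if len(positives) else float("nan")
        rows.append({"group": group,
                     "n": len(g),
                     "selection_rate": selection_rate,
                     "false_negative_rate": fnr})
    report = pd.DataFrame(rows)
    # Disparate-impact style check: each group's selection rate
    # relative to the highest-rate group.
    report["selection_ratio_vs_max"] = (
        report["selection_rate"] / report["selection_rate"].max()
    )
    report["flag_for_review"] = report["selection_ratio_vs_max"] < disparity_threshold
    return report
```

Running a check like this on a recurring schedule, and keeping the resulting reports, also produces the kind of documentation trail that compliance reviews typically look for.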
Integrating AI into hospital workflows can improve both efficiency and patient experiences. Automating tasks like scheduling and enhancing patient communication can lead to a more effective healthcare delivery model. However, organizations must ensure that these AI-driven processes comply with Section 1557 regulations. In practice, that means validating tools before deployment, monitoring automated processes for disparate outcomes, and training staff on when to question or override AI-generated recommendations.
The changes to Section 1557 about non-discrimination and AI integration require healthcare providers to stay informed and prepared. As regulations evolve, organizations must adapt to meet new requirements.
Healthcare providers should promote a culture of compliance, where monitoring, reporting, and governance practices advance with technology. These measures not only lower risks but also enhance patient trust and satisfaction.
Collaboration between different departments is crucial for managing these changes. Medical practice administrators, clinical staff, and IT managers should work together to ensure that AI implementations meet ethical and legal standards. Regular staff training and interdepartmental meetings can help create a culture of compliance and share responsibility.
AI and technology can improve patient care significantly. However, their use should always align with fairness, integrity, and compliance with Section 1557 regulations.
As artificial intelligence continues to change healthcare, it is essential for medical practice administrators, owners, and IT managers to navigate the regulatory landscape. Following the requirements in Section 1557 allows healthcare organizations to promote equitable care, reduce legal risks, and build trust with patients. The future of responsible AI use in healthcare depends on proactive strategies aimed at mitigating risks and prioritizing equity while engaging effectively with diverse communities.
In 2024, California enacted over 10 AI-related laws addressing topics such as the use of AI with datasets containing personal information, communication of healthcare information using AI, and AI-driven decision-making in medical treatments and prior authorizations.
Section 1557 prohibits discrimination based on race, color, national origin, sex, age, or disability in health programs and activities that receive federal financial assistance.
HHS issued guidance emphasizing that AI tools in healthcare must comply with federal nondiscrimination laws, ensuring that their use does not lead to discriminatory impacts on patients.
The Colorado Artificial Intelligence Act (CAIA), effective February 1, 2026, mandates that employers and other deployers exercise ‘reasonable care’ when using AI in high-risk applications, signaling a broader regulatory approach to AI across sectors.
Fiduciaries should evaluate AI tools, audit service providers, review policies, enhance risk mitigation practices, and provide training to ensure compliance with laws and reduce bias in AI tools.
AI platforms can analyze large volumes of data to identify discrepancies and breaches of fiduciary duty within employee benefit plans, highlighting patterns of bias and inconsistencies in decisions.
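As a rough illustration of that kind of analysis, the sketch below groups benefit-claim decisions by demographic group and diagnosis category and flags combinations whose denial rates diverge sharply from the overall rate. The data layout, column names, and thresholds are assumptions made for the example; they do not describe any particular platform.

```python
import pandas as pd

# Hypothetical sketch: surface potentially inconsistent claim decisions
# in a benefit plan. Column names ("group", "diagnosis_category",
# "denied") and both thresholds are illustrative assumptions.
def flag_denial_disparities(claims: pd.DataFrame,
                            min_claims: int = 30,
                            gap_threshold: float = 0.10) -> pd.DataFrame:
    overall_denial_rate = claims["denied"].mean()
    summary = (claims
               .groupby(["diagnosis_category", "group"])["denied"]
               .agg(n="count", denial_rate="mean")
               .reset_index())
    # Only flag cells with enough claims to be meaningful, whose denial
    # rate exceeds the overall rate by more than the chosen gap.
    summary["flagged"] = (
        (summary["n"] >= min_claims) &
        (summary["denial_rate"] - overall_denial_rate > gap_threshold)
    )
    return summary.sort_values("denial_rate", ascending=False)
```

Flagged combinations are not findings of discrimination by themselves; they are prompts for a closer, documented review of how those decisions were made.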
Fiduciaries should document due diligence, assess the applicability of Section 1557 to their plans, and stay informed about new AI regulations and legal developments.
Fiduciaries are encouraged to consider obtaining or enhancing fiduciary liability insurance to address potential claims related to the use of AI technologies.
RFPs should include specific AI-related criteria, requiring vendors to demonstrate compliance with both state and federal regulations while adhering to best practices for AI.
The increasing sophistication of AI tools invites greater scrutiny of healthcare fiduciaries, as potential claimants can themselves use AI to analyze decision-making processes and identify discriminatory practices.