In recent years, several U.S. states have enacted laws governing the use of AI in healthcare. These laws center on transparency and human oversight, recognizing that while AI can speed up work, it also introduces risks such as bias, erroneous decisions, and gaps in accountability.
Human Review and Limits on AI Decision-Making
- California’s Senate Bill 1120 (SB 1120), effective January 2025, is a major state law on AI in healthcare coverage. It prohibits AI alone from making final determinations of medical necessity in utilization review and management; licensed physicians or other qualified healthcare professionals must review and approve those decisions.
- SB 1120 also requires AI tools used in these processes to rely on clinical data specific to the patient’s own medical history, not just general population data. This helps prevent unfair or erroneous denials based on broad statistical patterns.
- Health plans and disability insurers must disclose their use of AI in utilization review policies to providers, patients, and regulators, making these processes more transparent.
- Other states, including Connecticut (HB05590), Illinois (HB5918), Indiana (HB1620), and Tennessee (SB1261), have similar laws. They require meaningful human oversight of AI decisions and mandate that patients be told when AI plays a role in their care or coverage decisions.
Disclosure to Patients and Insured Individuals
- Laws such as Indiana’s HB1620 and California’s Assembly Bill 3030 (AB 3030) require patients to be told when AI helps generate clinical messages or informs decisions about their care or coverage.
- Under AB 3030, healthcare providers and insurers must clearly disclose when generative AI produces clinical information, and they must tell patients how to reach a human clinical provider with further questions or concerns.
- This kind of transparency helps patients understand how their care is being handled and can prevent confusion or frustration when AI generates messages or decisions.
Anti-Discrimination and Accountability Measures
- Laws also aim to prevent AI from introducing bias or discrimination. For example, Tennessee’s SB1261 requires AI used in utilization review to rely on individual patient data rather than blanket rules that might harm patients unfairly.
- Connecticut’s SB00002 requires organizations deploying high-risk AI systems to maintain policies that identify, document, and mitigate algorithmic bias.
- In California, the Department of Managed Health Care and the Department of Insurance will periodically audit AI-driven utilization review systems to verify compliance with anti-discrimination rules, privacy protections, and accuracy standards.
Legal Accountability for AI Decisions
- California’s AB 316 prevents developers and users of autonomous AI from escaping liability when their AI causes harm. The law ensures that those who create and deploy AI tools can be held responsible, even when the AI operates independently.
- This matters because AI systems sometimes make flawed decisions that block patients from care or coverage without any human check.
Risks Associated with AI in Healthcare Coverage Determinations
AI programs used by health insurers to make coverage determinations have been linked to increased claim denials and delays in care. Investigations have found:
- UnitedHealthcare’s denial rate for post-hospital care more than doubled between 2020 and 2022 after the company began using automated AI reviews.
- About 90% of these denials were overturned when federal administrative law judges reviewed appeals, indicating that many of the original decisions were wrong.
- Even so, fewer than 1% of patients appeal these denials, because the process is difficult, expensive, and slow; many patients lack the resources or the health to pursue an appeal.
- Current federal rules from the Centers for Medicare & Medicaid Services (CMS) require coverage decisions to account for individual patient circumstances and to state the reasons for denials, but they do not clarify how AI fits in. They also do not apply to employer-sponsored insurance plans, which cover many Americans under 65.
Experts such as Jennifer D. Oliva argue that coverage algorithms should be regulated by the FDA like medical devices, ensuring that AI tools meet standards for accuracy, fairness, and transparency before they are approved for use.
Transparency and Disclosure: Why It Matters to Medical Practices and Insurers
For healthcare providers and insurers running medical practices, the growing demand for AI transparency raises several challenges:
- Patient Trust and Autonomy: Letting patients know when AI plays a role in their care decisions helps preserve their trust. Patients should understand when AI-generated advice or messages affect their treatment or coverage.
- Regulatory Compliance: New laws require healthcare organizations to update their policies and communications to include AI disclosures. Failure to comply may lead to penalties or lawsuits.
- Reducing Bias and Discrimination: Transparency helps healthcare organizations detect potential AI bias and correct problems before they cause harm.
- Meeting Human Oversight Requirements: Providers must ensure their systems allow humans to review AI decisions, keeping clinicians involved in patient care.
AI and Workflow Integration in Front-Office and Insurance Operations
AI is widely used to automate administrative tasks in healthcare, especially front-office phone work and insurance claims processing. Companies such as Simbo AI apply AI to phone answering and patient interactions at the front desk.
Impact of AI-Powered Workflow Automation
- Call Handling and Appointment Scheduling: AI can answer high call volumes, triage patient questions, and book appointments, easing the front-desk workload and helping patients reach care sooner.
- Insurance Verification and Claims Management: AI helps verify eligibility, required authorizations, and claims status, and quickly flags missing documents or errors, speeding up insurance workflows.
- Disclosure in AI Interactions: Because laws require transparency, AI phone systems must tell patients when they are speaking with a virtual assistant, so patients are informed and can reach human staff on request (see the sketch after this list).
- Improving Accuracy and Reducing Errors: AI supports correct data entry and task triage; combined with human checks, this lowers the chance of claim denials caused by mistakes.
- Reporting and Risk Management: AI tools generate reports on efficiency, errors, and compliance, helping managers meet disclosure requirements and monitor AI system performance.
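To make the disclosure requirement concrete, here is a minimal sketch of how an AI phone system might lead every call with a virtual-assistant disclosure and keep a human escape hatch. All names and the toy intent classifier are hypothetical placeholders for illustration, not any vendor’s actual API:

```python
# Minimal sketch of a front-office call flow that leads with an AI
# disclosure and always offers a path to human staff. Everything here
# (handle_utterance, classify_intent, etc.) is hypothetical.

from dataclasses import dataclass, field

DISCLOSURE = (
    "You are speaking with an automated virtual assistant. "
    "Say 'representative' at any time to reach our staff."
)

@dataclass
class CallContext:
    transcript: list = field(default_factory=list)
    escalated: bool = False

def classify_intent(utterance: str):
    """Placeholder intent classifier: returns (intent, confidence)."""
    text = utterance.lower()
    if "representative" in text:
        return ("human_request", 1.0)
    if "appointment" in text:
        return ("schedule_appointment", 0.9)
    return ("unknown", 0.2)

def handle_utterance(ctx: CallContext, utterance: str) -> str:
    ctx.transcript.append(utterance)
    intent, confidence = classify_intent(utterance)
    # Escalate on explicit request or low confidence, so a human
    # remains available for anything the assistant cannot handle.
    if intent == "human_request" or confidence < 0.5:
        ctx.escalated = True
        return "Transferring you to our front-desk staff now."
    if intent == "schedule_appointment":
        return "I can help schedule that. What day works for you?"
    return "Could you tell me a bit more about what you need?"

def start_call():
    # Every AI-handled call opens with the required disclosure.
    return CallContext(), DISCLOSURE
```

The key design choice is that the disclosure is emitted unconditionally at call start and the escalation path is always active, rather than leaving either to the model’s discretion.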
Implementing AI Responsibly in Practice Management
- Train staff on what AI can and cannot do, especially regarding transparency and human review.
- Create procedures for prompt human review of AI decisions and messages, as sketched after this list.
- Regularly audit AI outputs for accuracy and possible bias.
- Update patient communications to clearly include required AI disclosures.
- Work with AI vendors who follow ethical design practices and comply with state and federal laws.
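As one illustration of the human-review step, the sketch below shows a workflow gate in which an AI recommendation can never finalize an adverse determination: denials are queued for a licensed reviewer, mirroring the SB 1120 principle that AI alone cannot deny. The structures and names are assumptions for illustration, not any specific product’s design:

```python
# Sketch of a human-review gate for utilization review: the AI may
# recommend, but adverse determinations are queued for a licensed
# reviewer instead of being finalized automatically.

from dataclasses import dataclass, field
from enum import Enum

class Recommendation(Enum):
    APPROVE = "approve"
    DENY = "deny"

@dataclass
class AIReview:
    case_id: str
    recommendation: Recommendation
    rationale: str  # clinical basis tied to the patient's own record

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def submit(self, review: AIReview) -> str:
        if review.recommendation is Recommendation.APPROVE:
            # Approvals may proceed; log the AI's rationale for audits.
            return f"{review.case_id}: approved (AI-assisted, logged)"
        # Denials never auto-finalize: route to a licensed reviewer.
        self.pending.append(review)
        return f"{review.case_id}: queued for physician review"

    def resolve(self, case_id: str, decision: Recommendation,
                reviewer_id: str) -> str:
        self.pending = [r for r in self.pending if r.case_id != case_id]
        # The human decision, not the AI recommendation, is final.
        return f"{case_id}: {decision.value} by {reviewer_id}"

queue = ReviewQueue()
print(queue.submit(AIReview("C-100", Recommendation.DENY,
                            "criteria not met per patient history")))
print(queue.resolve("C-100", Recommendation.APPROVE, "MD-42"))
```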
Key Points for Practice Administrators and IT Managers
To comply with rules on AI transparency and disclosure, medical practices and healthcare systems should:
- Understand State-Specific Regulations: Laws like California’s SB 1120 and AB 3030 set rules on AI use and disclosure, and states such as Connecticut, Illinois, Indiana, and Tennessee impose additional requirements. Practices must know and follow the laws wherever they operate.
- Develop Clear Patient Communication Strategies: Tell patients about AI use in plain language, and add disclaimers to AI-generated documents, emails, and phone interactions.
- Maintain Robust Human Oversight: Design workflows so that clinicians review and make the final decisions about care and coverage, with AI in a supporting role only.
- Ensure Data Privacy and Security: Protect patient data used by AI systems and comply with HIPAA and other privacy laws.
- Prepare for Audits and Reporting: Regulators in states like California require plans, providers, and insurers to report how they use AI and to verify compliance regularly, so keep documentation ready; a simple audit sketch follows this list.
- Partner with Trusted AI Vendors: Work with companies that understand healthcare AI regulation and build transparency, accuracy, and fairness into their tools.
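As a rough illustration of audit readiness, the sketch below computes two simple metrics from a decision log: the overturn rate on appeal and denial rates by demographic group. The fields, sample data, and the 1.25 threshold are assumptions made up for the example, not regulatory standards:

```python
# Illustrative compliance check over a decision log: overturn rate on
# appeal, plus a simple denial-rate comparison across groups that
# flags large gaps for human investigation.

from collections import defaultdict

decisions = [
    {"group": "A", "denied": True,  "appealed": True,  "overturned": True},
    {"group": "A", "denied": False, "appealed": False, "overturned": False},
    {"group": "B", "denied": True,  "appealed": True,  "overturned": False},
    {"group": "B", "denied": True,  "appealed": False, "overturned": False},
]

def overturn_rate(log):
    appealed = [d for d in log if d["appealed"]]
    if not appealed:
        return 0.0
    return sum(d["overturned"] for d in appealed) / len(appealed)

def denial_rates_by_group(log):
    totals, denials = defaultdict(int), defaultdict(int)
    for d in log:
        totals[d["group"]] += 1
        denials[d["group"]] += d["denied"]
    return {g: denials[g] / totals[g] for g in totals}

rates = denial_rates_by_group(decisions)
print(f"Overturn rate on appeal: {overturn_rate(decisions):.0%}")
worst, best = max(rates.values()), min(rates.values())
if best > 0 and worst / best > 1.25:  # assumed review threshold
    print("Denial-rate disparity exceeds review threshold:", rates)
```

A high overturn rate is a warning sign of the kind documented in the UnitedHealthcare appeals data above; routine checks like this keep such patterns from surfacing only during a regulator’s audit.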
AI Use in Insurance Coverage Decisions: What the Future Holds
The current rules reflect growing concern about AI’s effect on insurance claims. For example, UnitedHealthcare’s denial rates rose after it adopted AI-based reviews, yet many of those denials were overturned once humans reviewed them.
Because federal rules still do not fully address AI risks, states are enacting stronger laws on transparent use and accountability.
Future changes will likely include:
- More requirements for ongoing monitoring of AI fairness and performance.
- Clearer explanations, for patients and regulators, of how AI systems reach their decisions.
- Stronger legal liability for mistakes or harm caused by AI.
- Wider adoption of policies to identify and mitigate bias and discrimination risks.
- Mandatory human review of coverage decisions.
Medical practices and insurers in the U.S. should stay informed and adapt as these rules evolve, both to remain compliant and to keep patient trust.
A Few Final Thoughts
AI’s role in healthcare is growing. Regulators expect full transparency about AI use, strong human oversight, and safeguards against bias and harm. Medical practice administrators, owners, and IT managers need to understand and apply these rules to meet current and future legal requirements while improving both patient care and operations.
Frequently Asked Questions
What legislative measures has Connecticut taken regarding AI use in healthcare?
Connecticut introduced HB05590 to prohibit health insurers from using AI to deny health insurance claims, ensuring AI decisions do not solely determine claims outcomes, thereby protecting consumers from automated adverse decisions without human oversight.
How does Illinois regulate AI systems in health insurance?
Illinois’s Artificial Intelligence Systems Use in Health Insurance Act, HB5918, requires the Department of Insurance to oversee AI use in health insurance decisions, prohibits adverse decisions based solely on AI without meaningful human review, and empowers human reviewers to override AI recommendations.
What disclosure requirements does Indiana’s HB1620 impose regarding AI in healthcare?
Indiana’s HB1620 mandates healthcare providers to disclose to patients when AI is used in healthcare decisions and requires insurers to disclose AI use when making coverage decisions, promoting transparency about AI’s role in patient care and coverage.
What protections are proposed under Tennessee’s SB1261 for AI used by health insurers?
Tennessee’s SB1261 requires AI-based utilization reviews to rely on individual clinical data, prohibits discrimination against enrollees, mandates periodic performance reviews of AI, and offers individuals a private right of action to seek damages if AI use violates the bill.
How does New York regulate AI tools used in state employment decisions?
New York’s Senate Bill 822 requires state agencies using automated employment decision tools to publicly list these tools, describing their purpose and use, and prohibits altering employees’ rights or benefits based solely on AI, enhancing transparency and accountability.
What legal responsibilities does California impose on AI developers regarding autonomous harm?
California AB316 prohibits defendants from avoiding liability for harm caused by autonomous AI they developed or used, ensuring accountability even when AI operates independently, thus closing legal loopholes related to AI-inflicted injuries or damages.
What requirements does New York Assembly Bill 1338 set for AI-generated evidence?
NY Assembly Bill 1338 requires AI-generated or AI-processed evidence in legal proceedings to be supported by independent admissible evidence and proven reliable through qualified expert testimony, raising evidentiary standards for AI-produced information in courts.
How are AI tools regulated in housing pricing decisions in some states?
States like New York, Kentucky, Maryland, and New Hampshire have proposed laws banning the use of algorithmic tools to set or adjust residential rent amounts, preventing potentially unfair or discriminatory pricing based on automated analyses.
What responsibilities do organizations deploying high-risk AI have under Connecticut SB00002?
Organizations using high-risk AI systems must exercise reasonable care to prevent algorithmic discrimination, implement risk management policies, and actively identify, document, and mitigate discrimination risks arising from consequential AI-based decisions.
What transparency obligations does New York A773 impose on banks using AI for lending?
New York A773 requires banks using automated decision tools for lending to perform annual disparate impact analyses, notify loan applicants about the use of AI tools, and disclose the data inputs, ensuring fair lending practices and consumer awareness of AI influence.