AI tools now handle a wide range of healthcare tasks, from payment processing and appointment scheduling to support for some medical decisions. These systems promise faster workflows, stronger data analysis, and lower costs for hospitals.
But as adoption grows, so do concerns about oversight. Connecticut State Senator Saud Anwar, for example, proposed a law to stop health insurance companies from relying solely on AI to decide whether patients receive care. The bill followed reports that Cigna Insurance used AI to deny more than 300,000 healthcare payment requests over a two-month period in 2022, spending an average of just 1.2 seconds on each case. Decisions made that quickly raise questions about whether patient needs and medical details are genuinely being considered.
Senator Anwar argued that AI-driven denials can put company profits ahead of patient health. Patients, he said, should have “a human being on the other side” reviewing their care, not just an algorithm trained to deny claims. Many doctors share this concern: a survey by the American Medical Association found that three in five physicians worry AI could override sound medical judgment, especially in prior authorization decisions.
More than 40 U.S. states passed or proposed laws addressing AI in healthcare during 2024. These bills generally focus on balancing AI's benefits against the need for oversight and transparency in how the technology is used. The sheer volume of new legislation shows that policymakers see AI in healthcare as a matter for regulation, not just innovation.
Some healthcare groups support regulating AI to prevent harm, while others emphasize the benefits of carefully designed systems. Susan Halpin, Executive Director of the Connecticut Association of Health Plans, said health insurers use AI mainly to improve operations while keeping humans in control of medical decisions. In her view, AI supports healthcare delivery by speeding up claims processing and improving patient interaction.
Beyond the states, AI in healthcare is also drawing attention from the federal government and from other countries. The U.S. government has begun shaping AI policy through legislation and executive action. The AI Labeling Act of 2023 would require that people be told when content is AI-generated, and the Advancing American AI Act directs federal agencies such as the Department of Homeland Security to develop rules for using AI safely.
President Biden’s Executive Order on AI sets out three main goals: national security, economic security, and healthcare safety. It also emphasizes civil rights and transparency, signaling that the government wants AI development to proceed carefully.
The European Union has taken a different approach. The EU AI Act sorts AI uses into risk categories (unacceptable, high, limited, and low) and imposes strict requirements on high-risk AI in healthcare and other areas. AI-enabled medical devices in the EU, for example, must meet rigorous standards to protect patient safety and fundamental rights, reflecting the effort to balance new technology with protection.
There are also calls for common AI rules worldwide. The United Nations adopted a resolution urging countries to ensure AI respects human rights. Nearly all member states agreed, though China was hesitant, underscoring the political tensions over who controls AI.
Healthcare managers, practice owners, and IT staff feel the direct effects of these changing rules and debates. Because healthcare is complex and fast-moving, careful governance of AI matters. One ongoing challenge is transparency: Paul Kidwell, Senior VP at the Connecticut Hospital Association, said hospital staff must understand how AI systems are trained and how they are monitored. Without that understanding, staff may become confused by the technology or lose trust in it.
AI also reaches healthcare through workflow automation: answering phones, managing appointments, contacting patients, and handling routine questions. Companies such as Simbo AI build phone automation tools for healthcare that take over calls and scheduling so staff can focus on harder tasks that need human judgment.
Healthcare leaders and IT managers find these systems helpful because they absorb routine call volume and free staff for work that requires human judgment. Still, clear boundaries matter. AI should not replace people on sensitive calls that need a personal touch, and systems must be ready to hand an issue off to a human when needed, as the sketch below illustrates.
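To make that handoff rule concrete, here is a minimal sketch of escalation logic for an automated phone front desk. Everything in it is hypothetical: the intent labels, the 0.75 confidence threshold, and the function names are illustrative assumptions, not details from Simbo AI or any real product.

```python
from dataclasses import dataclass

# Hypothetical call-triage sketch. Intent labels, threshold, and routing
# targets are assumptions made for illustration, not a vendor's actual API.

SENSITIVE_INTENTS = {"clinical_question", "billing_dispute", "complaint", "emergency"}
AUTOMATABLE_INTENTS = {"schedule_appointment", "reschedule", "office_hours", "refill_status"}

@dataclass
class CallIntent:
    label: str         # intent predicted by the speech/NLU layer
    confidence: float  # model confidence, 0.0 to 1.0

def route_call(intent: CallIntent, human_available: bool) -> str:
    """Decide whether the automated system handles a call or hands it to staff."""
    escalate = "human" if human_available else "voicemail_with_callback"
    # Anything clinical, financial, or emotionally sensitive goes to a person.
    if intent.label in SENSITIVE_INTENTS:
        return escalate
    # Low-confidence classifications should never be handled silently by the bot.
    if intent.confidence < 0.75:
        return escalate
    if intent.label in AUTOMATABLE_INTENTS:
        return "automated_flow"
    # Unknown intents default to a person, not the bot.
    return escalate

if __name__ == "__main__":
    print(route_call(CallIntent("schedule_appointment", 0.93), human_available=True))  # automated_flow
    print(route_call(CallIntent("billing_dispute", 0.97), human_available=True))       # human
```

The design choice worth noting is the default: anything the system cannot confidently classify as routine goes to a person, not the bot.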
Successful AI automation integrates well with existing systems and includes ongoing training, so the technology supports people instead of causing problems.
Senator Anwar’s bill in Connecticut, like similar measures in other states, faces real obstacles. Insurance companies, which hold considerable influence in the legislative process, often oppose limits on AI use, arguing that the technology cuts costs and improves operations in ways that benefit both patients and payers.
Balancing the needs of healthcare workers, patients, insurers, and technology makers is difficult. Lawmakers must find ways to write AI rules that keep care safe without blocking innovation.
Healthcare managers should watch these developments closely. Staying informed about new rules will help them choose the right technology and prepare for changes in how they operate and interact with patients.
In the fast-changing field of AI in healthcare, leaders must balance operational gains with patient safety and legal compliance. Teaching staff what AI can and cannot do, tracking new legislation, and working closely with technology providers will be key steps toward using AI responsibly.
Senator Saud Anwar expressed concern about health insurance companies using AI to determine patient care, saying it can lead to denials that block patient access to necessary treatments.
A ProPublica investigation revealed that Connecticut-based Cigna Insurance used AI to deny more than 300,000 payment requests, prompting Senator Anwar to propose legislation to prohibit the practice.
Cigna’s AI system processed prior authorization requests in an average of 1.2 seconds per case, raising concerns about the quality and accuracy of such rapid decisions.
Anwar warned that quick AI denials of care could leave patients suffering needlessly while awaiting essential treatments.
Halpin stated that while health carriers do use AI, critical decisions remain under human control, which helps maintain accountability in patient care.
An American Medical Association survey indicated that three in five physicians are concerned that AI may override medical judgment and systematically deny necessary care.
Halpin highlighted that responsibly developed AI systems can improve healthcare access, enhance patient engagement, and streamline administrative processes, making care more efficient and effective.
Kidwell emphasized the need for transparency regarding how AI systems are trained and used, stating that understanding their oversight is crucial for healthcare professionals.
As of 2024, at least 40 states have proposed or passed legislation regulating AI, particularly concerning its application in healthcare, signaling a broader push for oversight.
Anwar expects pushback from the insurance industry, which historically wields significant influence over legislative processes, potentially complicating the bill’s passage.