Horizontal AI regulations are laws designed to apply across many industries rather than to a single sector. The goal is a common set of rules that any organization deploying AI can follow. On the surface this approach seems straightforward, but such broad rules overlook important problems specific to healthcare.
Healthcare in the U.S. is heavily regulated because it involves highly sensitive patient information, real health risks, and difficult ethical questions. Laws such as HIPAA focus on safeguarding patient data. Adding AI makes compliance more complicated, and horizontal regulations often miss these additional challenges.
The European Union's Artificial Intelligence Act entered into force in 2024. The law attempts to balance innovation with safety and ethics. Even so, researchers Hannah van Kolfschooten and Janneke van Oirschot argue that the Act does not fully meet healthcare's needs: because it is built to span many sectors, it lacks the detail required to address healthcare AI risks. This echoes debates in the U.S. about whether general AI laws such as the Algorithmic Accountability Act or California's Assembly Bill 331 are sufficient for healthcare AI.
Healthcare AI is different from, and more complex than, AI in other areas, owing to the sensitivity of health data, the direct stakes for patient safety, and the dense web of existing regulation.
Some U.S. AI laws, such as the Algorithmic Accountability Act, call for AI impact assessments, mainly aimed at preventing bias and discrimination. California's Assembly Bill 331 requires transparency and impact reviews for AI systems, including in healthcare. But these laws are not well integrated with existing healthcare regulations, producing overlaps and gaps.
Because horizontal laws are written for general use, they rarely engage deeply with healthcare's particular needs.
Airlie Hilliard, an expert on AI governance, argues that healthcare needs AI rules that are clear and specific: they should prevent harm, keep data private, treat patients fairly, and permit the appropriate use of patient data for care.
For those running healthcare organizations in the U.S., AI rules made specifically for healthcare can help in many ways, from keeping patients safe to clarifying legal obligations.
Beyond clinical care, AI matters in healthcare's administrative work, including phone calls, scheduling, patient questions, and answering services. AI can make these tasks faster and more reliable.
For example, Simbo AI focuses on automating front-office phone tasks. Its AI handles routine calls so that staff can focus on higher-value work, and it helps patients by reducing wait times. Calls for appointment confirmations, insurance checks, or urgent requests are handled quickly and accurately, leading to smoother workflows and less stress for office staff.
But using AI this way also raises concerns that call for careful rules, particularly around data handling, system reliability, and accountability.
Healthcare-specific AI rules for these administrative uses can help keep patients safe and treated fairly. They also guide healthcare workers on how to manage data, audit systems, and control risks when using AI for office work.
The EU's AI Act, which entered into force in August 2024, illustrates why sector-specific rules are needed. It emphasizes transparency, accountability, and risk management, especially for healthcare AI applications such as emergency call handling. Hannah van Kolfschooten and Janneke van Oirschot argue that the Act advances both safety and innovation, but that additional healthcare rules are still needed to protect patients adequately.
U.S. lawmakers should take note. They need AI laws that fit with existing healthcare regulation such as HIPAA and FDA medical device policy, and that require risk assessments, ongoing monitoring, data security, and fair use of AI.
Healthcare leaders must understand these changing rules and act early to adopt AI that complies with the law, especially in patient care and administrative tasks. Companies like Simbo AI show how AI can help in healthcare offices, but they must also follow, and help shape, clear, healthcare-specific rules.
By moving beyond broad AI rules to health-focused ones, the U.S. can better protect patients, support healthcare innovation, and guide providers in using AI safely.
Artificial intelligence is expanding in U.S. healthcare, in both clinical care and office work. The current broad AI rules work in some respects but do not fit healthcare's particular needs. From protecting sensitive data to managing risks in patient care and emergencies, healthcare-specific AI rules are needed: they give healthcare leaders clear direction for keeping patients safe, complying with the law, and using AI effectively.
Policymakers should create AI rules that work with existing healthcare laws. Healthcare organizations should watch the regulatory landscape closely and choose AI tools that keep data secure, operate transparently, and can be trusted. Companies like Simbo AI demonstrate the benefits of AI in healthcare offices, but they also show why strict rules are needed to protect patients.
Balancing new technology with clear rules will be essential as AI becomes a larger part of U.S. healthcare.
The EU Artificial Intelligence Act is a legally binding framework that sets rules for the development, marketing, and use of AI systems in the European Union, aimed at fostering innovation while protecting individuals from potential harm.
The AI Act entered into force in August 2024.
Healthcare is one of the top sectors for AI deployment and will experience significant changes due to the AI Act.
The AI Act outlines responsibilities for technology developers, healthcare professionals, and public health authorities, requiring compliance with established rules.
The AI Act aims to protect individuals by creating a regulatory framework that ensures the safe and ethical use of AI technologies in various sectors, including healthcare.
The healthcare sector has distinct requirements due to the sensitive nature of health data and the need for patient safety, making specific guidelines necessary.
A horizontal approach may not address the unique complexities of healthcare, so sector-specific regulations are required for adequate protection and performance.
The article recommends adopting further guidelines tailored to the healthcare sector so the AI Act can be implemented effectively.
The AI Act will significantly reform national policies by introducing new requirements and standards for AI deployment in healthcare.
The article notes that the AI Act inadequately addresses patient interests, highlighting the need for more focused regulations to ensure their protection.