On October 30, 2023, President Biden signed Executive Order 14110, which aims to set rules for the safe use of AI across many sectors, including healthcare. The order directs federal agencies to oversee AI use and reduce its risks while encouraging responsible development.
The Department of Health and Human Services (HHS), the primary healthcare regulator, must establish an AI Task Force within 90 days of the order. Within a year of its creation, the Task Force will develop a strategic plan to guide AI use in healthcare, covering safety, fairness, privacy, and transparency. The order also directs HHS to launch an AI Safety Program by October 2024 to identify, track, and remediate clinical AI errors and keep patients safe.
These federal requirements are a step toward regulating AI tools, especially those that process protected health information or influence medical decisions. The goal is to prevent harm, protect privacy, and ensure AI is used fairly in healthcare settings.
Healthcare managers and owners must prepare to comply with the Executive Order alongside existing laws like HIPAA. HIPAA already sets strict limits on handling protected health information (PHI), and introducing AI adds new challenges in managing that data.
The Executive Order centers on several guiding principles:
- Safety and security of AI systems
- Transparency about how AI is used and how it reaches its outputs
- Governance, with clear accountability for AI throughout its lifecycle
- Non-discrimination, so AI does not produce unfair or biased outcomes
The HHS AI Task Force and Safety Program will put these principles into action. Healthcare organizations should also expect further guidance from the Office of Inspector General (OIG) on applying AI risk requirements in their specific areas.
HIPAA’s Privacy and Security Rules remain central to protecting patient data, and adopting AI means meeting those rules in new situations. HIPAA’s Security Rule defines three types of safeguards, all of which apply to AI:
- Administrative safeguards, such as policies, workforce training, and access management
- Physical safeguards, such as controls on the facilities and devices that store or process PHI
- Technical safeguards, such as encryption, authentication, and audit controls
AI often needs large volumes of PHI to work well, but HIPAA’s “minimum necessary” standard allows AI to use only the data it actually needs for its purpose. Tracking how AI systems access and use PHI is essential both for compliance and for preventing accidental exposure of sensitive information.
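As a minimal sketch of what enforcing and logging the minimum necessary standard could look like in code, consider the Python snippet below. The record fields, the allowlist, and the minimum_necessary helper are all hypothetical; a real allowlist would have to come from a documented minimum-necessary analysis.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("phi_audit")

# Illustrative allowlist: fields a scheduling AI actually needs.
# A real allowlist must come from a documented minimum-necessary analysis.
SCHEDULING_FIELDS = {"patient_id", "name", "phone", "preferred_times"}

def minimum_necessary(record: dict, allowed_fields: set, purpose: str) -> dict:
    """Return only the allowed PHI fields and log the disclosure for audit."""
    released = {k: v for k, v in record.items() if k in allowed_fields}
    withheld = set(record) - allowed_fields
    audit_log.info(
        "PHI release at %s: purpose=%s released=%s withheld=%s",
        datetime.now(timezone.utc).isoformat(),
        purpose,
        sorted(released),
        sorted(withheld),
    )
    return released

# Example: the full chart goes in, only scheduling data comes out.
chart = {
    "patient_id": "12345",
    "name": "Jane Doe",
    "phone": "555-0100",
    "preferred_times": ["Mon AM"],
    "diagnosis": "hypertension",   # withheld from the scheduling AI
    "ssn": "000-00-0000",          # withheld from the scheduling AI
}
scheduling_view = minimum_necessary(chart, SCHEDULING_FIELDS, "appointment_scheduling")
```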
AI also introduces new cybersecurity risks: attackers may manipulate AI systems or use AI to mount more sophisticated attacks. Healthcare IT managers must understand these threats and put stronger protections in place.
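One concrete protection, sketched below under assumptions: flag service accounts (including AI integrations) whose daily PHI access volume spikes far beyond a historical baseline, so unusual activity is investigated before data leaves the organization. The account names and thresholds are illustrative, not taken from any specific product.

```python
from collections import Counter

# Illustrative baselines: typical daily PHI record accesses per service account.
BASELINE_DAILY_ACCESSES = {"ai-scheduler": 200, "ai-transcriber": 500}
ALERT_MULTIPLIER = 3  # tune to your environment

def flag_anomalies(todays_access_log: list[str]) -> list[str]:
    """Return accounts whose access count exceeds a multiple of their baseline."""
    counts = Counter(todays_access_log)
    return [
        account
        for account, n in counts.items()
        if n > ALERT_MULTIPLIER * BASELINE_DAILY_ACCESSES.get(account, 50)
    ]

log = ["ai-scheduler"] * 900 + ["ai-transcriber"] * 400
print(flag_anomalies(log))   # ['ai-scheduler'] -- investigate before data leaves
```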
Besides privacy and security, transparency about AI use matters. Healthcare organizations will need to state clearly how AI is used, including:
- What data the AI collects and how that data is processed
- How the AI produces its outputs and predictions
- Where AI is involved in patient-facing communication and care
Labeling AI-generated content helps patients and staff understand what technology is involved in their care. This transparency can also reduce legal exposure under the Federal Trade Commission (FTC) Act, which holds healthcare organizations liable for unfair or deceptive practices such as misusing personal data or making false claims about AI.
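A minimal sketch of such labeling follows, assuming a hypothetical PatientMessage type and disclosure wording; actual disclosure language should come from legal counsel.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative disclosure text, not legal language.
AI_DISCLOSURE = (
    "This message was generated with the assistance of an automated AI system. "
    "Contact our office to speak with a staff member."
)

@dataclass
class PatientMessage:
    body: str
    ai_generated: bool
    created_at: str

def label_if_ai(body: str, ai_generated: bool) -> PatientMessage:
    """Attach a plain-language disclosure to any AI-generated message."""
    if ai_generated:
        body = f"{body}\n\n[{AI_DISCLOSURE}]"
    return PatientMessage(body, ai_generated, datetime.now(timezone.utc).isoformat())

msg = label_if_ai("Your appointment is confirmed for Monday at 9 AM.", ai_generated=True)
print(msg.body)
```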
As the rules mature, documenting and auditing AI systems is likely to become standard practice. That means keeping an inventory of every AI tool, recording what each one does, and assessing the risks each poses to PHI and to clinical care.
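One lightweight way to keep such an inventory is a structured record per tool that can be exported for auditors. The sketch below is illustrative; the fields (and the sample Simbo AI entry) are assumptions about what a reviewer might want to see, not a prescribed format.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AIToolRecord:
    """One entry in an organization-wide AI inventory."""
    name: str
    vendor: str
    purpose: str
    touches_phi: bool
    clinical_impact: str            # e.g., "none", "administrative", "decision support"
    phi_fields_used: list = field(default_factory=list)
    last_risk_review: str = "never"

inventory = [
    AIToolRecord(
        name="front-desk-answering",
        vendor="Simbo AI",
        purpose="Phone automation: scheduling and reminders",
        touches_phi=True,
        clinical_impact="administrative",
        phi_fields_used=["name", "phone", "appointment_time"],
        last_risk_review="2024-01-15",
    ),
]

# Export for auditors or a compliance dashboard.
print(json.dumps([asdict(t) for t in inventory], indent=2))
```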
The National Institute of Standards and Technology (NIST) released its AI Risk Management Framework (AI RMF) in 2023. It gives healthcare organizations a structured way to handle AI risks through four core functions:
- Govern: establish policies, roles, and accountability for AI
- Map: identify each AI system’s context, purpose, and potential impacts
- Measure: assess and test AI systems for accuracy, bias, and security
- Manage: prioritize identified risks and act on them over time
NIST’s companion AI RMF Playbook offers practical guidance that aligns well with the federal principles in the Executive Order. By applying the framework, clinic managers and IT staff can monitor AI closely, lower legal risk, and prepare for new government rules.
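As a small illustration of how a clinic might track its progress against the four functions, the sketch below encodes them as a checklist. The activity descriptions paraphrase NIST’s function summaries and are not an official checklist.

```python
# One way to track AI RMF work per AI system; the activity text is a paraphrase
# of NIST's function descriptions, not official language.
RMF_FUNCTIONS = {
    "Govern": "Assign ownership, write AI policies, define escalation paths",
    "Map": "Document each system's context, data sources, and affected patients",
    "Measure": "Test accuracy, bias, and security; record metrics over time",
    "Manage": "Prioritize and remediate risks; decide to deploy, fix, or retire",
}

def rmf_status(completed: set) -> None:
    """Print which RMF functions are done and which remain open."""
    for function, activity in RMF_FUNCTIONS.items():
        mark = "done" if function in completed else "open"
        print(f"[{mark}] {function}: {activity}")

rmf_status(completed={"Govern", "Map"})
```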
One of the most immediate ways AI is changing healthcare operations is in phone automation and answering services, the area where companies like Simbo AI work. Front-desk tasks such as scheduling, appointment reminders, and patient questions depend on reliable communication, and AI can help by:
- Answering routine calls and common patient questions
- Scheduling, confirming, and rescheduling appointments
- Sending automated appointment reminders
- Handing complex or urgent calls off to staff
This improves clinic operations by letting staff focus on harder tasks instead of routine calls. It also reduces human error in capturing information and booking appointments, which helps patient satisfaction and cuts missed visits.
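To make the idea concrete, here is a heavily simplified sketch of how an answering system might triage incoming calls, with a human fallback for anything it cannot classify. The intents, keywords, and function names are hypothetical and do not represent Simbo AI’s actual system.

```python
# Hypothetical intent routing for a front-desk phone assistant; the intents
# and keywords are illustrative only.
INTENT_KEYWORDS = {
    "schedule": ["appointment", "book", "schedule", "reschedule"],
    "reminder": ["remind", "confirm"],
    "question": ["hours", "insurance", "directions"],
}

def route_call(transcript: str) -> str:
    """Route a caller to an automated flow, or escalate to staff."""
    text = transcript.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(word in text for word in keywords):
            return intent
    return "escalate_to_staff"   # human fallback keeps oversight in the loop

print(route_call("I need to book an appointment next week"))   # schedule
print(route_call("My chest hurts"))                            # escalate_to_staff
```

The unconditional human fallback is the design point: anything the system cannot confidently classify goes to staff, which supports the human-oversight expectations discussed earlier.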
But front-office AI tools must still meet the safety and transparency requirements described above. Many administrators overlook the fact that automation tools need the same AI risk reviews to avoid data leaks or mistakes with patient information. Compliance programs must cover not just clinical AI but these office systems as well.
The Executive Order’s push for safe AI use encourages clinics to adopt these tools deliberately, keeping patient data protected and staying within the rules.
The Executive Order sets the baseline, but more legislation is on the way. Pending bills such as the Artificial Intelligence Research, Innovation, and Accountability Act and proposed AI labeling acts may require clinics to report on their AI use and to follow standards like NIST’s AI RMF.
Healthcare leaders should take early steps such as:
- Building an inventory of every AI tool in use
- Conducting data mapping and risk assessments for any system that touches PHI
- Training staff on evolving AI regulations
- Updating compliance plans and consulting legal counsel early
As AI becomes part of both clinical care and administrative work, managing its risks and meeting new standards will be key to keeping patient trust, avoiding fines, and improving service quality.
The Executive Order signed in 2023 is a turning point for AI regulation in healthcare. It directs the Department of Health and Human Services to develop plans that balance new technology with patient safety, privacy, and fairness. For owners, administrators, and IT staff, using AI means following strong rules, handling data transparently, and keeping human oversight of AI outputs.
All healthcare organizations using AI, including companies like Simbo AI, must adapt their operations and compliance practices to these new federal requirements. In this shifting legal environment, acting early is the way to manage AI risks, protect patient data, and treat patients fairly across the United States.
This growing focus on AI regulation will keep shaping how AI is used in every kind of healthcare setting. Clinic leaders should track changes in AI law and tooling and adopt best practices before they become mandatory. With clear rules and transparency, healthcare organizations can use AI safely to improve patient care and office efficiency.
AI is increasingly woven into health care operations, and compliance programs must keep pace. That means monitoring evolving AI-specific standards, assessing risks, and adjusting compliance frameworks to manage the legal and operational risks AI introduces.
Executive Order No. 14110 sets guiding principles for federal oversight of AI, emphasizing safety, security, transparency, governance, and non-discrimination. It requires HHS to establish an AI Task Force within 90 days, with a strategic plan for regulating AI in health care due within a year, compelling health care entities to integrate these principles into their compliance programs.
AI interacts heavily with large amounts of data, challenging existing HIPAA privacy controls, especially the minimum necessary standard. AI tools require strong safeguards to segment access, ensure lawful processing, and prevent inappropriate disclosures, while also addressing AI-enhanced cybersecurity threats targeting PHI.
Transparency requires health care organizations to clearly understand and disclose how their AI collects data, processes it, and produces predictions. Labeling AI-generated content is also anticipated, enabling consumers and stakeholders to identify AI involvement and promoting trust and informed decision-making.
Effective AI governance includes establishing policies, evaluations, and human oversight mechanisms to control AI throughout its lifecycle. Responsibility for AI management should be clearly assigned within organizations to ensure accountability and continuous compliance with regulatory and ethical standards.
NIST’s 2023 AI Risk Management Framework offers a structured approach—Govern, Map, Measure, Manage—to help organizations identify and mitigate AI risks. The accompanying Playbook provides actionable steps aligning with federal principles, assisting health care entities in proactive legal risk management related to AI.
HIPAA restricts permissible uses and disclosures of PHI, requiring organizations to map and control AI’s data access. The Security Rule demands administrative, physical, and technical safeguards, challenging traditional controls due to AI’s complexity and emerging cybersecurity threats actively leveraging AI itself.
Section 5 of the FTC Act prohibits unfair or deceptive acts, exposing health care AI applications to liability if they misuse personal information or make misleading claims. AI-related data breaches and impermissible data training practices may trigger enforcement under Section 5 and the FTC’s data breach rules.
Pending laws like the Artificial Intelligence Research, Innovation, and Accountability Act and AI Labeling Acts will likely require transparency reports, formal compliance with frameworks like NIST RMF, and mandatory disclosures of AI-generated content, reinforcing regulatory scrutiny on AI in health care.
Organizations should inventory all AI uses, conduct data mapping and risk assessments, educate staff on evolving AI regulations, and adapt compliance plans accordingly. Early legal consultation and integrating AI governance into overall compliance infrastructure are key to managing AI-related privacy and security risks effectively.