Artificial intelligence (AI) is becoming a bigger part of healthcare in the United States, and medical practice administrators, owners, and IT managers now need to adopt it responsibly and ethically. Because AI is changing fast, new risk management frameworks help organizations keep patients safe, protect privacy, and keep operations running smoothly.
This article outlines practical ways to connect new AI risk management frameworks with what healthcare organizations already do, highlights best practices for responsible AI governance under U.S. healthcare regulations, and offers guidance on using AI to improve front-office tasks while keeping that use ethical.
The U.S. National Institute of Standards and Technology (NIST) created the AI Risk Management Framework (AI RMF), a structured, voluntary approach to handling AI risks. First released on January 26, 2023, and supplemented with a generative AI profile in July 2024, the AI RMF aims to build trust in AI by focusing on safety, privacy, fairness, and clear communication.
The framework helps healthcare organizations identify, assess, and manage AI risks through four core functions: Govern, Map, Measure, and Manage. It also asks organizations to create custom profiles based on their own operations, legal obligations, and ethical needs.
Besides NIST, international standards such as ISO/IEC 42001 (published in December 2023) provide a similar framework, embedding ethical AI use in a formal management system. The standard stresses leadership accountability, operational controls, ongoing risk assessment, and continuous improvement throughout the AI lifecycle.
ISO/IEC 42001 has seen little adoption in U.S. healthcare so far, but its method of making AI governance part of the organization's culture could become important as rules develop. Both NIST and ISO aim to align AI practices with new laws, such as the European Union's AI Act, and with U.S. privacy rules such as HIPAA.
Even with these frameworks, adapting AI governance to what organizations already do comes with challenges.
Healthcare organizations already manage risk and compliance through HIPAA, cybersecurity programs, and quality checks. These can serve as the base for adding AI-specific steps.
For example, organizations should document AI systems the same way they document medical devices or health records, using model cards and audit logs as suggested by ISO/IEC 42001.
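As a minimal sketch of what that documentation might look like, the snippet below represents a model card as a small Python dataclass. The field names are common model-card elements chosen for illustration; they are assumptions, not a schema prescribed by ISO/IEC 42001.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelCard:
    """Lightweight model card for inventorying an AI system (illustrative fields)."""
    system_name: str
    vendor: str
    intended_use: str
    data_sources: list[str]
    known_limitations: list[str]
    last_reviewed: date
    phi_exposure: bool  # does the system touch protected health information?

# Hypothetical entry for a front-office scheduling system
card = ModelCard(
    system_name="FrontDeskScheduler",
    vendor="ExampleVendor",
    intended_use="Automated appointment scheduling via phone",
    data_sources=["practice management system", "call transcripts"],
    known_limitations=["accuracy drops for non-English callers"],
    last_reviewed=date(2024, 9, 1),
    phi_exposure=True,
)
print(f"{card.system_name}: PHI exposure={card.phi_exposure}, reviewed {card.last_reviewed}")
```

Keeping cards like this in a shared inventory gives compliance staff the same audit trail they expect from medical device records.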
Yuval Abadi, who writes about ISO/IEC 42001, suggests a governance team made up of members from IT, legal, compliance, operations, and clinical areas.
A good AI governance team enforces policies consistently and reacts faster to new AI risks: the legal team makes sure AI follows privacy laws, for example, while IT handles AI security controls.
Good AI governance includes concrete actions for transparency, fairness, and accountability.
These steps should be written into policy, build trust with staff and patients, and include ongoing checks and adjustments.
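As one illustration of an ongoing fairness check, the sketch below computes a simple demographic parity gap over a hypothetical log of AI scheduling decisions. The column names and the 10% tolerance are illustrative assumptions; real thresholds are a policy decision, not part of any cited framework.

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Return the largest difference in positive-outcome rates across groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical audit data: 1 = appointment request approved by the AI scheduler
audit = pd.DataFrame({
    "patient_group": ["A", "A", "B", "B", "B", "C", "C"],
    "approved":      [1,   1,   0,   1,   0,   1,   0],
})

gap = demographic_parity_gap(audit, "patient_group", "approved")
if gap > 0.10:  # illustrative tolerance
    print(f"Fairness review needed: approval-rate gap of {gap:.0%} across groups")
```

Running a check like this on a schedule, and logging the results, turns a policy statement about fairness into something auditors can verify.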
The NIST AI RMF Playbook gives a practical plan for customizing AI risk management, with suggested actions organized under the framework's four functions: Govern, Map, Measure, and Manage.
A well-used AI RMF playbook builds trust and shows a commitment to ethical AI use, which is especially important for healthcare providers.
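To make customization concrete, a profile can start as a simple structured document that maps each AI RMF function to organization-specific actions. The sketch below is a hypothetical profile for a medical practice; the entries are illustrative assumptions, not NIST requirements.

```python
# Minimal, hypothetical AI RMF profile for a medical practice.
# Function names follow the NIST AI RMF; the actions are illustrative assumptions.
ai_rmf_profile = {
    "govern": [
        "Cross-functional AI governance team meets monthly",
        "AI vendor contracts require bias and privacy documentation",
    ],
    "map": [
        "Inventory every AI system touching patient data (scheduling, phone, intake)",
        "Classify each system's risk level against HIPAA exposure",
    ],
    "measure": [
        "Quarterly fairness and accuracy audits on front-office AI outputs",
        "Track escalation rate from AI phone agent to human staff",
    ],
    "manage": [
        "Documented rollback plan for each AI system",
        "Incident response playbook covering AI-specific failures",
    ],
}

for function, actions in ai_rmf_profile.items():
    print(f"{function.upper()}: {len(actions)} planned actions")
```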
Healthcare faces a growing number of rules about AI. The FDA regulates medical AI devices, but administrative AI tools must also follow privacy and anti-discrimination laws.
Aligning AI governance frameworks such as the NIST AI RMF and ISO/IEC 42001 with these rules helps organizations stay ahead on compliance and avoid the fines already seen elsewhere.
For administrators and IT managers, deploying AI in front-office work means balancing efficiency with risk control. AI is now used for calls, scheduling, patient questions, and simple data collection.
Key governance points include privacy, transparency, fairness, and reliability.
Some companies, such as Simbo AI, show how to combine these principles with AI phone automation to reduce administrative work without ignoring ethics.
Administrators considering AI should work closely with vendors, requesting detailed reports, proof of bias controls, and monitoring systems like those described in the NIST AI RMF.
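One practical monitoring approach is to record every AI-handled call as a structured audit entry that staff can review. The sketch below shows a minimal version; the field names and the escalation rule are assumptions for illustration, not any vendor's actual interface.

```python
import json
from datetime import datetime, timezone

def log_ai_call(call_id: str, intent: str, confidence: float, escalated: bool) -> dict:
    """Build a structured audit record for one AI-handled phone call."""
    record = {
        "call_id": call_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "detected_intent": intent,        # e.g. "schedule_appointment"
        "model_confidence": confidence,
        "escalated_to_human": escalated,
    }
    # In practice this would go to a secure, access-controlled store;
    # printing stands in for that here.
    print(json.dumps(record))
    return record

# Illustrative policy: low-confidence calls must be escalated to staff.
log_ai_call("call-0042", "schedule_appointment", confidence=0.58, escalated=True)
```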
A big problem in healthcare AI governance is a lack of expertise inside organizations. Ongoing training across teams is needed to raise awareness of AI risks, ethics, and rules.
Working with outside experts or specialized training programs helps fill knowledge gaps. Leaders must support these efforts and provide resources to encourage responsible AI management.
Automated tools and platforms that support AI governance are becoming more important for medical practices. Systems such as Lasso, for example, support ISO/IEC 42001 compliance and reduce manual work, which is especially useful for healthcare groups with limited resources.
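As a small example of the manual work such tooling replaces, the sketch below scans a hypothetical AI system inventory and flags entries missing required governance artifacts. The inventory format and required fields are assumptions, not features of any particular product.

```python
# Hypothetical inventory of AI systems and their governance artifacts.
inventory = [
    {"name": "AI phone agent",    "model_card": True,  "bias_audit": True,  "last_review": "2024-08-15"},
    {"name": "Intake chatbot",    "model_card": True,  "bias_audit": False, "last_review": "2024-03-02"},
    {"name": "No-show predictor", "model_card": False, "bias_audit": False, "last_review": None},
]

REQUIRED = ("model_card", "bias_audit", "last_review")

def missing_artifacts(system: dict) -> list[str]:
    """List required governance artifacts this system lacks."""
    return [key for key in REQUIRED if not system.get(key)]

for system in inventory:
    gaps = missing_artifacts(system)
    if gaps:
        print(f"{system['name']}: missing {', '.join(gaps)}")
```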
Aligning new AI risk frameworks with existing practices helps U.S. healthcare providers use AI responsibly. By building on current compliance and risk programs, creating cross-departmental teams, applying fairness principles, and customizing frameworks such as the NIST AI RMF and ISO/IEC 42001, organizations can manage AI risks well.
Using AI for front-office automation requires careful attention to privacy, openness, fairness, and dependability. Investing in training and governance tools lets medical groups keep up with changing rules while improving operations.
With planned strategies and constant oversight, healthcare administrators and IT managers can make sure AI helps patients and staff while meeting legal and ethical standards.
The AI RMF is designed to help individuals, organizations, and society manage risks related to AI. It promotes trustworthiness in the design, development, use, and evaluation of AI products, services, and systems through a voluntary framework.
It was created through an open, transparent, and collaborative process involving public comments, workshops, and a Request for Information, ensuring a consensus-driven approach with input from both private and public sectors.
The AI RMF was initially released on January 26, 2023.
NIST published a companion AI RMF Playbook, an AI RMF Roadmap, a Crosswalk, and Perspectives to facilitate understanding and implementation of the framework.
NIST's Trustworthy and Responsible AI Resource Center, launched on March 30, 2023, aids in implementing the AI RMF and promotes international alignment with the framework, offering use cases and guidance.
On July 26, 2024, NIST released NIST-AI-600-1, a Generative AI Profile that identifies unique risks of generative AI and proposes targeted risk management actions.
The AI RMF is not mandatory; it is intended for voluntary use to improve AI risk management and trustworthiness.
It builds on and supports existing AI risk management efforts by providing an aligned, standardized framework to incorporate trustworthiness considerations.
NIST provides a public commenting process on draft versions and Requests for Information to gather input from various stakeholders during framework development.
The goal is to cultivate trust in AI technologies, promote innovation, and mitigate risks associated with AI deployment to protect individuals, organizations, and society.