Healthcare providers are increasingly using AI to help with tasks like patient scheduling, billing, virtual health assistants, and clinical decision support. These tools can speed up work and benefit patients, but they also bring risks such as data privacy issues, algorithmic bias, system errors, and security vulnerabilities.
The NIST AI Risk Management Framework (AI RMF), released by the National Institute of Standards and Technology in January 2023, helps U.S. organizations manage AI risks. The framework offers voluntary, standardized steps to improve trust in AI products through all stages, from design and development to deployment and evaluation. It was developed with input from both public and private groups and is meant to evolve as risk management methods improve.
In July 2024, NIST also released a companion document, the Generative AI Profile (NIST-AI-600-1). This document guides the management of risks specific to generative AI, such as fabricated information and inappropriate content, and is especially useful for healthcare providers who deploy advanced AI assistants or chatbots.
For healthcare managers, following NIST guidance means applying principles like transparency, accountability, reliability, and fairness. It also means building ongoing risk checks into daily work and monitoring AI outcomes to catch errors that could harm patient safety.
Security is a major concern in healthcare AI because of sensitive patient data and rules like HIPAA. The HITRUST AI Security Assessment and Certification, launched in November 2024, gives healthcare groups a clear way to prove their AI security is strong.
HITRUST's framework draws on more than 50 authoritative sources, including NIST, ISO standards, the 2023 U.S. Executive Order on AI, and Department of Homeland Security guidance. This helps healthcare organizations meet many regulations at once while handling AI risks like adversarial attacks, data poisoning, and software vulnerabilities.
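To make that multi-framework idea concrete, here is a minimal sketch of how an organization might record one internal control set against several frameworks at once, so a single implemented control can be reported under many of them. The control names, risk tags, and framework citations are illustrative assumptions for this sketch, not an official HITRUST or NIST mapping.

```python
# Illustrative control crosswalk: one internal control maps to references in
# several frameworks at once. All names and citations below are assumptions
# for this sketch, not an official mapping.

CONTROL_CROSSWALK = {
    "ai-input-validation": {
        "description": "Validate and sanitize data fed to AI models",
        "mitigates": ["data poisoning", "adversarial inputs"],
        "references": {
            "NIST AI RMF": "MAP and MEASURE functions",
            "ISO/IEC 42001": "AI risk treatment controls",
            "HITRUST CSF": "AI security assessment domain",
        },
    },
    "model-access-control": {
        "description": "Restrict who can query or update production models",
        "mitigates": ["unauthorized model changes", "data exposure"],
        "references": {
            "NIST AI RMF": "GOVERN function",
            "ISO/IEC 27001": "Annex A access controls",
            "HIPAA": "Security Rule access safeguards",
        },
    },
}

def frameworks_covered(control_id: str) -> list[str]:
    """List every framework a single implemented control maps to."""
    return sorted(CONTROL_CROSSWALK[control_id]["references"])

for control in CONTROL_CROSSWALK:
    print(control, "->", ", ".join(frameworks_covered(control)))
```

Keeping the mapping in one place means that when auditors ask about a given framework, the evidence for each control only has to be gathered once.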
One key benefit of HITRUST certification is its focus on controls that are actually put into practice: independent third parties assess the systems, and a centralized review process validates the results. Over a recent two-year period, HITRUST-certified environments had a breach rate of only 0.59%, which suggests the model works well.
Healthcare providers with this certification can lower cyber insurance costs and gain trust from patients, regulators, and technology partners. Stephen Dufour, Chief Security & Privacy Officer at Embold Health, notes that HITRUST's prescriptive requirements and proven assurance process are important for securing AI in healthcare and earning customer trust.
HITRUST's assessment platform, MyCSF, simplifies assessments by letting organizations reuse validated vendor controls, which cuts duplicate work when several AI services are in use. This is especially helpful for IT managers in larger hospital or clinic systems that juggle multiple vendors.
ISO/IEC 42001:2023 is an international standard for Artificial Intelligence Management Systems (AIMS). It offers healthcare providers and their technology partners a structured framework focused on ethical AI, risk control, data safety, and transparent operations.
ISO 42001 follows the Plan-Do-Check-Act (PDCA) cycle, which drives continual improvement of AI systems: healthcare managers set governance policies, assign accountability, audit AI behavior regularly, and update controls as risks change.
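As a rough illustration of how that cycle might be modeled for an AI management system, here is a minimal Python sketch. The class, threshold, and corrective action are hypothetical placeholders; ISO 42001 defines management-system requirements, not code.

```python
# Minimal sketch of a PDCA-style review loop for an AI management system.
# All names and numbers are hypothetical placeholders for illustration.

from dataclasses import dataclass, field

@dataclass
class AimsCycle:
    policies: dict = field(default_factory=dict)
    findings: list = field(default_factory=list)

    def plan(self) -> None:
        # Plan: set governance policies and risk-acceptance criteria.
        self.policies["max_error_rate"] = 0.05  # hypothetical tolerance

    def do(self) -> None:
        # Do: operate the AI system under the current policies.
        pass

    def check(self, observed_error_rate: float) -> None:
        # Check: audit observed AI behavior against the planned criteria.
        if observed_error_rate > self.policies["max_error_rate"]:
            self.findings.append(
                f"error rate {observed_error_rate:.1%} exceeds tolerance"
            )

    def act(self) -> None:
        # Act: apply corrective action where the audit found gaps, then re-plan.
        if self.findings:
            self.policies["max_error_rate"] *= 0.8  # example: tighten the policy
            self.findings.clear()

cycle = AimsCycle()
cycle.plan()
cycle.do()
cycle.check(observed_error_rate=0.07)  # e.g., a quarterly audit result
cycle.act()
print(cycle.policies)  # tolerance tightened after the corrective action
```

The point of the loop is that audit findings feed back into policy, so governance keeps pace with how the AI actually behaves.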
The standard also aligns well with other frameworks common in healthcare, such as ISO/IEC 27001 (information security) and ISO/IEC 27701 (privacy). This lets organizations fold AI risk management into existing security and privacy programs without added confusion, and it simplifies reporting to regulators.
KPMG Switzerland, a consultancy in AI governance, points out that ISO 42001 also addresses risks from third-party AI vendors. These are common in healthcare, where outside AI services support office work, diagnostics, or patient communication. The standard calls for vendor assessments, contractual protections, and audits to make sure external AI meets healthcare's security and ethics requirements.
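A simple way to picture those vendor requirements is a structured assessment record that flags gaps before an outside AI service goes live. The sketch below is hypothetical: the field names and checks are assumptions for illustration, not ISO 42001 terminology.

```python
# Hypothetical vendor-assessment record for a third-party AI service.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIVendorAssessment:
    vendor: str
    service: str                  # e.g., "diagnostic imaging triage"
    handles_phi: bool             # does the service touch protected health info?
    baa_signed: bool              # HIPAA business associate agreement in place?
    contract_clauses: list[str]   # e.g., breach notification, audit rights
    last_audit: date | None

    def gaps(self) -> list[str]:
        """Flag missing protections before the vendor's AI goes live."""
        issues = []
        if self.handles_phi and not self.baa_signed:
            issues.append("PHI access without a signed BAA")
        if "audit rights" not in self.contract_clauses:
            issues.append("no contractual audit rights")
        if self.last_audit is None:
            issues.append("vendor has never been audited")
        return issues

review = AIVendorAssessment(
    vendor="ExampleAI Inc.",  # hypothetical vendor
    service="patient call triage",
    handles_phi=True,
    baa_signed=False,
    contract_clauses=["breach notification"],
    last_audit=None,
)
print(review.gaps())
```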
For U.S. medical practices, adopting ISO 42001 aligns well with emerging rules like the EU AI Act and with NIST's voluntary frameworks, helping healthcare organizations meet requirements at home and abroad while using AI responsibly.
One clear use of AI in healthcare is automating everyday office tasks like scheduling appointments, answering phones, patient triage, and billing questions. Companies like Simbo AI offer AI-powered call automation, helping healthcare providers work more efficiently.
Automating calls reduces wait times and improves patient access without losing the personal touch when complex problems come up. For medical admin and IT managers, these systems cut costs, free staff for more important jobs, and offer 24/7 service with AI assistants.
But adding AI to workflows requires careful risk management. Frameworks like NIST's AI RMF and HITRUST certification help health organizations assess and control risks related to data security, patient privacy, voice recognition accuracy, and error handling in automated answering.
It is also important to make clear when AI is handling a communication and when a human answers; this helps patients understand and trust digital tools. Healthcare privacy laws like HIPAA must also be followed closely.
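The sketch below illustrates both points in minimal form: an explicit AI disclosure message and a rule that hands low-confidence speech recognition over to a human instead of guessing. The threshold, phrases, and routing labels are assumptions for illustration, not Simbo AI's implementation.

```python
# Hypothetical call-routing rules: disclose the AI up front, and never let the
# assistant guess when speech recognition confidence is low.

AI_DISCLOSURE = (
    "You are speaking with an automated assistant. "
    "Say 'representative' at any time to reach a staff member."
)
CONFIDENCE_THRESHOLD = 0.80  # assumed cutoff; tune against real transcripts

def handle_utterance(transcript: str, confidence: float) -> str:
    """Route one recognized utterance: automate it, or hand off to a human."""
    text = transcript.lower()
    if confidence < CONFIDENCE_THRESHOLD or "representative" in text:
        return "TRANSFER_TO_HUMAN"  # escalate rather than risk a wrong answer
    if "appointment" in text:
        return "START_SCHEDULING_FLOW"
    return "ASK_CLARIFYING_QUESTION"

print(AI_DISCLOSURE)
print(handle_utterance("I need to book an appointment", confidence=0.93))
print(handle_utterance("uh can I um", confidence=0.41))
```

The design choice worth noting is the default: when the system is unsure what the patient said, the safe failure mode is a human, not a guess.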
Successful use means monitoring AI performance continuously. Dashboards can track system uptime, call resolution rates, and patient satisfaction tied to AI interactions. This data helps healthcare managers refine AI use carefully and demonstrate that they follow the rules.
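As one way to compute such dashboard metrics, the sketch below derives uptime, AI resolution rate, and average satisfaction from a hypothetical call log; the field names and the 1-5 satisfaction scale are assumptions for this sketch.

```python
# Hypothetical call log and a ~30-day monitoring window.
calls = [
    {"resolved_by_ai": True,  "satisfaction": 5},
    {"resolved_by_ai": True,  "satisfaction": 4},
    {"resolved_by_ai": False, "satisfaction": 3},  # escalated to staff
]
uptime_seconds, window_seconds = 2_591_000, 2_592_000  # ~30 days

uptime_pct = 100 * uptime_seconds / window_seconds
ai_resolution_rate = 100 * sum(c["resolved_by_ai"] for c in calls) / len(calls)
avg_satisfaction = sum(c["satisfaction"] for c in calls) / len(calls)

print(f"Uptime: {uptime_pct:.2f}%")                           # 99.96%
print(f"Calls resolved by AI: {ai_resolution_rate:.0f}%")     # 67%
print(f"Average satisfaction (1-5): {avg_satisfaction:.1f}")  # 4.0
```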
Combining AI automation with strong governance prepares health groups to use AI well, improve patient care, and navigate complex U.S. healthcare rules.
Healthcare providers in the U.S. must not only use AI but also govern it properly to protect patient data, ensure fairness, and follow the rules. Building trust rests on the same principles that run through these frameworks: transparency about how AI is used, clear accountability for outcomes, fairness in algorithmic decisions, and continuous monitoring of AI behavior.
IBM's Institute for Business Value finds that about 80% of business leaders cite AI explainability, ethics, bias, or trust as major challenges for generative AI. Adopting recognized frameworks addresses these concerns and makes AI use smoother and more helpful in healthcare.
Healthcare managers, practice owners, and IT leaders who want to align AI risk management with recognized standards can take several practical steps: inventory the AI systems already in use; map each system's risks against the NIST AI RMF; pursue HITRUST certification where sensitive patient data is involved; adopt ISO 42001's PDCA cycle for ongoing governance; and monitor AI outcomes continuously.
Following these steps helps U.S. healthcare groups use AI safely and gain stronger trust from patients and workers.
Using AI well in healthcare requires careful planning, adherence to trusted security and governance standards, and open communication with everyone involved. U.S. medical practices deploying AI systems that affect patients and their data should rely on standards like the NIST AI RMF, HITRUST certification, and ISO 42001 to make sure AI solutions are safe, ethical, and genuinely helpful to patient care and clinic operations.
Frequently Asked Questions About the NIST AI RMF

What is the purpose of the AI RMF?
The AI RMF is designed to help individuals, organizations, and society manage risks related to AI. It promotes trustworthiness in the design, development, use, and evaluation of AI products, services, and systems through a voluntary framework.

How was the framework developed?
It was created through an open, transparent, and collaborative process involving public comments, workshops, and a Request for Information, ensuring a consensus-driven approach with input from both private and public sectors.

When was the AI RMF released?
The AI RMF was initially released on January 26, 2023.

What supporting resources has NIST published?
NIST published a companion AI RMF Playbook, an AI RMF Roadmap, a Crosswalk, and Perspectives to facilitate understanding and implementation of the framework.

What is the Trustworthy and Responsible AI Resource Center?
Launched on March 30, 2023, the Center aids in implementing the AI RMF and promotes international alignment with the framework, offering use cases and guidance.

What is the Generative AI Profile?
On July 26, 2024, NIST released NIST-AI-600-1, a Generative AI Profile that identifies unique risks of generative AI and proposes targeted risk management actions.

Is use of the AI RMF mandatory?
No, the AI RMF is intended for voluntary use to improve AI risk management and trustworthiness.

How does the AI RMF relate to existing risk management efforts?
It builds on and supports existing AI risk management efforts by providing an aligned, standardized framework to incorporate trustworthiness considerations.

How can stakeholders provide input on the framework?
NIST provides a public commenting process on draft versions and Requests for Information to gather input from various stakeholders during framework development.

What is the overall goal of the framework?
The goal is to cultivate trust in AI technologies, promote innovation, and mitigate risks associated with AI deployment to protect individuals, organizations, and society.