AI governance refers to the policies, processes, and oversight structures that control how AI is developed, deployed, and monitored within healthcare organizations. Its aim is to ensure AI systems operate safely, adhere to ethical standards, and protect patient rights while meeting legal requirements.
As more healthcare providers adopt AI for clinical and administrative tasks, governance helps reduce risks such as bias, privacy violations, and unclear accountability. Researchers from Duke-Margolis note that a good governance system “makes clear what tools are used, standardizes risk checks, and keeps records” in health systems. Governance matters in healthcare because AI-driven decisions can directly affect patient outcomes and carry legal consequences.
The American Medical Association (AMA) found that about two-thirds of physicians used AI in 2024, a 78% increase over 2023. AI supports diagnosis, treatment planning, patient communication, and administrative work such as documentation. This rapid adoption is outpacing many health systems’ capacity to manage AI safely, raising concerns about safety and oversight.
Margaret Lozovatsky, MD, of the AMA, argues that clear governance is needed now: “The technology moves fast, much faster than we can set up rules. Setting clear governance today is key to avoid problems later.” Good governance can prevent health systems from prematurely adopting AI tools that may be biased or give harmful advice.
Experts have identified the core components of effective AI governance in healthcare. These components help health systems oversee AI tools and keep teams accountable.
Several large US health systems have established AI governance programs to use AI safely in patient care:
These organizations take different approaches to AI governance, but all emphasize accountability, transparency, and ongoing monitoring. Their work can serve as a guide for smaller or less-resourced practices.
AI governance in US healthcare must navigate a complex patchwork of laws. For example, Texas’ TRAIGA (Texas Responsible Artificial Intelligence Governance Act) requires that AI use in healthcare be transparent and fair, prohibits the use of biometric data without consent, and bans harmful AI outcomes.
At the federal level, agencies such as CMS and HHS oversee aspects of AI safety in hospitals, and there are ongoing discussions about creating a national AI registry to increase transparency.
The AMA advocates for comprehensive AI governance covering data privacy, cybersecurity, physician accountability, and rules for generative AI, in order to reduce legal risk and protect patients.
AI has shown clear benefits in automating office work such as phone calls, scheduling, and patient outreach. Office staff and IT managers know that weak administrative processes can hurt care quality and raise costs.
Simbo AI automates front-office phone services. Its technology uses natural language processing to answer routine patient calls, triage inquiries, book appointments, and route calls to the right destination.
Using AI like this helps healthcare practices by:
Front-office AI needs strong governance just as clinical AI does, to protect operational reliability and keep data safe. IT teams must carefully vet vendors’ bias and privacy policies, and set AI automation rules that comply with healthcare regulations.
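Simbo AI’s actual implementation is proprietary, but the call-triage idea described above can be illustrated with a minimal, hypothetical sketch. The Python example below uses simple keyword matching to classify a transcribed caller request into a destination queue; all names, categories, and keywords here are assumptions for illustration, and a production system would use trained NLP models rather than keyword lookup:

```python
# Hypothetical sketch of intent-based call routing for a front-office phone line.
# Categories and keywords are illustrative assumptions, not Simbo AI's real design.

INTENT_KEYWORDS = {
    "scheduling": {"appointment", "book", "reschedule", "cancel"},
    "billing": {"bill", "invoice", "payment", "insurance"},
    "clinical": {"pain", "symptom", "medication", "prescription"},
}

def route_call(transcript: str) -> str:
    """Classify a caller's transcribed request and choose a destination queue."""
    words = set(transcript.lower().split())
    # Score each intent by how many of its keywords appear in the transcript.
    scores = {intent: len(words & kws) for intent, kws in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    # Unmatched requests go to a human operator instead of being guessed.
    return best if scores[best] > 0 else "human_operator"
```

Note the fallback: anything the system cannot confidently classify is escalated to a human operator rather than handled automatically, which reflects the kind of safeguard governance policies typically require of automation tools.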
Even though AI has many benefits, many US health systems—especially smaller ones—face problems when setting up AI governance:
Experts suggest the following to handle these challenges:
The AMA STEPS Forward® “Governance for Augmented Intelligence” toolkit offers practical steps for establishing governance tailored to an organization’s size and technical capabilities.
The central goal of AI governance in healthcare is to keep patients safe and build trust. Poorly governed AI can lead to misdiagnoses, data breaches, and inequitable care. Strong governance ensures AI is transparent, explainable, and continuously monitored.
Multidisciplinary teams bring diverse perspectives and help uphold ethical standards when deploying AI. Transparency with patients, through consent processes and clear information about AI use, helps address their concerns.
Large health systems working with Duke-Margolis report that well-organized, well-maintained governance leads to smoother operations and greater patient confidence.
For healthcare administrators in the US, understanding and applying AI governance is essential. As AI use grows, managers must:
Investing in clear AI governance helps healthcare organizations adopt AI safely while protecting patients, complying with regulations, and improving care quality and efficiency.
The purpose of AI governance in health systems is to ensure safety, minimize risk, standardize risk assessment and mitigation processes, and allow for documentation of AI tool usage within the organization.
A governance framework for healthcare AI typically includes clear principles and goals, predictability regarding information needs, transparency on processes, identification of participants involved, and established accountability and documentation.
The research on health system AI governance was conducted by Duke-Margolis researchers.
The research project had three phases: 1) Health System Working Group, 2) Expert Workshop, and 3) White Paper compilation.
The Health System Working Group focused on sharing learnings among health systems that have implemented their own AI governance processes and understanding the considerations involved.
The aim of the Expert Workshop was to dive deeper into the impact of health system AI governance on various stakeholders.
The research team compiled a white paper that explores the commonalities and differences in AI governance implementation among health systems and offers considerations for those starting the process.
Documentation is important in AI governance as it helps establish traceability of decisions, processes, and evaluations related to AI tool usage and governance in health systems.
Participants in AI governance frameworks are responsible for formulating principles, assessing risks, ensuring accountability, and contributing to the documentation of processes within the governance structure.
Health systems can benefit from implementing AI governance by enhancing operational efficiency, improving patient safety, ensuring compliance with regulations, and fostering trust among stakeholders in the AI tools utilized.