Artificial Intelligence (AI) is becoming an important part of healthcare in the United States. It helps with many tasks, like analyzing medical images quickly and automating routine work in medical offices. But AI also brings challenges that need careful handling, especially when it affects patient care and data privacy. To guide the responsible use of AI in healthcare, experts and government groups created a set of principles called FAVES. This framework makes sure AI systems produce fair, appropriate, valid, effective, and safe outcomes for patients and healthcare workers.
This article explains the FAVES principles and why they matter in healthcare. It is written for medical practice administrators, owners, and IT managers in the U.S., helping them understand their responsibilities when adopting AI tools. It also shows how AI can automate work to ease staff burden, improve the patient experience, and uphold high ethical and legal standards.
The FAVES acronym stands for:
- Fair
- Appropriate
- Valid
- Effective
- Safe
These principles were officially introduced as part of the federal government’s efforts to promote ethical AI through President Biden’s Executive Order 14110, signed in October 2023. The order sets a clear federal plan for over 50 agencies, including the Department of Health and Human Services (HHS), the Food and Drug Administration (FDA), and the Centers for Medicare and Medicaid Services (CMS), to make sure AI is developed and used responsibly in healthcare.
Fairness means AI tools must treat all patients equally no matter their race, gender, ethnicity, or income. Many AI systems learn from past healthcare data, which may include biases. For example, if AI is mostly trained on data from one group, it might not work well for others.
CMS watches for bias in AI decision tools to stop unfair treatment in things like medical decisions or treatment advice. Making AI fair helps protect patients and healthcare providers by making care more even for different groups.
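To make this kind of bias check concrete, here is a minimal sketch in Python of a subgroup audit: it compares a model's true-positive rate across patient groups and flags any group that lags the best-performing one. The record fields, metric choice, and threshold are illustrative assumptions, not CMS methodology.

```python
from collections import defaultdict

def subgroup_tpr(records):
    """Per-group true-positive rate for a binary screening model.

    Each record is a dict: {'group': str, 'label': 0/1, 'prediction': 0/1}.
    (Field names are hypothetical.)
    """
    positives = defaultdict(int)  # actual positives seen per group
    hits = defaultdict(int)       # positives the model correctly flagged
    for r in records:
        if r["label"] == 1:
            positives[r["group"]] += 1
            hits[r["group"]] += r["prediction"]
    return {g: hits[g] / n for g, n in positives.items()}

def flag_disparities(rates, tolerance=0.10):
    """Flag groups whose true-positive rate trails the best group by more
    than `tolerance` (an equalized-opportunity-style check)."""
    if not rates:
        return {}
    best = max(rates.values())
    return {g: r for g, r in rates.items() if best - r > tolerance}
```

A gap flagged by a check like this would prompt a review of the training data, which is exactly what the fairness principle asks for.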
Healthcare groups like Cedars-Sinai join federal projects to promote fairness. Their Chief Information Officer, Craig Kwiatkowski, says fairness is key to keeping patient trust and good care for all groups.
Appropriateness means AI tools should be used in the right situations and match the needs of a healthcare place. An AI tool made for cancer diagnosis should not be used for other diseases or in places where it won’t give good results.
The White House-led group of nearly 40 health systems, including Cedars-Sinai and OSF HealthCare, stresses the need for appropriateness. These groups make sure AI is developed and used to solve clear clinical or administrative problems with knowledge of the patients served.
For example, AI used to check medical images for early cancer detection fits well because it matches the tool's purpose and helps doctors care for patients. But using AI in the wrong situations can lead to misdiagnoses or poor care, violating this principle.
Validity means AI systems must give correct and consistent results based on good data. They need to be tested well on different datasets to prove they work in real healthcare situations.
The FDA has authorized over 690 AI-enabled medical devices for diagnosis and treatment. This shows valid AI tools are becoming more accepted and regulated. These devices go through strict tests to ensure their results can be trusted.
Healthcare groups must keep checking their AI tools. Cedars-Sinai and the U.S. Department of Veterans Affairs have systems to watch AI performance all the time, making sure the models keep working well without problems.
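As a rough illustration of what watching AI performance all the time can mean, the Python sketch below tracks a rolling window of prediction outcomes and raises a flag when accuracy drifts below a validated baseline. The class name, window size, and tolerance are hypothetical assumptions; production monitoring at these institutions is certainly more sophisticated.

```python
from collections import deque

class ModelMonitor:
    """Sketch of continuous performance monitoring: keep a rolling window
    of prediction outcomes and flag drift below a validated baseline.
    (Names and thresholds are illustrative assumptions.)"""

    def __init__(self, baseline_accuracy, window=500, tolerance=0.05):
        self.baseline = baseline_accuracy      # accuracy proven at validation time
        self.tolerance = tolerance             # allowed drop before alerting
        self.outcomes = deque(maxlen=window)   # 1 = correct, 0 = incorrect

    def record(self, prediction, actual):
        self.outcomes.append(1 if prediction == actual else 0)

    def check(self):
        if not self.outcomes:
            return {"rolling_accuracy": None, "degraded": False}
        current = sum(self.outcomes) / len(self.outcomes)
        return {"rolling_accuracy": current,
                "degraded": current < self.baseline - self.tolerance}
```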
Effectiveness measures if AI tools really improve patient results, work efficiency, or patient experience. AI must prove itself outside of labs—in everyday healthcare.
Studies show AI raised primary care capacity at Cedars-Sinai by 11%, the equivalent of adding three new clinics, and enabled over 6,900 virtual visits. These results show that effective AI improves access and reduces the workload on healthcare workers.
AI also helps with tasks like paperwork, appointment booking, and billing, lowering stress for clinicians—a big issue in healthcare. The Biden-Harris Administration and others focus on effectiveness by supporting AI solutions that bring clear value.
Safety is critical when using AI in healthcare. AI tools must not cause risks to patients or providers. This means protecting health data under HIPAA, making sure AI doesn’t give harmful advice, and including human checks.
HHS leads efforts to set safety standards that match the FAVES rules. It enforces laws against discrimination and supports ways to find and reduce AI risks.
For example, SimboConnect’s AI Phone Agent, used for front-office phone work, encrypts calls end-to-end. This keeps patient conversations private while automating routine office requests such as medical record inquiries.
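As an illustration of the encryption principle (not SimboConnect's actual implementation, which is not public), the Python sketch below uses the widely available cryptography library to encrypt and decrypt a transcript snippet with a symmetric key.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Symmetric encryption of one transcript snippet. True end-to-end call
# encryption also involves key exchange and transport security; this only
# illustrates the core idea of protecting data at rest and in transit.
key = Fernet.generate_key()   # in production, keys live in a KMS, never in code
cipher = Fernet(key)

note = b"Patient requests a copy of their medical records."
token = cipher.encrypt(note)            # ciphertext safe to store or transmit
assert cipher.decrypt(token) == note    # only key holders can read it
```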
AI and automation are changing healthcare workflows. AI can take over many administrative tasks that use up clinicians’ time, like patient check-ins, insurance pre-authorizations, and follow-up calls.
There is a huge amount of paperwork in U.S. healthcare. For each patient visit, staff fill out more than a dozen forms. AI can cut this time by extracting patient information from forms, populating electronic health records, and verifying that billing codes are correct. This lowers errors and speeds up the process, as the sketch below illustrates.
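A simplified Python sketch of this kind of form-to-EHR mapping follows. The field names and the shape-only ICD-10 check are assumptions for illustration; real systems validate codes against the published code set and far richer business rules.

```python
import re

# Simplified ICD-10-CM shape check: a letter, two characters, and an
# optional dotted suffix. Real systems validate against the published code set.
ICD10_SHAPE = re.compile(r"^[A-TV-Z][0-9][0-9A-Z](\.[0-9A-Z]{1,4})?$")

def intake_to_ehr_fields(form):
    """Map a parsed intake form onto EHR fields, flagging anything that fails
    a basic check rather than guessing. (Field names are hypothetical.)"""
    errors = []
    code = form.get("diagnosis_code", "").strip().upper()
    if not ICD10_SHAPE.match(code):
        errors.append(f"diagnosis_code {code!r} does not look like ICD-10")
    record = {
        "patient_name": form.get("name", "").strip(),
        "date_of_birth": form.get("dob", "").strip(),
        "diagnosis_code": code,
    }
    return record, errors
```

Flagging a malformed entry for a human to review, rather than silently accepting it, is what keeps this kind of automation aligned with the safety principle.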
The company Simbo AI works in this area. Their AI phone system helps medical offices handle calls, appointments, and record requests faster. By using AI to answer calls right away, Simbo cuts wait times, lowers front desk stress, and improves patient satisfaction.
These AI tools also include ways to manage risks and keep data safe. Following the FAVES safety rule, the systems use encryption and control access to protect patient information. They keep fairness too, by making communication accessible and suited to different healthcare needs.
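As a conceptual sketch of the access-control side, the short Python example below enforces role-based permissions on record fields. The roles and field names are invented for illustration; real systems enforce this in both the application and data layers.

```python
# Role-based access sketch: each role sees only the fields it needs.
# Roles and field names are invented for illustration.
ROLE_PERMISSIONS = {
    "front_desk": {"appointments", "contact_info"},
    "clinician": {"appointments", "contact_info", "clinical_notes"},
}

def can_access(role, field):
    return field in ROLE_PERMISSIONS.get(role, set())

assert can_access("clinician", "clinical_notes")
assert not can_access("front_desk", "clinical_notes")
```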
Automating workflows helps practice owners and managers by lowering clinician burnout, making operations run better, and letting staff focus on more important tasks like patient care and complex coordination.
The FAVES framework is supported by many government agencies and healthcare organizations. These groups work to standardize AI use across the country.
Healthcare groups like OSF HealthCare, Cedars-Sinai, and the Department of Veterans Affairs have set up AI committees to create trustworthy AI governance. These teams stress transparency by disclosing when content was generated by AI without human review, which helps build trust.
Medical practice leaders and IT managers in the U.S. should weigh each of the FAVES principles when evaluating AI tools.
In today’s healthcare, using AI with the FAVES principles is not optional. Ignoring these rules risks legal trouble, damage to reputation, and, most importantly, harm to patients.
The FAVES principles offer a practical guide for putting AI into U.S. healthcare safely and responsibly. By focusing on fairness, appropriateness, validity, effectiveness, and safety, healthcare administrators, practice owners, and IT managers can adopt AI tools that improve patient care and office work without compromising ethical or legal standards.
With help from federal agencies, healthcare groups, and AI developers, and tools like Simbo AI’s front-office automation, U.S. healthcare practices can use AI in a way that helps care delivery, supports clinicians, and protects patients.
Key developments shaping responsible AI oversight in U.S. healthcare include:
- Concerns about irresponsible AI use, including fraud, discrimination, bias, and disinformation, make ethical and effective application in healthcare a necessity.
- FAVES stands for Fair, Appropriate, Valid, Effective, and Safe outcomes from AI in healthcare, aligning the industry on ethical AI use.
- President Biden signed Executive Order 14110 and launched an AI healthcare initiative focused on safe, secure, and trustworthy AI use in healthcare.
- The WHO issued recommendations for the ethical use of large multi-modal models in healthcare, emphasizing safety and population health.
- NIST is tasked with developing guidelines and standards for evaluating AI systems, ensuring a structured approach to AI governance.
- Federal legislators are conducting hearings to gather information and establish policies that support safe AI use and data protection in healthcare.
- These policies aim to help healthcare organizations manage the benefits and risks of AI while enhancing data security.
- The FDA has authorized over 690 AI-enabled devices aimed at improving medical diagnosis, demonstrating the growing integration of AI in healthcare.
- The ONC has proposed a rule to increase algorithm transparency and implement risk-management approaches for AI-based technologies in healthcare.
- The ONC also aims to provide public education on safe and responsible AI use across the healthcare ecosystem to support informed adoption.