Artificial intelligence (AI) is being used increasingly across healthcare in the United States, supporting tasks such as delivering health information, aiding diagnosis, and handling administrative work. Large language models (LLMs) are a class of AI systems that can interpret and respond to human language. These tools can support under-resourced settings by supplying medical knowledge and answering patients’ questions quickly.
In the U.S., interest is growing in AI tools that automate repetitive work, improve patient engagement, and support clinical decision-making. For example, Simbo AI offers AI-powered phone systems that connect patients and providers efficiently, reduce wait times, and free staff for higher-value work.
Despite these benefits, healthcare organizations must weigh the problems and risks that come with adding AI to their operations.
The World Health Organization (WHO) urges caution when using AI, and especially LLMs, in healthcare. Deploying AI rapidly without sufficient testing can cause serious problems, such as:
- errors by healthcare workers who rely on flawed outputs,
- harm to patients,
- erosion of trust in AI, and
- delays in realizing AI’s long-term benefits.
WHO also points out that AI learns from data that may contain biases. If a model has not been trained on data from diverse populations, it can produce misleading advice that harms some groups more than others.
AI can also generate information that sounds authoritative but is fabricated, which can mislead both patients and healthcare workers.
Privacy is another concern: AI systems may use sensitive health data without patient consent, violating privacy rules and undermining trust.
In short, AI offers substantial benefits but also real dangers, and strong safety safeguards are needed before it is deployed in hospitals or medical offices.
To address these concerns, WHO has set out six core ethical principles for AI in health, which can guide U.S. healthcare organizations in using AI responsibly:
- protect human autonomy;
- promote human well-being, safety, and the public interest;
- ensure transparency, explainability, and intelligibility;
- foster responsibility and accountability;
- ensure inclusiveness and equity; and
- promote responsive and sustainable AI.
Healthcare organizations should apply these principles when selecting and deploying AI tools, and should work with developers and regulators to uphold ethical standards.
Beyond WHO’s principles, researchers have proposed the SHIFT model to guide ethical AI use in healthcare, organized around five key principles.
Hospital managers and IT staff should weigh these principles when selecting AI vendors and managing AI systems; the SHIFT model helps keep ethics and human values at the center of AI use.
A study of healthcare LLMs from China highlights problems seen worldwide, including in the U.S.: the models scored only about 42.7% on ethics and safety tests, rising to 50.8% after improvements, and they still showed significant issues with fairness and bias.
Many hospitals lack clear policies or review boards that vet AI tools carefully before use. Existing review boards may treat AI like ordinary software, overlooking its distinct risks to patient safety and ethics.
Few organizations have monitoring systems that track AI performance on an ongoing basis, so errors or problems may go unnoticed until a patient is harmed. This lack of oversight can erode trust and create legal exposure.
To address these gaps, hospitals should:
- establish clear policies and review processes that assess AI-specific risks before deployment,
- implement ongoing monitoring of AI performance after tools go live (a minimal monitoring sketch follows below), and
- work with vendors and regulators to hold AI tools to ethical and safety standards.
These actions help make sure AI is used safely and supports healthcare goals.
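As an illustration of the monitoring point above, the sketch below shows one way a hospital IT team might log an AI tool’s interactions and flag an unusual reviewed error rate for human audit. The class names, data fields, and threshold are hypothetical and not tied to any specific product; this is a minimal sketch, assuming the AI system exposes per-interaction outcomes that staff can later review.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical record of one AI interaction (e.g., a triage suggestion or a phone-call transcript).
@dataclass
class AIInteraction:
    timestamp: datetime
    model_version: str
    was_escalated_to_human: bool    # did the AI hand the task off to staff?
    flagged_as_error: bool = False  # marked incorrect during later human review

@dataclass
class AIMonitor:
    """Tracks recent interactions and raises an alert when the reviewed error rate crosses a threshold."""
    error_rate_threshold: float = 0.05  # hypothetical tolerance; set by the governance board
    interactions: list = field(default_factory=list)

    def record(self, interaction: AIInteraction) -> None:
        self.interactions.append(interaction)

    def reviewed_error_rate(self) -> float:
        if not self.interactions:
            return 0.0
        errors = sum(1 for i in self.interactions if i.flagged_as_error)
        return errors / len(self.interactions)

    def needs_review(self) -> bool:
        # True when performance drifts past the agreed threshold, prompting a human audit.
        return self.reviewed_error_rate() > self.error_rate_threshold

if __name__ == "__main__":
    monitor = AIMonitor()
    monitor.record(AIInteraction(datetime.now(), "v1.2", was_escalated_to_human=False))
    monitor.record(AIInteraction(datetime.now(), "v1.2", was_escalated_to_human=True, flagged_as_error=True))
    print(f"Reviewed error rate: {monitor.reviewed_error_rate():.0%}")
    print("Escalate to governance review" if monitor.needs_review() else "Within tolerance")
```

In practice, checks like this would feed an audit log that the hospital’s AI governance group reviews on a regular schedule, rather than waiting for harm to surface.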
One clear application of AI in healthcare is front-office work, such as answering phones and scheduling appointments. For example, Simbo AI offers intelligent phone systems that answer patient calls quickly, streamline communication, and reduce staff workload.
Using AI in phone systems can lead to:
- faster responses to patient calls and shorter wait times,
- smoother communication between patients and practices, and
- lower staff workload, freeing employees for higher-value tasks.
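To make the workflow concrete, here is a minimal sketch of how an automated front-office phone assistant might classify a caller’s request and route it, handling scheduling automatically while escalating anything clinical to staff. The intent labels, keywords, and routing rules are illustrative assumptions and do not describe Simbo AI’s actual product or API.

```python
from enum import Enum, auto

class Intent(Enum):
    SCHEDULE_APPOINTMENT = auto()
    PRESCRIPTION_REFILL = auto()
    CLINICAL_QUESTION = auto()
    OTHER = auto()

# Hypothetical keyword rules; a production system would use a trained language model instead.
KEYWORDS = {
    Intent.SCHEDULE_APPOINTMENT: ("appointment", "schedule", "reschedule", "book"),
    Intent.PRESCRIPTION_REFILL: ("refill", "prescription", "pharmacy"),
    Intent.CLINICAL_QUESTION: ("pain", "symptom", "side effect", "emergency"),
}

def classify(transcript: str) -> Intent:
    text = transcript.lower()
    for intent, words in KEYWORDS.items():
        if any(word in text for word in words):
            return intent
    return Intent.OTHER

def route(transcript: str) -> str:
    intent = classify(transcript)
    if intent is Intent.SCHEDULE_APPOINTMENT:
        return "Handled by AI: offer available appointment slots."
    if intent is Intent.PRESCRIPTION_REFILL:
        return "Handled by AI: collect refill details for staff confirmation."
    # Clinical or unclear requests always go to a human, keeping safety decisions with staff.
    return "Escalated: transfer the call to front-office staff."

if __name__ == "__main__":
    print(route("Hi, I need to reschedule my appointment for next week."))
    print(route("I'm having chest pain, what should I do?"))
```

The design point is that routine requests can be automated while anything touching clinical judgment is escalated to people, which connects directly to the safety concerns discussed next.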
But adding AI to these tasks requires the same attention to safety and ethics as clinical AI. For example:
- call recordings and transcripts contain sensitive health information, so privacy and consent protections must apply (see the sketch below);
- information given to callers must be accurate, since plausible but wrong answers can mislead patients; and
- there should be clear paths to escalate calls to human staff, keeping accountability with the practice.
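As one illustration of the privacy point above, the sketch below shows a simple pre-processing step that masks a few obvious identifiers in a call transcript before it is stored or passed to an external language model. The patterns and function names are hypothetical and deliberately incomplete; real de-identification for HIPAA purposes requires far more than this, so treat it as a minimal sketch of the idea, not a compliance tool.

```python
import re

# Hypothetical patterns for a few obvious identifiers; real PHI de-identification
# (names, addresses, record numbers, etc.) needs far more thorough tooling.
PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "DATE_OF_BIRTH": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def mask_identifiers(transcript: str) -> str:
    """Replace matched identifiers with placeholder tags before storage or external processing."""
    masked = transcript
    for label, pattern in PATTERNS.items():
        masked = pattern.sub(f"[{label}]", masked)
    return masked

if __name__ == "__main__":
    call = "This is Jane, DOB 04/12/1985, call me back at 555-123-4567 about my refill."
    print(mask_identifiers(call))
```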
For medical practices and IT teams in the U.S., it is essential to choose AI tools that comply with ethical guidelines and regulations; AI should support staff without putting patient safety or trust at risk.
Patient safety and public trust are paramount in healthcare. In the U.S., which already faces challenges with access and equity, AI adoption must not deepen inequalities or create new harms.
Inadequate testing, poor understanding of a tool’s limits, or a lack of post-deployment monitoring can endanger patients and erode trust in clinicians and hospitals. WHO’s warning that AI can spread false but believable health information is especially relevant in a country already contending with health misinformation.
Policymakers and healthcare leaders in the U.S. should:
- put patient safety and protection first,
- enforce ethical governance of AI tools, and
- require rigorous evaluation and clear evidence of benefit before AI tools are commercialized or deployed at scale.
These steps help make AI safer and more useful for patients and healthcare workers.
Healthcare administrators, practice owners, and IT managers in the United States face important decisions. AI has the potential to transform healthcare work, especially front-office operations and care delivery, but success depends on adopting it with care, ethics, and safety in mind.
By following guidance from bodies like WHO and using models such as SHIFT, healthcare leaders can adopt AI responsibly. Focusing on clear communication, accountability, fairness, and sustainable solutions will help protect patient safety, maintain public trust, and improve healthcare systems.
The WHO advocates for cautious, safe, and ethical use of AI, particularly large language models (LLMs), to protect human well-being, safety, autonomy, and public health while promoting transparency, inclusion, expert supervision, and rigorous evaluation.
Rapid, untested deployment risks causing errors by healthcare workers, potential patient harm, erosion of trust in AI, and delays in realizing long-term benefits due to lack of rigorous oversight and evaluation.
AI training data may be biased, leading to misleading or inaccurate outputs that threaten health equity and inclusiveness, potentially causing harmful decisions or misinformation in healthcare contexts.
LLMs can produce responses that sound authoritative and plausible but may be factually incorrect or contain serious errors, especially in medical advice, posing risks to patient safety and clinical decision-making.
LLMs may use data without prior consent and fail to adequately protect sensitive or personal health information users provide, raising significant privacy, consent, and ethical issues.
They can generate convincing disinformation in text, audio, or video form that is difficult to distinguish from reliable content, potentially spreading false health information and undermining public trust.
Clear evidence of benefit, patient safety, and protection measures must be established through rigorous evaluation before large-scale implementation by individuals, providers, or health systems.
The six principles are: protect autonomy; promote human well-being, safety, and the public interest; ensure transparency, explainability, and intelligibility; foster responsibility and accountability; ensure inclusiveness and equity; and promote responsive and sustainable AI.
Transparency and explainability ensure that AI decisions and outputs can be understood and scrutinized by users and experts, fostering trust, accountability, and safer clinical use.
Policymakers should emphasize patient safety and protection, enforce ethical governance, and mandate thorough evaluation before commercializing AI tools, ensuring responsible integration within healthcare systems.