Large multi-modal models (LMMs) are a new type of AI. Where older AI systems typically handle only one kind of data, such as text, LMMs can work with several kinds at once, including text, images, and video. This lets them communicate in ways that resemble human interaction and carry out complex tasks that machines previously struggled with.
In healthcare, LMMs have shown promise in several important ways:
Platforms like ChatGPT, Bard, and Bert became widely known in 2023 and are examples of LMMs.
Even though LMMs are helpful, they come with risks that need attention, especially in the U.S. healthcare system:
To handle these challenges, governments and healthcare organizations must plan carefully and work together.
The WHO holds that governments have an important role in setting the rules and support systems for ethical AI in healthcare. These recommendations are useful for U.S. leaders who want to bring AI into healthcare safely and fairly.
Governments should invest in infrastructure that ensures AI follows ethical and technical rules. This involves:
Funding can take the form of grants to AI developers, investment in public data repositories, or tools that test AI outputs.
The WHO advises designating a body to approve AI healthcare tools. In the U.S., agencies like the FDA already review health technology; those reviews should extend to the ethics and clinical usefulness of AI tools.
Regulation should include:
This builds trust in AI among doctors and patients by holding the technology and its makers accountable.
Good AI rules need input from many groups during design and deployment. Governments should require AI developers to work with:
This approach reduces the chance that AI delivers unequal care or widens health gaps.
Many U.S. health workers now use AI to support decisions, but studies show that some trust its outputs without checking them carefully, a pattern known as ‘automation bias’.
Governments can fund training programs that teach clinicians about AI’s limits, and regulations can require AI makers to design interfaces that signal when the system is unsure and prompt users to double-check important decisions.
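To make this concrete, here is a minimal sketch of such a confidence gate, assuming a hypothetical AI suggestion object that reports a confidence score; the `AiSuggestion` class, the `present_to_clinician` function, and the 0.85 threshold are all illustrative assumptions, not part of any real product or regulation.

```python
from dataclasses import dataclass

# Illustrative threshold; a real deployment would calibrate this
# against validated model performance data.
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class AiSuggestion:
    """A hypothetical AI output paired with the model's confidence."""
    text: str
    confidence: float  # 0.0-1.0, as reported by the model

def present_to_clinician(suggestion: AiSuggestion) -> str:
    """Attach an explicit uncertainty warning when confidence is low,
    so the clinician is prompted to verify rather than accept blindly."""
    if suggestion.confidence < CONFIDENCE_THRESHOLD:
        return (f"[LOW CONFIDENCE {suggestion.confidence:.0%}] "
                f"{suggestion.text}\n"
                "Please verify against the patient record before acting.")
    return f"[Confidence {suggestion.confidence:.0%}] {suggestion.text}"

print(present_to_clinician(AiSuggestion("Possible drug interaction: A + B", 0.62)))
```

The point of the design is that low-confidence output arrives visibly flagged, so verification becomes the default rather than an afterthought.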
Transparency helps people trust AI. Government agencies should require developers to publish the results of independent evaluations, showing how an AI tool performs across different population groups and noting any biases.
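As a rough illustration of what such a published evaluation could contain, the sketch below computes accuracy separately for each population group rather than reporting a single average; the group labels, toy records, and `accuracy_by_group` helper are made-up examples, not data from any real audit.

```python
from collections import defaultdict

# Toy records: (population_group, model_was_correct).
# Real audits would use large, representative evaluation sets.
results = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def accuracy_by_group(records):
    """Stratify accuracy by population group so performance gaps
    (one sign of bias) become visible instead of being averaged away."""
    totals = defaultdict(lambda: [0, 0])  # group -> [correct, total]
    for group, correct in records:
        totals[group][0] += int(correct)
        totals[group][1] += 1
    return {g: c / n for g, (c, n) in totals.items()}

for group, acc in accuracy_by_group(results).items():
    print(f"{group}: {acc:.0%}")
```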
Helping doctors and patients understand AI’s capabilities and risks is especially important in a country as diverse as the U.S.
To help close health gaps, governments must make sure AI tools reach not only big hospitals but also small clinics and rural areas. Ways to do this include:
Through focused funding and policies, AI can help improve healthcare fairness instead of making it worse.
AI can automate front-office work in healthcare, such as answering phones. This helps medical administrators and IT managers improve patient service and reduce staff workload.
Companies like Simbo AI use AI to answer calls, schedule appointments, provide patient information, and handle routine questions without constant staff oversight. This saves staff time, shortens patient waits, and lowers office costs.
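As a simplified illustration of how this kind of front-office automation can be structured (and explicitly not a description of Simbo AI’s actual system), the sketch below routes a transcribed caller request to a known intent and falls back to a human when it cannot classify the request; the `INTENTS` table and `route_call` function are hypothetical.

```python
# A minimal sketch of front-office call routing, assuming transcribed
# caller text as input. Generic illustration only.
INTENTS = {
    "appointment": ("schedule", "appointment", "book", "reschedule"),
    "hours":       ("hours", "open", "close"),
    "refill":      ("refill", "prescription"),
}

def route_call(transcript: str) -> str:
    """Match the caller's request to a known intent; anything the
    system cannot classify is escalated to a human staff member."""
    text = transcript.lower()
    for intent, keywords in INTENTS.items():
        if any(word in text for word in keywords):
            return intent
    return "escalate_to_staff"  # human fallback for anything unrecognized

print(route_call("Hi, I'd like to book an appointment for Tuesday"))  # appointment
print(route_call("I have chest pain"))  # escalate_to_staff
```

The unconditional fallback to staff is the key design choice: anything the system cannot confidently handle goes to a person.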
Based on the WHO’s advice, front-office AI tools like these should follow ethical rules:
Following these rules helps medical offices use AI while keeping patient trust and safety.
Following the WHO’s advice, the U.S. can strengthen AI ethics by requiring independent outside reviews of healthcare AI tools, including LMMs:
Regulators can partner with third parties to carry out these reviews, keeping AI safe and effective throughout its lifecycle.
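One way to make such lifecycle reviews practical is an append-only audit trail that third parties can sample after deployment. The sketch below is a minimal, assumed design (the `audit_record` function, field names, and log file are illustrative); it hashes the prompt so usage patterns can be studied without storing raw patient text, though in a real system the output field would also need de-identification.

```python
import hashlib
import json
import time

def audit_record(model_version: str, prompt: str, output: str,
                 confidence: float) -> str:
    """Build one append-only audit entry. Hashing the prompt lets
    reviewers detect drift in usage without storing raw patient data."""
    return json.dumps({
        "ts": time.time(),
        "model_version": model_version,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output": output,  # would need de-identification in practice
        "confidence": confidence,
    })

# Append each interaction so third-party auditors can later sample
# and evaluate real-world behavior across the tool's lifecycle.
with open("ai_audit.log", "a") as log:
    log.write(audit_record("lmm-1.2", "patient question...", "advice...", 0.91) + "\n")
```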
Dr. Jeremy Farrar from WHO says AI can improve health only if all involved understand and manage its risks. Dr. Alain Labrique, also from WHO, stresses that governments must work together on AI rules.
For the U.S., this means federal, state, and local governments need to join with tech companies, health organizations, and patient groups. Working together helps make rules that fit the country’s healthcare and diverse people.
It’s also important to include healthcare workers and IT managers on the ground. Their input ensures AI tools work well in real healthcare settings.
As AI tools like large multi-modal models become part of U.S. healthcare, leaders in medical offices have an important responsibility to adopt them carefully. Understanding the WHO’s ethics and governance guidance, and applying it to local needs, can help avoid problems with bias, misinformation, and loss of patient trust.
Medical leaders should:
By following these steps, healthcare groups can use AI to improve patient care, reduce paperwork, and help create fairer health services in the U.S.
LMMs are a type of generative artificial intelligence technology capable of accepting diverse data inputs, like text and images, and generating varied outputs. They can mimic human communication and perform tasks not explicitly programmed.
LMMs can enhance healthcare through applications in diagnosis, patient guidance, clerical tasks, medical education, and drug development, thereby improving operational efficiency and patient outcomes.
Risks include the production of false or biased information, lack of quality in training data, ‘automation bias’ in decision-making, and cybersecurity vulnerabilities that endanger patient data.
Governments should invest in public infrastructure for ethical AI use, ensure compliance with human rights standards, assign regulatory bodies for assessment, and conduct mandatory audits post-deployment.
Developers should include diverse stakeholders, including medical providers and patients, in the design process to address ethical concerns and ensure that LMMs perform accurate, well-defined tasks.
‘Automation bias’ refers to the tendency of healthcare professionals and patients to overlook errors made by AI systems, potentially leading to misdiagnoses or poor decision-making.
Transparency fosters trust among users and stakeholders, allowing for better oversight, ethical responsibility, and informed decision-making regarding the risks and benefits of LMMs.
Independent audits help ensure compliance with ethical and human rights standards by assessing LMMs after release and publishing findings on their impact and effectiveness.
If properly developed and utilized, LMMs can provide tailored health solutions that improve access to care, particularly for marginalized populations, thus lowering health disparities.
LMMs must adhere to ethical principles that protect human dignity, autonomy, and privacy, ensuring that AI technologies contribute positively to patient care and public health.