AI systems built for healthcare, such as tools that read radiology images or support treatment decisions, depend heavily on the people who use them: physicians, medical assistants, and administrative staff. Calibrating trust in AI is hard. Some users trust it too much and accept every suggestion without question; others trust it too little and ignore useful advice. Both failure modes can cause mistakes, slow down work, and harm patients.
Despite AI’s technical progress, many healthcare workers receive these tools without proper training in how to use them well. Hussein Mozannar, who led the MIT study, said, “People often get AI tools without any training to know when it helps. That is different from most tools, which usually come with tutorials. But AI seems to miss that.”
Without good onboarding, collaboration with AI can be inconsistent, undermining both the accuracy of medical decisions and trust in them. This matters most for healthcare administrators and IT managers who fold AI tools into day-to-day work while keeping safety and compliance in mind.
Researchers at MIT and the MIT-IBM Watson AI Lab built an automated onboarding system that teaches users when to trust AI and how to work with it effectively. Its central idea is natural language rules: simple sentences that describe when the AI is reliable and when caution is needed.
For example, a rule might say, “Ignore AI predictions when the image is taken on a highway during the night.” Rules like this point users to the situations in which the AI tends to fail or to perform well.
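To make the idea concrete, here is a minimal sketch in Python of how such rules could be stored and matched against a case’s metadata. The data structure, field names, and matching logic are illustrative assumptions, not the MIT system’s actual implementation.

```python
# A minimal sketch (not the MIT implementation) of how natural language
# reliance rules might be stored and applied to a case's metadata.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class RelianceRule:
    text: str                        # plain-language rule shown to the user
    applies: Callable[[Dict], bool]  # hypothetical predicate over case metadata
    advice: str                      # "rely" or "double-check"

RULES = [
    RelianceRule(
        text="Ignore AI predictions when the image is taken on a highway during the night.",
        applies=lambda meta: meta.get("scene") == "highway" and meta.get("time") == "night",
        advice="double-check",
    ),
]

def advice_for(case_metadata: Dict) -> str:
    """Return the first matching rule's advice; default to relying on the AI."""
    for rule in RULES:
        if rule.applies(case_metadata):
            return f"{rule.advice}: {rule.text}"
    return "rely: no caution rule matched"

print(advice_for({"scene": "highway", "time": "night"}))
# -> double-check: Ignore AI predictions when the image is taken on a highway during the night.
```

In the MIT system, rules like this are generated automatically from interaction data rather than written by hand.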
The onboarding system works in three steps:
1. It analyzes data from past human-AI interactions to find situations where users over-trust or under-trust the AI.
2. It describes those situations as natural language rules.
3. It has users practice on training exercises built from those rules, giving feedback on their performance.
This approach differs from older methods, in which experts write fixed training materials. The new system keeps updating as the AI improves and as it learns from how people actually use it.
In tests, the onboarding system helped users perform better on image-recognition tasks, especially when the images were unclear. Users who went through the training were about 5 percent more accurate than users who received no training.
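For administrators who want to track this kind of effect in their own setting, the sketch below computes two simple numbers from decision logs: the final accuracy of the human-AI team, and how often staff simply went along with the AI. The record format and field names are assumptions for illustration, not a real system’s schema.

```python
# A minimal sketch of two metrics a practice manager might track from logged
# decisions (assumes a non-empty list of records with illustrative field names).
def team_metrics(records):
    total = len(records)
    correct = sum(r["final_decision"] == r["ground_truth"] for r in records)
    followed_ai = sum(r["final_decision"] == r["ai_prediction"] for r in records)
    return {"accuracy": correct / total, "agreement_with_ai": followed_ai / total}

logs_after_onboarding = [
    {"ai_prediction": "abnormal", "final_decision": "abnormal", "ground_truth": "abnormal"},
    {"ai_prediction": "normal",   "final_decision": "abnormal", "ground_truth": "abnormal"},
    {"ai_prediction": "normal",   "final_decision": "normal",   "ground_truth": "normal"},
]
print(team_metrics(logs_after_onboarding))
# e.g. {'accuracy': 1.0, 'agreement_with_ai': 0.666...}
```

Tracking agreement alongside accuracy helps distinguish genuine calibration from staff simply rubber-stamping the AI.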
Users who were only told when to trust the AI, with no practice, actually did worse. Advice alone confused users and slowed their decisions. Hussein Mozannar said, “Just giving recommendations confuses people. They don’t know what to do, and it disrupts their workflow. Also, people don’t like being told what to do.”
For medical practice managers, the lesson is that simply telling staff when to trust or distrust AI is not enough. Training that helps users internalize the patterns of AI reliability leads to better results.
David Sontag, a researcher on this project, said that medical education and clinical trials should rethink how they train people to work with AI. For healthcare administrators and practice owners in the U.S., this means teaching not just doctors but also support staff and IT teams about AI.
As AI tools become more common in decision support, patient scheduling, and office automation, ongoing training will be important. Training like the MIT system might be added to continuing medical education or customized staff sessions.
Such training can help staff recognize the patterns of AI reliability, apply AI recommendations appropriately, and know when to fall back on their own judgment.
Current approaches to AI training often rely on human experts to produce the materials. That is expensive, slow, and hard to keep current as AI changes quickly. MIT’s automated onboarding addresses this by learning from ongoing data about how people interact with the AI.
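One way to picture this data-driven step is below: a shallow decision tree is fit on logged cases, labeled by whether the AI’s suggestion turned out to be wrong, so that interpretable “be careful here” regions fall out of the splits. This is a sketch of the general idea using a swapped-in, off-the-shelf technique (scikit-learn), not the MIT system; the features and data are invented for illustration.

```python
# A minimal sketch: mine interpretable "be careful here" regions from logged
# interactions by predicting when the AI's suggestion was wrong.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row: [image_blurriness (0-1), taken_at_night (0/1)]  -- assumed features
X = [
    [0.1, 0], [0.2, 0], [0.15, 1], [0.8, 1],
    [0.9, 1], [0.85, 1], [0.7, 0], [0.2, 1],
]
# 1 = the AI's suggestion was wrong on this case, 0 = it was right
y = [0, 0, 0, 1, 1, 1, 0, 0]

tree = DecisionTreeClassifier(max_depth=2, min_samples_leaf=2, random_state=0)
tree.fit(X, y)

# The learned splits can then be phrased as natural language rules for training.
print(export_text(tree, feature_names=["blurriness", "taken_at_night"]))
```

Requiring a minimum number of cases per region (here, `min_samples_leaf`) is one reason the approach needs a reasonable volume of interaction data before its rules are trustworthy.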
The training adapts as the AI improves or as users’ behavior changes. It can also work in many areas beyond healthcare, such as social media or programming, which makes it adaptable to different medical practice settings in the U.S.
One limitation is that the training works best when there is plenty of data on how users interact with the AI. Smaller clinics or newly deployed AI systems with little interaction data may see less benefit at first.
AI is already handling front-office phone tasks in healthcare. AI answering systems reduce staff workload, improve patient scheduling, and give patients timely information and assistance.
Simbo AI is a U.S. company that builds AI front-office phone automation, using natural language processing to handle calls. Even with these systems, administrators must watch for the moments when a human needs to step in on difficult or sensitive questions.
Onboarding-style training helps office staff know when to trust the AI on calls: it can teach workers which routine requests the system handles reliably and when to take over a complex or sensitive conversation themselves.
By combining AI phone tools like those from Simbo AI with training that teaches proper use, medical offices can work more efficiently without compromising quality of patient care.
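The sketch below shows the kind of escalation policy such training should make familiar to staff: the assistant handles routine intents it is confident about and hands sensitive or low-confidence calls to a person. This is a hypothetical illustration; the intent labels, threshold, and function are assumptions, not Simbo AI’s actual API or configuration.

```python
# A hypothetical escalation policy for an AI phone assistant (illustrative only).
SENSITIVE_INTENTS = {"billing_dispute", "clinical_symptoms", "complaint"}
CONFIDENCE_THRESHOLD = 0.85  # assumed value; a practice would tune this

def route_call(intent: str, confidence: float) -> str:
    """Send sensitive or uncertain calls to staff; let the AI handle the rest."""
    if intent in SENSITIVE_INTENTS or confidence < CONFIDENCE_THRESHOLD:
        return "escalate_to_staff"
    return "handle_with_ai"

print(route_call("appointment_scheduling", 0.93))  # -> handle_with_ai
print(route_call("clinical_symptoms", 0.97))       # -> escalate_to_staff
```

Onboarding then amounts to teaching staff where these boundaries sit, so they neither second-guess routine calls nor leave sensitive ones to the machine.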
Building trust in AI in healthcare also means meeting legal and ethical obligations. Research has identified seven key requirements for trustworthy AI in healthcare.
Healthcare managers deploying AI must meet these requirements and provide ongoing training to keep users aligned with them. Regulatory sandboxes and audits can also help confirm that an AI system is safe before full rollout.
Because AI tools are complex and increasingly common in healthcare, U.S. administrators and IT managers can take practical steps such as providing structured onboarding, scheduling ongoing training, and auditing AI performance before and after deployment.
AI offers real opportunities to improve healthcare delivery and management, but success depends on people knowing when and how to trust it. The MIT-IBM Watson AI Lab’s natural language, rule-based training system shows one workable way to build that understanding at scale. U.S. healthcare administrators should consider adopting or developing similar training to achieve safer, more effective, and more patient-centered care.
The researchers focus on creating a customized onboarding process that helps individuals learn when to trust and collaborate with AI assistants, ultimately improving the accuracy of human-AI interactions.
The onboarding process identifies situations where users over-trust or under-trust AI by formulating natural language rules. Users then practice with training exercises based on these rules, receiving feedback on their performance.
The system led to approximately a 5 percent improvement in accuracy during image prediction tasks where humans collaborated with the AI.
Simply informing users when to trust the AI, without any training, actually resulted in worse performance, making the onboarding process crucial.
The automated onboarding system adapts by learning from the specific data of human and AI interactions, making it suitable for a variety of applications.
Existing methods are often manual and difficult to scale, relying on expert-produced training materials that may not evolve with AI capabilities.
The researchers tested the system with users on tasks such as detecting traffic lights in blurry images and answering multiple-choice questions across various domains.
Onboarding significantly improved user accuracy, whereas simply providing recommendations without onboarding led to confusion and decreased performance.
The effectiveness of the onboarding stage depends on the amount of available data; with too little interaction data, the training is less effective.
Future studies aim to evaluate the short- and long-term effects of onboarding, leverage unlabeled data, and effectively reduce the complexity of training regions.