Mixed-methods research combines quantitative methods, which deal with numbers and statistics, with qualitative methods, which capture what people think, feel, and experience. Using both in healthcare AI makes it possible to study how well the technology works and how patients, clinicians, and other stakeholders feel about it.
For example, the AIDE Project, funded by the Economic and Social Research Council (ESRC) and the Japan Science and Technology Agency (JST) from 2020 to 2023, used a mixed-methods approach to study AI in healthcare. Researchers from medicine, ethics, and the social sciences worked together, showing how important it is to combine hard data with the views of the people involved to make AI work well for everyone.
The team collected data through surveys, interviews, and public engagement events. This helped them identify issues around transparency, ethics, accountability, and trust in AI. Their findings informed better rules and safer ways to use AI in healthcare.
The AIDE Project found that involving people affected by AI during its development helps build acceptance and trust. Stakeholders include patients, doctors, nurses, managers, policymakers, and IT staff, and each group has different concerns and hopes about AI use.
For example, public engagement events held by the project let people voice concerns about data privacy, fairness in AI systems, and accountability when AI decisions affect care. This matters especially in the United States, where trust in medical technology can shape how quickly new tools are adopted.
Medical practice managers in the U.S. can benefit by asking for feedback throughout AI adoption. Listening to staff and patients helps spot problems early and fix them before full rollout. It also supports compliance with strict U.S. laws such as HIPAA that protect patient information.
The AIDE Project researched how AI can help with diagnosing illnesses, planning treatments, and supporting decisions. But the project stressed that AI cannot simply be dropped into a healthcare system without considering social and organizational factors.
One goal was to make sure AI helps everyone fairly, especially in countries like the U.S. with many different groups that have different healthcare needs. The mixed-methods approach revealed differences in opinions and readiness among these groups, helping AI developers adjust their tools accordingly.
The research also highlighted the need to educate healthcare workers and patients about AI. The project held events explaining how AI works, what it can and cannot do, and what safeguards exist, helping people understand AI better and use it responsibly.
Hospitals and medical offices in the United States need to run efficiently, and AI workflow automation can help. For example, phone automation services such as Simbo AI can improve front-office work.
Simbo AI builds phone automation designed for healthcare needs. Its AI recognizes what callers want and handles routine questions well. For managers and IT staff, this means fewer mistakes and happier patients, because the system can respond at any time.
Beyond phones, AI also helps with clinical documentation, billing, and patient monitoring. For example, AI can transcribe physician notes into electronic records quickly, giving doctors more time with patients.
With rising demand and provider shortages in the U.S., AI workflow systems like Simbo AI's phone tools offer practical ways to improve efficiency while maintaining care quality.
Healthcare is a sensitive field, so adopting AI is not just a technology question; ethical and legal rules matter. The AIDE Project's focus on transparency and trust fits U.S. regulatory expectations from agencies such as the FDA and the Office for Civil Rights (OCR).
Mixed-methods research that listens to stakeholders can help hospitals and AI developers build tools that respect patients' rights and ethics. Issues such as bias in AI, data security, and clear lines of responsibility need careful scrutiny.
Hospitals adopting AI must verify that the technology does not cause unequal care or compromise privacy. The AIDE Project's public involvement model shows how U.S. healthcare can include people in the process to build trust in AI.
The AIDE Project was co-directed by Professors Jane Kaye (University of Oxford) and Beverley Yamamoto (Osaka University). Their work showed that AI in healthcare requires teamwork across law, ethics, medicine, and social science.
For U.S. healthcare organizations, this means teams should include IT specialists, clinicians, ethics experts, legal advisors, and patient advocates. Bringing together different perspectives supports good decisions and produces AI tools that work well and are fair.
This approach also helps handle complex U.S. healthcare workflows and makes AI implementation smoother and more effective.
In 2023, the AIDE Project held online meetings about AI rules, ethics, and trust. Researchers gave talks and answered questions.
For U.S. healthcare managers and IT staff, this models how to build trust within their own organizations. Talking openly about AI, listening to staff and patients, and including them in decisions can reduce doubts and increase acceptance.
Medical offices in the U.S. face problems such as high call volumes, missed appointments, and slow paperwork. AI answering services like Simbo AI can help address them.
Simbo AI uses natural language processing to understand what patients say and respond appropriately. Unlike conventional automated systems that follow rigid scripts, it adapts to different ways people speak and resolves questions quickly. It also records call details and helps schedule appointments, reducing errors and office workload.
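The mechanics behind such intent recognition vary by vendor and are not public for Simbo AI. Purely as an illustration of the general idea, routing a caller's request to an intent can be sketched with a toy keyword-based classifier; the intent names and keyword lists below are hypothetical, and a production system would use a trained NLP model rather than keyword matching:

```python
# Hypothetical sketch of intent routing for a front-office phone line.
# This is NOT Simbo AI's implementation; intents and keywords are invented.

INTENT_KEYWORDS = {
    "schedule_appointment": ["appointment", "schedule", "book", "reschedule"],
    "prescription_refill": ["refill", "prescription", "medication"],
    "billing_question": ["bill", "invoice", "payment", "charge"],
}

def route_intent(utterance: str) -> str:
    """Return the best-matching intent for a caller utterance,
    or 'transfer_to_staff' when nothing matches."""
    text = utterance.lower()
    # Score each intent by how many of its keywords appear in the utterance.
    scores = {
        intent: sum(1 for kw in keywords if kw in text)
        for intent, keywords in INTENT_KEYWORDS.items()
    }
    best_intent, best_score = max(scores.items(), key=lambda item: item[1])
    # Fall back to a human when the caller's request is unrecognized.
    return best_intent if best_score > 0 else "transfer_to_staff"
```

The fallback branch reflects a design point the article raises: automation handles routine questions, while unrecognized requests are handed to staff rather than guessed at.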
By automating phone tasks, office managers can redirect staff to patient care and other important work. Simbo AI follows healthcare regulations and protects patient data, making it a practical option for clinics and hospitals.
The service can also serve as a foundation for broader AI adoption, demonstrating real benefits from technology shaped by research methods that include user input.
Using AI in healthcare requires a clear understanding of how it affects patients, providers, and managers. Mixed-methods research like the AIDE Project's helps gather many perspectives and build tools that respect both ethics and practical needs.
U.S. healthcare organizations should focus on building trust, being transparent, and following the law when adding AI. Involving different groups, working across disciplines, and using AI tools such as Simbo AI's phone services can improve both efficiency and patient care.
As AI grows, combining technical work with social input helps medical offices adopt new tools that benefit the whole team and the people they serve.
The AIDE Project, which stands for Artificial Intelligence in Healthcare for All, was funded by ESRC-JST to identify best practices for AI in healthcare that benefit both the UK and Japan. It was directed by Professor Jane Kaye of HeLEX and Professor Beverley Yamamoto of Osaka University, supported by an interdisciplinary team. The project began in January 2020 and was extended through December 2023. Its main goal was to develop effective strategies for stakeholder engagement in the design and implementation of AI technologies in healthcare settings.
The project adopted a mixed-methods approach, combining empirical research with qualitative stakeholder engagement. The research examined AI usage in healthcare, stakeholder perceptions, desired engagement mechanisms, and the development of a trust-based engagement platform. Event topics included an introduction to AI in healthcare, public perceptions of AI regulation, and ethical considerations from a public perspective. Each event featured researcher presentations followed by discussion and Q&A sessions to gather participant insights on building trust in AI.
Findings from more than three years of work were shared, focusing on AI implementation in healthcare and strategies for sustaining trust. The recorded sessions are available as podcasts on the HeLEX project website, with participant anonymity maintained.