AI tools in healthcare can help improve the accuracy of diagnostics, manage large amounts of data, reduce paperwork, and support doctors in making decisions. Many AI projects today focus on tools that help doctors by giving more accurate, data-based advice than older systems built on fixed rules. Besides helping with diagnoses, new AI tools include chatbots that educate patients, services that automatically turn spoken conversations into written notes, and rapid genetic testing that identifies patient traits quickly.
Even with these improvements, hospital managers and IT staff often find it hard to use AI in daily medical work. The tools must meet the real needs of busy healthcare settings, match different team roles, and work well with many kinds of information. This is why a combined social and technical approach is important.
A sociotechnical approach means thinking about both social parts—such as people, how teams behave, communication, and company culture—and technical parts like software, hardware, and data systems together when making or using technology. In healthcare, this means knowing how medical teams work together, how information moves between departments and systems, and how technology affects people and groups.
Healthcare settings involve many roles: doctors, nurses, office workers, and IT staff all contribute to patient care and handling data. There is often pressure from time and many tasks, and data comes from various sources like electronic health records (EHR), lab results, and patient information. AI tools need to help with work instead of causing problems.
Teamwork is very important for AI to work well. Whether clinicians accept and use AI tools affects how good those tools will be. A workshop held in Salt Lake City, Utah brought together experts who work on AI in hospitals. Speakers such as Michael Matheny shared their views, and a keynote speaker, Fadia Shaya, talked about the challenges clinical teams face when adopting new technology.
For AI tools to work, all team members—doctors, nurses, and office staff—must understand and trust the system. If they worry about data being wrong, changes to their workflow, or losing control of patient care, they might resist using AI. The workshop showed that studying how users pay attention, what motivates them, and how they make decisions helps designers create AI tools that fit smoothly into medical work.
Also, sociotechnical methods look at how different team roles share information about AI tools. For example, if a front-office phone AI system does not fit the work of scheduling, billing, and clinical staff, it can cause confusion instead of helping.
Clinical data comes from many places: lab tests, patient histories, images, genetic information, and devices that monitor patients in real time. AI tools must put these data together correctly to help clinical decisions. But healthcare data is often split up and kept in different systems using different formats.
A sociotechnical approach requires AI tools that account for this mix of data and make sure the information is accurate. This means getting data quickly and correctly, working with current EHR systems, and showing useful results to doctors at the right time. The workshop discussed frameworks such as SALIENT that help manage complex information and track performance over time.
If information is not managed well, AI tools might give wrong or late advice. This lowers doctors’ trust and makes them less likely to use the tools.
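As a rough sketch of what this kind of data integration can look like in practice, the example below maps records from two assumed sources, a lab feed and an EHR export, into one shared format and merges them into a single patient timeline. The field names and sources are made up for illustration and are not tied to any specific product or to the SALIENT framework.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical unified record that a downstream AI tool might consume.
@dataclass
class Observation:
    patient_id: str
    code: str          # e.g., a LOINC code for the measurement
    value: float
    unit: str
    observed_at: datetime
    source: str        # which upstream system produced the value

def from_lab_feed(row: dict) -> Observation:
    """Map a row from an assumed lab-system export to the unified record."""
    return Observation(
        patient_id=row["mrn"],
        code=row["loinc"],
        value=float(row["result"]),
        unit=row["units"],
        observed_at=datetime.fromisoformat(row["collected"]),
        source="lab",
    )

def from_ehr_export(row: dict) -> Observation:
    """Map a row from an assumed EHR export, which uses different field names."""
    return Observation(
        patient_id=row["patientId"],
        code=row["observationCode"],
        value=float(row["observationValue"]),
        unit=row["observationUnit"],
        observed_at=datetime.fromisoformat(row["effectiveDateTime"]),
        source="ehr",
    )

# Example: merge both feeds into one timeline, newest first, so the
# AI tool sees a single consistent view of the patient.
lab_rows = [{"mrn": "12345", "loinc": "2345-7", "result": "98",
             "units": "mg/dL", "collected": "2024-05-01T08:30:00"}]
ehr_rows = [{"patientId": "12345", "observationCode": "2345-7",
             "observationValue": "102", "observationUnit": "mg/dL",
             "effectiveDateTime": "2024-05-02T09:00:00"}]

timeline = sorted(
    [from_lab_feed(r) for r in lab_rows] + [from_ehr_export(r) for r in ehr_rows],
    key=lambda obs: obs.observed_at,
    reverse=True,
)
```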
One area where sociotechnical design helps is workflow automation. In many US medical offices, answering phone calls, scheduling appointments, helping patients, and handling after-hours calls require a lot of work from people.
AI systems like those from Simbo AI can manage common phone tasks without needing a person. These systems use advanced AI to answer patient questions, set or confirm appointments, remind patients of visits, and handle less urgent requests. This lowers the number of calls front desk staff must take, giving them more time for harder tasks.
But making this work requires a clear understanding of the office's daily routines. The AI must connect smoothly with scheduling software and data systems. It also must know when to transfer a call to a live person. Setting these rules involves working with office staff and doctors to make sure the AI fits daily work.
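To make the hand-off rule concrete, the sketch below shows one possible shape for this routing logic. The intent labels, urgency keywords, and office hours are assumptions chosen for illustration, not how Simbo AI's system actually works.

```python
from datetime import time, datetime

# Illustrative escalation rules of the kind agreed with office staff and clinicians.
URGENT_KEYWORDS = {"chest pain", "bleeding", "can't breathe", "overdose"}
SELF_SERVICE_INTENTS = {"confirm_appointment", "reschedule", "office_hours", "refill_status"}

OFFICE_OPEN = time(8, 0)
OFFICE_CLOSE = time(17, 0)

def route_call(transcript: str, intent: str, now: datetime) -> str:
    """Decide whether the AI handles the call, hands it to staff, or
    forwards it to the after-hours on-call line."""
    text = transcript.lower()

    # Anything that sounds urgent goes to a person immediately.
    if any(keyword in text for keyword in URGENT_KEYWORDS):
        return "transfer_to_clinician"

    # Routine requests the AI can complete on its own.
    if intent in SELF_SERVICE_INTENTS:
        return "handle_automatically"

    # Everything else: live staff during office hours, on-call service otherwise.
    if OFFICE_OPEN <= now.time() <= OFFICE_CLOSE:
        return "transfer_to_front_desk"
    return "forward_to_after_hours_service"

# Example: a routine confirmation call handled without staff involvement.
print(route_call("Hi, I just want to confirm my visit tomorrow.",
                 "confirm_appointment", datetime(2024, 5, 1, 19, 30)))
```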
AI tools like these reduce the workload on staff and cut delays and mistakes in phone handling. In busy offices with high call volumes, automated answering helps keep patients satisfied by giving quick and consistent answers.
A key point from the AI workshop was that clinicians need to trust AI tools. Being open about how AI makes decisions, updating the tools often based on feedback, and clearly explaining AI results help build trust.
Some models, such as UTAUT (Unified Theory of Acceptance and Use of Technology), are used to study and support using technology in clinics. These models look at things like how useful the tool seems, how easy it is to use, how people around you influence you, and support conditions. All these matter when introducing AI tools.
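As a simple illustration, the sketch below averages assumed survey answers (on a 1-to-5 scale) into the four UTAUT factors named above, so a clinic could see which factor needs the most attention before rollout. The item groupings and scores are invented for the example and are not a prescribed instrument.

```python
from statistics import mean

# Hypothetical survey responses (1 = strongly disagree, 5 = strongly agree),
# grouped by the four UTAUT constructs.
responses = {
    "performance_expectancy":  [4, 5, 4],  # how useful the tool seems
    "effort_expectancy":       [3, 4, 3],  # how easy it is to use
    "social_influence":        [2, 3, 3],  # whether colleagues encourage use
    "facilitating_conditions": [4, 4, 5],  # training, IT support, integration
}

# Average each construct; low scores flag where adoption support is needed.
scores = {construct: round(mean(items), 2) for construct, items in responses.items()}

for construct, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{construct}: {score}")
```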
Trust also matters for patients. They need to know that AI tools improve care and do not replace the human care teams provide.
Most writing about AI in healthcare focuses on how well the technology works, not on how it fits with social and team factors. The workshop pointed out this gap. It encouraged healthcare workers to share real experiences to improve how AI is used in different places.
AI works better when it is made with input from users such as doctors, staff, and patients. Designing with users helps make sure tools meet real needs and fit actual work. This prevents problems such as tools being too complex or hard to use.
Also, AI tools should be checked regularly after being put in place. Studies over time watch how tools are used, what benefits they bring, and if people are satisfied. This helps fix problems as routines change.
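A minimal sketch of this kind of follow-up check appears below. The monthly numbers and the thresholds for flagging a problem are assumptions chosen only for illustration.

```python
from dataclasses import dataclass

# Hypothetical monthly snapshot of how an AI tool is being used after go-live.
@dataclass
class MonthlySnapshot:
    month: str
    suggestions_shown: int     # times the tool offered advice
    suggestions_accepted: int  # times clinicians followed it
    user_satisfaction: float   # average survey score, 1-5

def flag_problems(history: list[MonthlySnapshot]) -> list[str]:
    """Flag months where acceptance or satisfaction drops below assumed thresholds."""
    flags = []
    for snap in history:
        acceptance = snap.suggestions_accepted / max(snap.suggestions_shown, 1)
        if acceptance < 0.5:
            flags.append(f"{snap.month}: acceptance fell to {acceptance:.0%}")
        if snap.user_satisfaction < 3.0:
            flags.append(f"{snap.month}: satisfaction fell to {snap.user_satisfaction}")
    return flags

history = [
    MonthlySnapshot("2024-01", 400, 310, 4.1),
    MonthlySnapshot("2024-02", 420, 180, 2.8),  # a change in routine hurt uptake
]
print(flag_problems(history))
```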
The US healthcare system is very complex. There are many rules and different types of providers, from big hospitals to small clinics. AI tools must be flexible and suited to each local setting.
Rules like HIPAA (Health Insurance Portability and Accountability Act) set strict limits on privacy and security of patient data. AI tools must follow these rules to use data legally and ethically. Clinics also face different rules about payment for AI-related care, which differ by state and insurer.
Leaders and IT managers see AI not just as new technology but also as a challenge to operations. They need to train staff, change workflows, and keep giving support.
Simbo AI focuses on front-office phone automation and answering services. This helps with a common problem in US healthcare: many patient calls and communication needs. Their AI tools fit well into existing workflows and work with the many roles in administrative teams.
By handling repetitive phone calls, Simbo AI lets staff do other jobs and helps keep patients engaged without harming care quality. Their technology accounts for the limits of human attention, how information moves between people and systems, and how healthcare teams are organized to improve work efficiency.
Healthcare leaders and IT managers looking at AI tools learn one clear lesson: people matter as much as technology. Paying attention to teams, data, and daily work helps AI tools last and be useful.
Overall, using AI in healthcare is not just about installing software. It needs a good understanding of the social and technical parts where the tools are used. Sociotechnical methods offer a way to match AI systems with what healthcare teams need and help improve care and operations in US medical settings.
AI-based tools can improve the precision and appropriateness of healthcare, synthesize complex information, and reduce the burden of clinical tasks.
Sociotechnical approaches help ensure that AI tools are responsive to the complex realities of healthcare, considering factors like team dynamics, diverse information sources, and time pressure.
A significant portion of current AI tool development aims at diagnostic support and traditional clinical decision-making, leveraging improved accuracy over rule-based systems.
Emerging applications include conversational agents for patient education, ambient transcription, and rapid phenotyping in genetic testing pathways.
Despite the growing use cases for AI in healthcare, there is a lack of empirical documentation detailing sociotechnical strategies for AI tool design and implementation.
The uptake and effectiveness of AI tools in clinical environments heavily depend on their acceptance and use by clinicians.
Frameworks such as SALIENT for AI development and UTAUT for technology evaluation can be adapted for effective real-world clinical AI implementation.
Trust and transparency are crucial for fostering acceptance of AI tools among clinicians and ensuring the tools augment rather than disrupt clinical practices.
Cognitive evaluation approaches help understand aspects like attention and motivation in designing AI-based tools, aiming to enhance their effectiveness in clinical settings.
The goal of the workshop was to share real-world experiences with the design and implementation of AI tools in clinical settings, fostering connections and collaborative learning.