People in the United States have mixed feelings about using AI in healthcare. A survey by the Pew Research Center in December 2022 asked over 11,000 adults about their opinions on AI in medicine. The results showed that many people are uncomfortable with AI diagnosing diseases or making treatment choices. About 60% said they would not be comfortable if their doctors used AI for these tasks, while only 39% said they would be okay with it.
Many worry that AI could harm the important personal connection between patients and doctors. More than half (57%) thought AI would make this relationship worse, while only 13% believed it might improve doctor-patient care. Similarly, just 38% thought AI could improve health outcomes; the rest expected it to make things worse or to change little.
Among different AI uses, skin cancer screening is one of the most accepted by the public. About 65% of Americans said they would be willing to have AI involved in their skin cancer screenings, and 55% believed AI would make skin cancer diagnoses more accurate.
This may be because skin cancer screening is image-based, a task well suited to AI: models can often flag whether a mole looks suspicious. For medical practice administrators, this points to an opportunity to add AI tools that help doctors find skin cancer early, benefiting patients without replacing physicians.
Robots that use AI to assist with surgery draw more cautious reactions. Only about 40% of Americans would want AI-assisted robots involved in their surgery, while 59% would rather avoid them. Familiarity makes a difference: people who know more about these robots are more likely to accept them, while those less familiar tend to reject the idea.
Medical practice owners and IT managers should clearly explain how these robots work. Telling patients that AI can make surgery more precise and safer, but does not replace the surgeon, may make them more comfortable with the technology.
AI chatbots for mental health have the least approval: a full 79% said they would not want to use an AI chatbot for mental health support. Many doubt whether these chatbots can handle complex emotions and situations. About 46% think chatbots should only be used alongside human therapists, not on their own.
Because mental health care is very sensitive, medical leaders need to be cautious about using AI chatbots. These tools might help with initial screenings or support therapy sessions, but they cannot replace human care and understanding. Using a mix of AI and human care may be safer and more accepted.
Many people worry about their data when AI is used in healthcare. The Pew study found that 37% believe AI will make health record security worse. Only 22% think AI could make it better. This shows many feel uneasy about how AI handles private medical data.
For IT managers and healthcare leaders, it is essential to maintain strong cybersecurity and to tell patients clearly how their data is protected when AI is used. Patients need to trust that their medical information is safe and handled in compliance with laws like HIPAA. If these concerns are not addressed, introducing AI in healthcare will be harder.
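As one concrete example of the safeguards patients expect, the sketch below encrypts a patient record at rest using symmetric encryption via Python's cryptography library. The record fields and inline key generation are illustrative assumptions only; a HIPAA-compliant deployment would manage keys in a dedicated secrets service and layer on access controls and audit logging.

```python
# Minimal sketch: encrypting a patient record at rest.
# Assumes the `cryptography` package (pip install cryptography).
import json
from cryptography.fernet import Fernet

# Illustrative only: in practice the key comes from a managed
# key service (KMS/HSM or secrets manager), never from inline code.
key = Fernet.generate_key()
cipher = Fernet(key)

# Hypothetical record fields for illustration.
record = {"patient_id": "12345", "note": "Annual skin cancer screening"}
ciphertext = cipher.encrypt(json.dumps(record).encode("utf-8"))

# Only services holding the key can recover the plaintext.
plaintext = json.loads(cipher.decrypt(ciphertext).decode("utf-8"))
assert plaintext == record
```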
AI can also help with tasks in healthcare offices, like answering phones and scheduling appointments. Companies like Simbo AI create systems that automate these jobs using AI.
Using AI to handle front-office work can streamline operations while letting staff focus on patients. Some benefits of AI in healthcare workflows are:

- Smoother day-to-day operations with fewer manual mistakes
- Faster call answering and shorter wait times, which improves patient satisfaction
- Office staff freed to focus on more complex tasks related to patient care

A minimal sketch of how such a system might triage incoming calls follows this list.
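The sketch below is a hypothetical illustration, not any vendor's actual implementation (including Simbo AI's). It shows the kind of intent routing a front-office phone automation system might perform once a call has been transcribed; the intents and keyword rules are assumptions for illustration, and real systems would use trained language models rather than keyword matching.

```python
# Hypothetical sketch of front-office call routing: classify a caller's
# transcribed request into an intent, then dispatch it. The keyword
# rules stand in for a trained speech/NLU model.
from dataclasses import dataclass

INTENT_KEYWORDS = {
    "schedule_appointment": ("appointment", "schedule", "book"),
    "prescription_refill": ("refill", "prescription"),
    "billing_question": ("bill", "invoice", "payment"),
}

@dataclass
class CallResult:
    intent: str
    handled_by_ai: bool

def route_call(transcript: str) -> CallResult:
    text = transcript.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(word in text for word in keywords):
            return CallResult(intent=intent, handled_by_ai=True)
    # Anything unrecognized is escalated to a human staff member.
    return CallResult(intent="escalate_to_staff", handled_by_ai=False)

print(route_call("Hi, I'd like to book an appointment for next week"))
# CallResult(intent='schedule_appointment', handled_by_ai=True)
```

Note the fallback: requests the system cannot classify go to a human, which matches the survey's message that patients accept AI most readily when it assists staff rather than replaces them.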
Before adding automation, healthcare leaders should verify how AI tools integrate with electronic health record (EHR) systems, how they protect patient data, and whether they comply with applicable laws. A basic integration check is sketched below.
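As a minimal sketch of such an integration check, the snippet below queries an EHR's FHIR metadata endpoint, a standard part of the HL7 FHIR REST API, to confirm the server advertises a CapabilityStatement. The base URL and bearer token are placeholders, not a real vendor's values.

```python
# Hypothetical sketch: a basic integration check against an EHR's
# FHIR interface. Most modern US EHRs expose HL7 FHIR REST endpoints.
import requests

FHIR_BASE = "https://ehr.example.com/fhir"  # placeholder endpoint
HEADERS = {
    "Authorization": "Bearer <access-token>",  # placeholder credential
    "Accept": "application/fhir+json",
}

def fhir_capability_check() -> bool:
    """Fetch the server's CapabilityStatement to confirm FHIR support."""
    resp = requests.get(f"{FHIR_BASE}/metadata", headers=HEADERS, timeout=10)
    if resp.status_code != 200:
        return False
    statement = resp.json()
    return statement.get("resourceType") == "CapabilityStatement"

if fhir_capability_check():
    print("EHR exposes a standard FHIR interface")
```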
Using AI in healthcare also raises ethical and legal questions. A 2024 study in Heliyon by Mennella et al. points out that while AI can help improve clinical work, it also creates difficult challenges.
Some challenges include:

- Deciding who is accountable when an AI-assisted decision leads to harm
- Protecting patient privacy and securing sensitive health data
- Detecting and reducing algorithmic bias in AI recommendations
- Being transparent with patients about when and how AI is used in their care
- Keeping up with regulations that are still evolving
Healthcare leaders and IT managers should work closely with legal experts, policymakers, and technology providers to create rules that keep AI safe and fair.
The Pew survey showed men, younger people, and those with more education and income tend to be more open to AI in healthcare. But even in these groups, many still feel unsure about AI’s role in diagnosis and treatment.
This means healthcare providers should tailor how they communicate about AI to the audience they are addressing. Some groups may need more information before they feel comfortable with and trust AI tools.
For medical practices in the U.S., the survey results offer practical guidance for adopting AI in a way that respects patients.
Using AI carefully and fairly supports healthcare’s goal to provide safe, effective, and personal care while keeping patient trust.
Key findings from the Pew Research Center survey include:

- 60% of U.S. adults report feeling uncomfortable if their healthcare provider used AI for diagnosis and treatment recommendations, while 39% said they would be comfortable.
- Only 38% believe AI would improve health outcomes by diagnosing diseases and recommending treatments; 33% think it would worsen outcomes, and 27% see little to no difference.
- 40% of Americans think AI use in healthcare would reduce mistakes made by providers, whereas 27% believe it would increase mistakes and 31% expect no significant change.
- Among those who recognize racial and ethnic bias as an issue, 51% believe AI would help reduce this bias, 15% think it would worsen it, and about one-third expect no change.
- A majority, 57%, believe AI would deteriorate the personal connection between patients and providers, whereas only 13% think it would improve this relationship.
- Men, younger adults, and individuals with higher education levels are more open to AI in healthcare, but even among these groups, around half or more still express discomfort.
- Most Americans (65%) would want AI used for skin cancer screening, viewing it as a medical advance, while fewer are comfortable with AI-driven surgical robots, pain management AI, or mental health chatbots.
- About 40% would want AI robots used in their surgery, while 59% would not; those familiar with these robots largely see them as a medical advance, whereas lack of familiarity leads to greater rejection.
- 79% of U.S. adults would not want to use AI chatbots for mental health support, citing concerns about their standalone effectiveness; 46% say these chatbots should only supplement therapist care.
- 37% believe AI use in health and medicine would worsen health record security, while 22% think it would improve security, indicating significant public concern about data privacy in AI applications.