Artificial intelligence (AI) is becoming an increasingly important part of healthcare in the United States. It helps with diagnosing diseases, managing patient appointments, and automating administrative tasks, and it has the potential to change how healthcare providers work. However, trust in and acceptance of AI vary with factors such as age, gender, and education. Healthcare workers need to understand these differences to use AI well and improve healthcare services.
Recent surveys show how Americans feel about AI in healthcare. A Pew Research Center survey from December 2022 of more than 11,000 U.S. adults found that almost 60% would feel uncomfortable if their healthcare providers used AI for diagnosis or treatment advice, while only 39% said they would feel comfortable. These results show widespread concern, mainly about accuracy, loss of personal connection with doctors, and privacy.
When asked about health outcomes, only 38% of people believe AI can improve them through better diagnoses or treatment recommendations. Around 33% think AI might make health outcomes worse, and about 27% believe AI will not affect health much at all. This suggests that many people do not yet trust AI to replace, or even assist, doctors in important decisions.
Age affects how people think about AI in healthcare. Studies show younger adults tend to be more open to AI than older adults. Younger people have grown up using technology and feel more comfortable with AI tools.
Older adults may have less experience with AI. They may worry more about security, privacy, and losing personal contact with their healthcare providers. For example, a study in Germany by Brauner et al. (2025) found that younger people saw more benefits in AI, while older adults focused on its risks and the loss of human contact. Although that study was conducted in Germany, similar attitudes are seen in the U.S.
Healthcare workers should explain that AI helps doctors but does not replace them, especially to older patients. Clear and simple information about AI’s safety and benefits can help build trust with older people.
Gender also affects trust in AI for healthcare. Research shows men generally trust AI more than women. The Pew Research Center survey found a similar pattern: men are more comfortable with AI in healthcare.
Women may be more cautious because of concerns about privacy and the personal quality of care. Many women are caregivers and value accuracy, empathy, and confidentiality, and they often prefer human interaction over AI-driven decisions.
Healthcare providers should keep these differences in mind. For example, when using AI phone systems or chatbots, it is good to also offer human help. This helps patients, especially women, who prefer talking to a person.
Education plays a big role in how people see and trust AI. People with higher education usually understand AI better and are more likely to trust it. The Pew Research Center found that more educated Americans support using AI in healthcare more than those with less education.
This might be because educated people know how AI works and can judge information better. They often have more experience with AI outside healthcare, making them more open to AI in medical tools.
Healthcare providers can help by creating patient education about AI. Simple brochures, videos, or talking during appointments can explain how AI helps make care better and keeps privacy safe. This can reduce worries for patients with less education.
To use AI well in healthcare, providers must address patients' trust and their concerns about risk. The Brauner et al. study found that beliefs about benefits and risks explain most of why people accept or reject AI: perceived benefits such as better accuracy or faster diagnosis encourage acceptance, while perceived risks such as mistakes, privacy problems, or losing human contact drive rejection.
For example, about 40% of Americans believe AI can reduce medical errors, but 27% think it might cause more mistakes. Also, 57% worry AI will hurt the relationship between patients and doctors. Many fear machines will replace personal care. About 37% worry AI might harm the safety of medical records.
These concerns differ by groups. Older people and women care more about risks. Younger people and men focus more on benefits. Education and knowing about AI also change these feelings.
Healthcare managers should help patients understand that AI supports doctors and does not replace them. Explaining how data is kept safe and how doctors still check AI decisions can ease worries about security and losing personal care.
AI can also help with work tasks in healthcare offices. For example, AI can manage phone calls, appointment schedules, and answering services. Companies like Simbo AI make AI phone systems that can answer common questions, confirm appointments, and handle simple calls. This helps reduce work for office staff while still helping patients.
Using AI for these tasks can reduce the workload on office staff while keeping patients served. Still, some patients may feel uncomfortable using AI systems, and some groups worry that machines are impersonal or hard to use.
To address this, healthcare offices should give patients an easy way to reach a real person whenever AI cannot help, and should clearly state that AI tools support, rather than replace, human contact. This will help patients feel more comfortable using AI.
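The call-handling pattern described here can be sketched in a few lines: answer routine requests automatically, but escalate anything uncertain to a staff member. This is a minimal illustrative sketch, not Simbo AI's actual system; every name in it (`classify_intent`, `route_call`, `INTENT_HANDLERS`, the keyword rules, the 0.7 threshold) is a hypothetical stand-in for a real intent classifier.

```python
# Hypothetical sketch of an AI phone-answering flow with a human fallback.
# A production system would use a real speech/intent model, not keywords.

INTENT_HANDLERS = {
    "confirm_appointment": "Your appointment is confirmed.",
    "office_hours": "We are open Monday through Friday, 8am to 5pm.",
}

def classify_intent(utterance: str) -> tuple[str, float]:
    """Toy keyword-based intent classifier returning (intent, confidence)."""
    text = utterance.lower()
    if "appointment" in text and "confirm" in text:
        return "confirm_appointment", 0.9
    if "hours" in text or "open" in text:
        return "office_hours", 0.8
    return "unknown", 0.0

def route_call(utterance: str, threshold: float = 0.7) -> str:
    """Answer simple questions; escalate everything else to a person."""
    intent, confidence = classify_intent(utterance)
    if confidence >= threshold and intent in INTENT_HANDLERS:
        return INTENT_HANDLERS[intent]
    # Low confidence or unrecognized request: hand off to staff, so
    # patients who prefer human contact are never stuck with the bot.
    return "Transferring you to a staff member now."

print(route_call("Can you confirm my appointment?"))
print(route_call("I need to talk about my test results."))
```

The key design choice is the explicit fallback branch: rather than forcing the AI to guess, any call it cannot confidently handle goes straight to a human, which directly addresses the discomfort some patient groups report.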
Staff training is important too. Staff should explain AI tools to patients during check-in or on websites. Messages can be changed to match patients’ needs, like reassuring older adults that AI systems are safe and easy to use.
Knowing how demographics affect AI acceptance can help healthcare workers introduce AI more effectively. Younger adults, men, and patients with more education tend to focus on AI's benefits, while older adults and women weigh its risks more heavily; education and familiarity with AI also shape these attitudes.
Research, such as a review led by Sage Kelly, supports the finding that trust, perceived usefulness, and positive attitudes encourage AI acceptance across many industries, including healthcare. Being transparent about how AI works, making tools easy to use, and demonstrating real benefits can make people more willing to adopt them.
By paying attention to these demographic differences and using AI carefully, healthcare managers and IT staff can make work smoother, reduce patient wait times, and improve care, while keeping patients’ trust.
Key findings from the Pew Research Center survey and related research include the following:
60% of U.S. adults report feeling uncomfortable if their healthcare provider used AI for diagnosis and treatment recommendations, while 39% said they would be comfortable.
Only 38% believe AI would improve health outcomes by diagnosing diseases and recommending treatments, 33% think it would worsen outcomes, and 27% see little to no difference.
40% of Americans think AI use in healthcare would reduce mistakes made by providers, whereas 27% believe it would increase mistakes, and 31% expect no significant change.
Among those who recognize racial and ethnic bias as an issue, 51% believe AI would help reduce this bias, 15% think it would worsen it, and about one-third expect no change.
A majority, 57%, believe AI would deteriorate the personal connection between patients and providers, whereas only 13% think it would improve this relationship.
Men, younger adults, and individuals with higher education levels are more open to AI in healthcare, but even among these groups, around half or more still express discomfort.
Most Americans (65%) would want AI used for skin cancer screening, viewing it as a medical advance, while fewer are comfortable with AI-driven surgery robots, pain management AI, or mental health chatbots.
About 40% would want AI robots used in their surgery, 59% would not; those familiar with these robots largely see them as a medical advance, whereas lack of familiarity leads to greater rejection.
79% of U.S. adults would not want to use AI chatbots for mental health support, with concerns about their standalone effectiveness; 46% say these chatbots should only supplement therapist care.
37% believe AI use in health and medicine would worsen health record security, while 22% think it would improve security, indicating significant public concern about data privacy in AI applications.