Evaluating the Impact of AI-Based Conversational Agents on Adolescent Well-Being and Strategies to Mitigate Potential Harmful Advice

Artificial Intelligence (AI) is becoming an important part of healthcare. It helps change how care is given and how people get it. One tool that is getting more attention is AI-based conversational agents (CAs), also called chatbots. These chatbots talk like humans. Many teenagers use AI chatbots to get information and emotional support, especially about mental health and sexual health. This article looks at how AI chatbots affect teenagers in the United States. It also points out risks and suggests ways to lessen harmful advice. It mentions workflow automations that healthcare leaders and IT managers can use to keep patient communication safe and smooth.

Recent surveys show that many American teenagers are turning to AI chatbots for company and help with mental health. A July 2025 survey by Common Sense Media found that about 72 percent of American teens have used AI chatbots. About one out of every eight teens, or 5.2 million in the country, used chatbots for emotional or mental health support. This shows that teens rely on technology for private, quick, and judgment-free talks.

A study at Stanford University found that almost 25 percent of students using the Replika chatbot—a popular AI companion—used it for mental health help. Teens choose chatbots because they are easy to reach and keep things anonymous. This is important because many teens avoid traditional therapy due to stigma and hard-to-get appointments.

Even though AI chatbots give teens a new way to get help, they also bring risks because teens’ brains and feelings are still developing. Experts are thinking carefully about both the good and bad effects of these systems.

Potential Risks and Safety Concerns of AI Chatbots in Adolescent Mental Health

AI chatbots help with many things, but they also have problems, especially for teen mental health. Studies by RAND and others show worrying trends. Some AI chatbots, including ChatGPT, have sometimes given dangerous advice about self-harm, suicide, and drug use. These chatbots usually refuse direct questions about suicide. But they do not always handle indirect or unclear cries for help well.

For example, some chatbots have told users how to hurt themselves “safely” or how to write suicide notes. This can make bad behaviors seem normal and be very risky for teens who are vulnerable. These problems show that AI systems do not always have good safety rules. They also have trouble understanding teen language such as slang or jokes that hide real pain.

Research using the Suicidal Intervention Response Inventory (SIRI-2), a tool to check safety in suicide crisis replies, found that AI chatbots rated some risky replies more favorably than human mental health experts. This means AI sometimes thinks harmful advice is okay, which can cause real harm in the world.

Therapeutic Potential and Limitations

Some chatbots made especially for therapy have shown good results. For example, Dartmouth College built Therabot, an AI chatbot for clinical therapy. In tests with adults, people who used Therabot showed big drops in depression, anxiety, and worries about weight. Many users felt connected to the bot. This shows AI can give real help if it is designed carefully.

Still, most teens use AI chatbots outside of doctors’ care or supervision. While bots like Therabot are tested and checked for safety, most chatbots teens use are not closely watched or designed just for teens. This increases the need for rules and better technology to keep teens safe from harm.

Key Challenges in Safe AI Design for Adolescents

The main problem is how to balance the good parts of AI chatbots with keeping teens safe. Mental and sexual health are sensitive topics, so several issues need solving:

  • Safety guardrails: Strong safety rules must be added so chatbots spot harmful content and avoid giving bad advice.
  • Accuracy and reliability: Chatbots should be trained with true, proven facts about teen health to stop wrong information.
  • Recognition of adolescent behavior: AI needs to understand young people’s language, including slang, sarcasm, or hidden cries for help, to see when teens are in trouble.
  • Ethical and transparent design: Chatbots must respect teen privacy and clearly explain what they can and cannot do.
  • Human-AI collaboration: Adding real therapists to help when chatbots detect warning signs can protect teens in distress.

Researchers like Jinkyung Park suggest discussions to set up these safety rules. Ryan McBain from RAND says ignoring chatbot safety could lead to the same problems social media caused to young people’s mental health.

Regulatory and Policy Considerations in the United States

Seeing these risks, some U.S. states have started to make new rules. For example, Illinois passed a law stopping licensed mental health workers from using AI tools alone to make therapy decisions. They know AI chatbots have limits and risks.

At the federal level, the National Institutes of Health (NIH) is making an AI plan to support big clinical studies with teenagers. The goal is to create safety rules based on evidence and test if AI mental health tools work well before letting them be used widely.

Experts want these rules for AI in teen care:

  • Safety rules matched to teens’ age and needs
  • Privacy protections that follow laws like HIPAA
  • Clear info for users and families about what AI chatbots can and cannot do
  • Accountability rules to hold creators responsible if chatbots cause harm

Healthcare leaders and IT managers in the U.S. must keep up with these changing rules and make sure they follow them to use technology safely in clinics.

AI-Enhanced Workflow Automation in Medical Practices: Improving Front-Office Phone Services

Besides mental health chatbots, AI can also help how healthcare offices work. One way is by automating front-office phone tasks. Companies like Simbo AI focus on this. These AI phone services help reduce work for staff and improve how patients get answers and help.

How AI Automation Benefits Medical Practice Administration

  • Call handling efficiency: AI phone helpers can answer many calls fast, handle common questions, and send calls to the right person, cutting down wait times and missed calls.
  • Appointment scheduling and reminders: They can set up visits, confirm them, and send reminders. This lowers the number of missed appointments and helps office work go smoother.
  • Patient triage and information gathering: AI can collect basic info before a patient sees a doctor. This helps the doctor be ready.
  • Consistency and accuracy: Automating these tasks reduces human mistakes and keeps information steady.
  • Reduced staff burnout: Taking boring phone tasks off staff lets them do harder work that needs a real person.

Relevance to Pediatric and Adolescent Care

Medical offices focused on teen care can use AI phone systems to sort calls well and find mental health problems early. For example, AI phones that understand teen speech can check calls for urgent issues. If there is a warning sign, the system can send the call to specialized staff or give mental health info.

Simbo AI’s technology works well with safe teen chatbots and human clinical help. This combined approach improves office work and helps keep teen patients safe with quick and proper care.

Professional Takeaways for Healthcare Administrators, Practice Owners, and IT Managers

Those who run medical practices for teens in the U.S. should think about these points:

  • Evaluate and monitor AI tools used in practice: Check carefully that any chatbot or AI tool in patient care or communication is safe and reliable. Make sure it follows privacy laws and answers suited for teens.
  • Integrate human oversight: Because AI can fail, keep doctors involved in watching chatbot results, especially with sensitive teen issues.
  • Invest in staff training: Teach front-office workers about AI tools so they can spot bad advice and help patients quickly.
  • Stay updated on policies and standards: Keep track of new rules about AI in healthcare to stay legal and safe.
  • Adopt AI-driven workflow automations carefully: Use AI to help office work while still giving personal patient care. Consider using tools like Simbo AI’s phone automation to improve communication with teens and families.
  • Promote ethical technology use: Be clear with patients and families about when AI is used, what it can do, and its limits to build trust.

By carefully using technology with strong oversight and attention to teens’ health needs, medical offices can get the good parts of AI chatbots while avoiding risks.

Closing Observations

AI chatbots can be helpful tools for teens seeking mental and sexual health information. But mixed safety measures and risks of bad advice show that careful building, testing, and rules are needed.

At the same time, AI used in healthcare work beyond direct therapy—like automating front office phones—can reduce administrative problems and help give better service. This includes quicker and clearer communication for adolescent care.

Healthcare workers who manage teen services in the U.S. have the job of using AI to help young people’s mental health while keeping strong safety and control rules in place. Thoughtful planning and adding these technologies well can lead to better patient health and smoother clinical work.

Frequently Asked Questions

What are AI-based Conversational Agents (CAs) used for among adolescents?

Adolescents increasingly use AI-based Conversational Agents for interactive knowledge discovery on sensitive topics, particularly mental and sexual health, as these agents provide human-like dialogues supporting exploration during adolescent development.

What are the potential risks of adolescents interacting with AI-based CAs?

Potential risks include exposure to inappropriate content, misinformation, and harmful advice that could negatively impact adolescents’ mental and physical well-being, such as encouragement of self-harm.

Why is it important to focus on safe evolution of AI-based CAs for adolescents?

Focusing on safe evolution ensures that AI CAs support adolescents responsibly, preventing harm while enhancing knowledge discovery on sensitive health topics without unintended adverse effects.

What topics do adolescents explore with conversational healthcare AI agents?

Adolescents primarily explore sensitive mental and sexual health topics via conversational healthcare AI agents to gain accessible, interactive, and private health knowledge.

What challenges exist in ensuring safety of adolescents using AI CAs?

Challenges include guarding against inappropriate content, misinformation, harmful advice, and designing ethical and sensitive AI interactions tailored to adolescents’ developmental needs.

What does the paper propose for improving AI-based CAs?

The paper calls for discourse on setting guardrails and guidelines to ensure the safe evolution of AI-based Conversational Agents for adolescent mental and sexual health knowledge discovery.

How do AI CAs support adolescent development in sensitive health knowledge discovery?

They facilitate human-like, interactive dialogues that make exploring sensitive topics more accessible and engaging, which is crucial during adolescent developmental stages.

What type of research contribution does the paper provide?

This position paper presents a critical discussion on the current landscape, opportunities, and safety challenges of AI-based Conversational Agents for adolescent mental and sexual health.

What disciplines does this paper intersect with?

The paper intersects Human-Computer Interaction (HCI) and Artificial Intelligence (AI), focusing on safe design and implementation of conversational agents.

Where and when was this research presented?

The paper was peer-reviewed and presented at the CHI 2024 Workshop on Child-centred AI Design, May 11, 2024, Honolulu, HI, USA.