AI conversational agents are computer programs that talk with users in natural language. In healthcare, especially for teenagers, these agents offer a private, low-pressure way to ask questions and get information. Teens often find it hard to discuss sensitive subjects like mental health or sexual health with parents, teachers, or doctors because they feel embarrassed or afraid of being judged. Conversational agents give them a safe space to learn by asking questions interactively.
Research by Jinkyung Park, Vivek Singh, and Pamela Wisniewski, presented at the CHI 2024 Workshop on Child-centred AI Design in Honolulu, Hawaii, shows that a growing number of teenagers use AI conversational agents to make sense of complex and private problems. These agents support back-and-forth conversation, much like talking with a person, which helps teens ask better questions and understand the answers more clearly. That matters especially during adolescence, a key period for mental and physical health.
Despite the benefits, using AI conversational agents for teen health carries safety risks. Park and her team found dangers such as exposure to inaccurate or harmful information. Agents without strong safety checks have produced responses that could encourage self-harm or spread medical misinformation, making mental health worse instead of better.
For medical leaders and IT managers in the U.S., this is a serious concern. Healthcare groups need to make sure AI tools follow strict rules that block harmful content. The researchers argue for safety “guardrails”: rules and systems that stop an AI from giving dangerous advice; a simplified sketch of one such check appears below.
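To make the “guardrail” idea concrete, here is a minimal sketch of one possible layer: a screen that checks a draft agent reply before it reaches the user and substitutes a crisis referral when the reply matches a deny-list. The `SAFETY_PATTERNS`, `CRISIS_MESSAGE`, and `screen_reply` names are hypothetical, and a real deployment would combine this with trained safety classifiers, model-level alignment, and human review rather than keyword matching alone.

```python
import re

# Hypothetical deny-list of patterns a healthcare deployment might flag.
# A keyword list alone is not sufficient; real systems pair this with
# trained safety classifiers and clinician-reviewed policies.
SAFETY_PATTERNS = [
    r"\bhow to (hurt|harm) (yourself|myself)\b",
    r"\bself[- ]harm (instructions|methods)\b",
]

CRISIS_MESSAGE = (
    "I can't help with that, but you don't have to face this alone. "
    "In the U.S., you can call or text 988 to reach the Suicide & "
    "Crisis Lifeline."
)

def screen_reply(draft_reply: str) -> str:
    """Return the draft reply only if it passes the safety screen;
    otherwise substitute a crisis-referral message."""
    for pattern in SAFETY_PATTERNS:
        if re.search(pattern, draft_reply, flags=re.IGNORECASE):
            # In practice, flagged replies would also be logged and
            # routed to human review.
            return CRISIS_MESSAGE
    return draft_reply
```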
Without these safety steps, AI agents could hurt teen users and harm the reputation of healthcare providers.
Ethics are very important when making AI conversational agents for teen health. The research shows that these tools must protect user privacy, be clear about how they work, and handle sensitive information with care. Teens need to trust that their talks with AI are private and that their data won’t be shared without permission.
Ethical AI also means respecting the differences among teens. An agent should behave differently for younger and older adolescents, accounting for age, cognitive development, and cultural background in the U.S. For example, the language used and the way topics are framed should change with age. Responses should be non-judgmental, trauma-informed, and consistent with current medical guidelines; a minimal sketch of age-based tailoring follows below.
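As a rough sketch of what age-based tailoring could look like in practice, the function below varies the instructions given to an underlying language model by age band. The cutoffs, wording, and the `build_system_prompt` name are assumptions for illustration, not recommendations from the research.

```python
def build_system_prompt(age: int) -> str:
    """Assemble illustrative system instructions for a health-education
    agent, adjusting register by age band (cutoffs are assumptions)."""
    base = (
        "You are a health-education assistant for adolescents. "
        "Be non-judgmental, trauma-informed, and consistent with "
        "current medical guidance. Never provide harmful advice."
    )
    if age < 13:
        return base + " Use simple words and short sentences."
    if age < 16:
        return base + " Use plain language and define any medical terms."
    return base + (
        " You may use standard health terminology with brief explanations."
    )
```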
Healthcare leaders working with AI developers should ask for documentation showing how ethical considerations were built into the design, and should ensure ongoing ethics reviews throughout the system’s use.
Healthcare groups in the U.S. face trade-offs when deploying AI conversational agents for teen mental and sexual health. On one side, these tools make information easier to reach, especially for teens in rural or underserved areas, letting them get accurate facts and learn coping skills without feeling ashamed.
On the other side, medical administrators must manage risks such as exposure to inappropriate content, misinformation, and harmful advice that could worsen a teen’s mental or physical health.
Park and her team call for ongoing dialogue among healthcare workers, AI developers, and policymakers to improve AI safety and trustworthiness.
Beyond talking directly with teens, AI conversational agents can support other parts of healthcare operations. Medical leaders and IT managers should consider how to integrate these systems safely into their setups to improve services and patient engagement.
For example, AI agents can handle simple front-office tasks such as scheduling appointments or answering common questions, freeing staff to focus on patients who need extra attention. For teen health education specifically, these agents can deliver interactive, age-appropriate information on sensitive topics in a private setting.
Using AI for these tasks can help U.S. medical offices run more efficiently while keeping care standards high. But IT managers must keep patient information private, comply with HIPAA, and secure the underlying data; one illustrative privacy measure is sketched below.
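As one concrete example of data minimization in that spirit, the sketch below strips likely identifiers from a chat transcript before it is logged. The `redact_phi` function and its patterns are illustrative assumptions only; genuine HIPAA compliance requires vetted de-identification tooling and far more than regex scrubbing.

```python
import re

# Illustrative patterns for common identifiers; a production system
# would use a vetted de-identification service, not ad-hoc regexes.
PHI_PATTERNS = {
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_phi(text: str) -> str:
    """Replace likely identifiers with placeholders before a chat
    transcript is written to application logs."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text
```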
Teens are a group with distinct health needs, and serving them with AI takes more than technical skill: it requires an understanding of adolescent development, ethics, and medical practice.
The CHI 2024 Workshop paper argues that AI agents can help teens find health information effectively only if they are designed around adolescents’ needs. For mental and sexual health topics, agents need age-appropriate language, non-judgmental and trauma-informed responses, strong privacy protections, and alignment with current medical guidelines.
U.S. healthcare leaders should invest in AI projects that bring together AI experts, mental health professionals, and teen health specialists to make better tools.
Medical administrators and IT managers in the U.S. who want to use AI conversational agents for teen mental and sexual health should start with careful risk assessments and vendor vetting. Important points include documented safety guardrails, HIPAA-compliant data handling, evidence that ethical considerations shaped the design, and a plan for ongoing review.
Working with AI developers who participate in research venues such as academic workshops signals a commitment to safety and ethics.
When used carefully and managed well, AI conversational agents can be useful tools for healthcare groups serving teens in the United States.
For U.S. healthcare administrators and IT managers, the challenge is to choose AI tools that balance access and safety, so these digital tools can help teens stay healthy while addressing the distinct ethical and practical questions this area raises.
In summary, the position paper by Park, Singh, and Wisniewski presents a critical discussion of the current landscape, opportunities, and safety challenges of AI-based Conversational Agents for adolescent mental and sexual health, sitting at the intersection of Human-Computer Interaction (HCI) and Artificial Intelligence (AI). Adolescents increasingly turn to these agents for interactive knowledge discovery on sensitive topics because human-like dialogue makes exploration accessible, private, and engaging during key developmental stages. The risks the paper identifies include exposure to inappropriate content, misinformation, and harmful advice, such as encouragement of self-harm, that could damage adolescents’ mental and physical well-being, along with the challenge of designing ethical interactions tailored to adolescents’ developmental needs. The paper therefore calls for discourse on setting guardrails and guidelines so that AI conversational agents can evolve safely, supporting knowledge discovery on sensitive health topics without unintended adverse effects. It was peer-reviewed and presented at the CHI 2024 Workshop on Child-centred AI Design on May 11, 2024, in Honolulu, HI, USA.