The therapeutic alliance, the trusting, collaborative relationship between a patient and their therapist, is a cornerstone of mental health treatment. That trust gives patients the safety to share their thoughts and feelings. When patients and therapists work well together, therapy tends to be more effective: a strong alliance keeps patients engaged with their treatment plans, which leads to better outcomes.
Usually, patients and therapists build this relationship face-to-face. But demand for mental health care now exceeds the supply of therapists, which has driven interest in digital tools that can extend care.
AI chatbots in mental health are computer programs that hold conversations with people. They can converse much like a human and, unlike human therapists, are available around the clock through phones or computers. They can address problems such as low mood, anxiety, or stress using methods drawn from approaches like cognitive behavioral therapy (CBT).
One example is Therabot, created by researchers at Dartmouth. It was tested in a clinical trial with 106 people who had depression, anxiety, or eating disorders, and it produced symptom improvements comparable to traditional therapy. This suggests AI chatbots can deliver meaningful mental health support.
The Therabot trial also showed that many users trusted the chatbot much as they would a person. About 75% of participants were not receiving other treatment at the time, yet they used Therabot for roughly six hours over eight weeks, the equivalent of about eight therapy sessions, indicating that people felt comfortable confiding in the chatbot.
Nicholas Jacobson, the lead researcher, said that many users treated Therabot like a friend, sharing personal thoughts without fear of being judged. This matters because people are often reluctant to disclose private problems to others.
Availability is another advantage. Human therapists cannot always be reached, but chatbots can respond at any hour, including late at night, when people often feel most vulnerable or upset.
Building trust through an AI chatbot also translated into measurable symptom improvement. The Therabot trial showed a 51% reduction in depression symptoms, a 31% reduction in anxiety symptoms, and a 19% reduction in eating disorder concerns.
These results suggest that AI can be a useful tool alongside regular mental health care. Because the chatbot responds in real time, patients can apply therapy skills in everyday situations as they arise.
Still, human oversight is essential. Clinicians must monitor AI responses to keep patients safe. The system can detect when someone is at risk, for example expressing thoughts of suicide, and then trigger emergency help. This safety feature is critical.
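The article does not describe how Therabot's risk detection is implemented, so the following is only a minimal sketch of how such a guardrail commonly works: each incoming message is screened before a normal reply is generated, and a high-risk match diverts the conversation to crisis resources and alerts a supervising clinician. The keyword list, function names, and messages are all illustrative, not Therabot's actual design; a production system would use a trained classifier rather than keyword matching.

```python
# Hypothetical sketch of a chatbot safety layer. Each user message is
# screened for crisis signals before the normal therapeutic reply is
# generated. Names, keywords, and messages are illustrative only.

RISK_PHRASES = {"suicide", "kill myself", "end my life", "hurt myself"}

def assess_risk(message: str) -> bool:
    """Crude stand-in for a trained risk classifier."""
    text = message.lower()
    return any(phrase in text for phrase in RISK_PHRASES)

def notify_clinician(message: str) -> None:
    """Alert the supervising clinician for human review."""
    print(f"[ALERT] High-risk message flagged: {message!r}")

def generate_reply(message: str) -> str:
    """Placeholder for the normal model-generated therapeutic reply."""
    return "Thank you for sharing. Can you tell me more about that?"

def respond(message: str) -> str:
    if assess_risk(message):
        # Escalation path: surface crisis resources instead of chatting on.
        notify_clinician(message)
        return ("It sounds like you may be in crisis. Please call or text 988 "
                "(the Suicide & Crisis Lifeline) or use the emergency button.")
    return generate_reply(message)

print(respond("I've been thinking about suicide lately."))
```

The key design point, consistent with the trial's supervised setup, is that the escalation path both changes what the user sees and pulls a human into the loop.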
Using AI in mental health also raises questions about trust, privacy, and ethics. Experts such as Marlene M. Maheu, PhD, note that many therapists may not realize how much AI is already embedded in their tools, from paperwork to decision support.
One concern is that the therapeutic alliance can suffer if patients don't know AI is being used or if their privacy is not protected. Frameworks such as the California Consumer Privacy Act (CCPA) and New York's SHIN-NY (Statewide Health Information Network for New York) rules require clear disclosure and consent around AI and data use.
If patients feel unsure about AI, they may hold back from sharing openly, which makes treatment less effective. Clinicians should therefore explain how AI is used and how patient data is protected.
AI also supports administrative work in mental health clinics. Systems such as Simbo AI automate phone workflows for patient scheduling and routine questions, reducing staff workload and helping patients keep their appointments.
AI can also help draft clinical notes by transcribing spoken words and coding the information, saving time so therapists can focus on patients. AI billing systems help ensure payments are accurate and on time.
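The article does not name a specific documentation product, so the sketch below only illustrates the typical data flow, from audio to transcript to a structured draft note with suggested billing codes, with both stages stubbed out. Every function and field name here is hypothetical; a real system would call a speech-to-text model and a summarization model, and a clinician would review the draft before signing.

```python
# Illustrative data flow for AI-assisted clinical notes:
# audio -> transcript -> draft note with suggested codes.
# Both stages are stubbed; all names here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class DraftNote:
    transcript: str
    summary: str
    suggested_codes: list[str] = field(default_factory=list)

def transcribe(audio_path: str) -> str:
    """Stand-in for a speech-to-text service."""
    return "Patient reports two weeks of low mood and poor sleep."

def draft_note(transcript: str) -> DraftNote:
    """Stand-in for a summarization/coding step. A clinician must
    review and sign the draft before it enters the record."""
    codes = []
    if "low mood" in transcript:
        codes.append("F32.9")  # ICD-10: major depressive disorder, unspecified
    return DraftNote(
        transcript=transcript,
        summary="Two weeks of depressed mood with insomnia.",
        suggested_codes=codes,
    )

note = draft_note(transcribe("session_audio.wav"))
print(note.suggested_codes)  # -> ['F32.9']
```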
Clinical decision support tools use AI to analyze patient data, flagging suicide risk or suggesting tailored treatments. These tools inform clinicians' decisions but do not replace human judgment.
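The article does not specify how such tools compute risk. As a toy illustration of the "support, not replace" principle, the sketch below combines a few chart-level signals into a score that is surfaced to the clinician rather than acted on automatically. The fields, weights, and threshold are made up; a real tool would use a validated model.

```python
# Toy decision-support scoring: combine record-level signals into a
# risk score that is shown to the clinician, never acted on by the
# system itself. Fields, weights, and threshold are illustrative.

def suicide_risk_score(record: dict) -> float:
    score = 0.0
    if record.get("prior_attempt"):
        score += 0.4
    if record.get("phq9_score", 0) >= 20:  # PHQ-9 of 20+ indicates severe depression
        score += 0.3
    if record.get("recent_crisis_contact"):
        score += 0.2
    return score

patient = {"prior_attempt": True, "phq9_score": 22, "recent_crisis_contact": False}
score = suicide_risk_score(patient)
if score >= 0.5:
    print(f"Risk score {score:.1f}: flag chart for clinician review")
```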
Mental health clinics in the U.S. must plan carefully before deploying AI chatbots. Administrators and IT managers need to ensure the AI is safe and respects patient privacy. Professional groups including the American Psychological Association (APA), the National Association of Social Workers (NASW), and the American Counseling Association (ACA) stress the importance of obtaining patient consent and explaining how AI tools work.
AI chatbots can help more people get care and keep them engaged, but they do not replace therapists. They are tools to support care, especially for people who cannot easily access traditional therapy.
Challenges remain. One is bias in AI algorithms: because AI learns from existing data, it can reproduce errors or inequities in that data unless it is carefully designed and supervised.
To manage this, AI outputs must be audited regularly and overseen by clinicians. Developers and clinicians should collaborate to improve the AI and keep it safe for all patient populations.
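The article does not prescribe an audit procedure. One minimal, common check is to compare an outcome metric, here a risk model's flag rate, across demographic subgroups and surface large gaps for clinician review. The data, group labels, and tolerance below are invented for illustration.

```python
# Minimal fairness check: compare a model's flag rate across groups
# and report subgroups that deviate notably from the mean. The data
# and the 5-point tolerance are illustrative only.

from collections import defaultdict

def flag_rates(records: list[dict]) -> dict[str, float]:
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for r in records:
        counts[r["group"]][0] += r["flagged"]
        counts[r["group"]][1] += 1
    return {g: flagged / total for g, (flagged, total) in counts.items()}

def audit(records: list[dict], tolerance: float = 0.05) -> list[str]:
    rates = flag_rates(records)
    mean = sum(rates.values()) / len(rates)
    return [f"group {g}: flag rate {r:.0%} vs. mean {mean:.0%}"
            for g, r in rates.items() if abs(r - mean) > tolerance]

# Toy data: the model flags group A three times as often as group B.
sample = ([{"group": "A", "flagged": 1}] * 30 + [{"group": "A", "flagged": 0}] * 70
          + [{"group": "B", "flagged": 1}] * 10 + [{"group": "B", "flagged": 0}] * 90)
for warning in audit(sample):
    print(warning)
```

A disparity flagged this way is a prompt for human investigation, not proof of bias; the point is to make routine checking cheap enough to actually happen.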
Clinics should ask about the AI they use, understand how it works, and follow ethical rules from professional groups and regulators. Patient rights and trust must always be protected in mental health care.
AI chatbots can build trust comparable to regular therapy by offering support that people use often and feel safe with. When deployed well, AI can help improve mental health, ease symptoms, and make clinic operations more efficient. For healthcare leaders in the U.S., the challenge is to balance adopting new technology with protecting ethics and privacy and keeping human therapists central to care.
Therabot is a generative AI-powered therapy chatbot designed to provide mental health support. The clinical trial showed significant symptom improvement: a 51% reduction in depression symptoms, 31% in anxiety, and 19% in eating disorder concerns, suggesting AI-assisted therapy can have clinically meaningful benefits comparable to traditional outpatient therapy.
Participants engaged with Therabot through a smartphone app by typing responses or initiating conversations about their feelings. The AI provided personalized, open-ended dialogue based on therapeutic best practices, enabling continuous, real-time support tailored to users’ mental health needs.
The trial focused on individuals diagnosed with major depressive disorder, generalized anxiety disorder, and eating disorders. These conditions were selected due to their prevalence and varying treatment challenges, with Therabot showing differential but significant symptom reductions across these diagnoses.
Therabot detects high-risk content during conversations and responds by prompting users to call emergency services or suicide prevention hotlines with easy access buttons. The system operates under the supervision of clinicians who can intervene if necessary to ensure patient safety.
Clinician oversight is critical to monitor AI responses, manage risks, and intervene in high-risk situations. While AI can offer immediate support, supervised deployment ensures safety, efficacy, and adherence to therapeutic best practices, preventing potential harms from autonomous AI operation in mental health.
Therapeutic alliance refers to the trust and collaboration between a patient and caregiver. The study found users formed a bond with Therabot similar to that with human therapists, reflected in frequent engagement and detailed personal disclosure, essential for successful therapy outcomes.
Therabot offers 24/7 availability beyond office hours, empowering patients to access support whenever needed. Its mobile format allows users to engage anywhere, facilitating continuous care and immediate coping strategies for real-life challenges, addressing provider shortages and access barriers.
AI therapy agents must meet rigorous standards that ensure responses align with evidence-based practices, maintain appropriate tone, and protect users from harmful advice. Continuous evaluation and clinical involvement are essential to address risks and validate therapeutic outcomes before widespread use.
No AI therapy agent is ready for fully autonomous operation due to risks in complex, high-risk scenarios. Future work requires better understanding of these risks, enhanced safety controls, integration with clinical care, and improved AI capabilities to ensure effective, safe mental health interventions.
Therabot users engaged for around six hours, equivalent to eight therapy sessions, achieving symptom reductions on par with gold-standard cognitive therapy. Patients reported high levels of trust and ongoing engagement, indicating that AI can complement person-to-person therapy effectively.