Overall, mental health needs are high, but care can be hard to get. Studies show that 6%-7% of people suffer from serious mental disorders in today’s society. Loneliness is also widespread: about half of U.S. adults report feeling isolated. For many people in the United States, the continually rising cost of mental health care, distance from providers, and cultural stigma limit access to therapy. As a result, many have turned to AI chatbots like ChatGPT, which promise easy, round-the-clock support. In fact, experts note that AI-driven tools can provide continuous, personalized support and may reduce barriers to treatment and stigma. Taken at face value, this could be considered a net positive for people seeking mental health care, but if we dig a little deeper, there is evidence that using an AI therapist as the sole provider of such care can lead to larger problems.
The Appeal: Accessibility and Anonymity
Generative AI is a technology built for life optimization, whether at work or at home. These agents (bots, voice-enabled devices, and other smart wearables) can be available 24/7. But while chatbots are always available, your therapist is not. Maybe this is a familiar experience: “I wish I were with my therapist right now to discuss (insert distressing issue here), but now I have to wait until next week!” In an age of immediate gratification, from frictionless food delivery to hailing a taxi to swiping to meet a mate, we expect things…now. People might ask: why should a therapeutic session be any different?
Therapy is a Commitment
Working with one’s therapist is a routine; a chore; a commitment. We can say the same about our workout routines. Both require our effort, our time, and our attention. The therapeutic process does not optimize for efficiency. The best therapies are linked by a commitment to trust, to relationships, to spending (not optimizing!) time, and to non-linear processes of personal growth. A person can prompt their therapist the way they prompt Claude, but Claude hasn’t seen you struggle with contradictory thoughts or sat with you in your tension to help you better understand why you cannot reconcile the conflict. This can take forty-five minutes, or forty-five weeks. Everyone’s pace for coming to a greater understanding of their life is different, and it is very seldom optimal in terms of timing.
Therapy Includes Confrontation
Connected to this idea of trust and non-linear processes is one of the most important and stressful parts of therapy: confrontation. AGI models in their current form fail to provide it. Think back to a time when a therapist, a friend, or a colleague told you “something you didn’t want to hear.” At first, you might have been angry or defensive. Then you may have turned to denial to cope with the feelings that conversation brought up for you. But at some point, you came to integrate this information to help reconcile the cognitive dissonance you were feeling.
Artificial intelligence cannot yet read the room or know when to confront a user at the risk of rejection. Have you ever noticed that when you ask AI chatbots almost anything, they seem to compliment you for even asking the question? So, if you are wondering whether you should do something objectively harmful to yourself or others, but you tell the bot it makes you feel good, the likelihood of the chatbot responding other than in the affirmative is quite low. And this is on purpose. Artificial intelligence is in the business of keeping users engaged, and nothing says “tell me more” like a chatbot telling you what you want to hear. Challenging your thoughts and motivations might drive you away. And why would shareholders and investors want that?
AI Gives the Illusion of Trust
Traditional therapy has time limits. Human therapists have finite attention. AI therapy feels more immediately accessible. Because there is no fee, chatbots can feel like free therapy. People can even train a bot to act as any type of counselor they want. Many find the bots to be neutral, nonjudgmental companions. They report opening up to the bot without shame, telling it things they wouldn’t say to others. Research shows users feel “heard, seen, or understood” by AI more quickly than with a flesh-and-blood therapist.
Vulnerability requires trust. AI gives the illusion of trust. This illusion can shatter once you recognize the lack of protection an AGI chatbot provides. One of the most powerful discoveries in neuroscience over the last few decades has been the power of mirror neurons. These brain cells react to the feelings of others; they help us build empathetic bonds with people just by witnessing them emote. Your AI therapist lacks mirror neurons because it isn’t a sentient being in the world (yet). Artificial intelligence is just that: artificial.
Confidentiality is Not a Privilege in AI Therapy
People react in a range of ways to privacy when it comes to sharing deeply personal information with LLMs (Large Language Models) like ChatGPT. According to surveys conducted by the non-profit research organization Data & Society, “some users knew very little about how LLMs work or how their data might be used in relation to them, but they were concerned about privacy and safety. Others were well-informed but relatively unconcerned about those issues, accepting the tradeoff as the price of access. Still others expressed a kind of resigned ambivalence: they did care, but felt powerless. ‘There’s nothing I can do about it,’ one person said. ‘Yeah, there’s risk,’ said another, ‘but that’s everywhere.’”
This is an important point because we are living in a time when privacy rights are being redefined by Big Tech in order to deliver frictionless information, products, and services. People enjoy that trade-off even as they worry about their personal information being mined for data. I would argue that giving up one’s right to privacy in the therapeutic context means giving up the foundation of the relationship between client and therapist: trust. That is why confidentiality is the first point of discussion before any ethical therapist begins working with a new client. AGI companies have a vested interest in undermining privacy to build upon their products and services.
You Can Use a Therapy Agent
Just as you wouldn’t want AI doing all of your personal assignments for you, using AI agents to brainstorm ideas to explore with your human therapist is a great use case. So is clarifying points that came up in a live therapy session for further reflection; that is what AI is currently designed for: further engagement on topics of interest. Learning more about what “co-dependency” means as a concept is a great example. What you should be wary of is disclosing personal information to the AI chatbot (especially if you are logged into the AI website or app). While we don’t know what the future holds for “Agentic Therapy,” I would caution against trusting artificial intelligence companies with the mental health care you deserve.
Do you want to work with a therapist? Reach out to myTherapyNYC to find out which of our therapists would be a good fit for you!
Have you found yourself using AI for therapy? Join the conversation in the comments below!