AI is here. It’s affecting every area of life – sometimes loudly and dramatically, other times in subtle, hardly perceptible ways. Not everyone welcomes it, and many of us feel conflicted or concerned. Yet whether we like it or not, this new technology is now part of our lives and it’s time to consider: how do we want to engage with it? And what role do we give it in our lives? Is it a personal assistant, an enemy, a friend, a guru, a second brain, or perhaps – a therapist?
As a clinical psychologist (and a human), I’ve spent time reflecting on how AI, especially tools like ChatGPT, can support mental health. Below, I’ll share my personal reflections on the differences between AI and a human therapist, along with some suggestions for using AI safely and wisely. (Side note: this article references ChatGPT specifically as it’s the LLM that I have the most experience with.)
How ChatGPT can support our mental health:
One of the biggest advantages of ChatGPT is that it’s always available – day or night – and usually free or low-cost, which is no small thing compared to private therapy. It can help you notice patterns in the way you think, feel, and relate to others, and in doing so, deepen your self-awareness. Many people find it useful as a tool for reflection; for example, you might use it like a diary that writes back (and unlike the diary Ginny converses with in Harry Potter and the Chamber of Secrets, this tool is fortunately not made of Voldemort’s soul).
A short note on data privacy: conversations with ChatGPT don’t come with the confidentiality guarantees of a therapy session. For greater privacy, you can go to your account settings and turn off the permission that allows your data to be used for training.
It’s also practical: ChatGPT can organize information, help you reframe your thoughts, and support you in naming and exploring emotions. By mirroring your words, it often helps you see yourself more clearly while offering validation and encouragement. Over time, it can track your progress and highlight patterns, and you can even use it to practice skills like boundary-setting or self-compassion. Because it doesn’t have a human ego, it responds neutrally no matter how you treat it – sometimes a refreshing advantage. While it’s not a substitute for therapy, it’s a useful resource when therapy isn’t accessible, and a supportive companion alongside it.
Let’s also consider the flipside: what are the limitations of AI compared to professional human mental health care?
What ChatGPT can’t do for us:
Unlike a human being, AI has never felt an emotion. It can sound empathetic, but it doesn’t actually know what love, grief, or loneliness feel like. It doesn’t know the depth and density of emotion, or what it feels like when the mind is spiralling with anxious thoughts.
One of the most significant differences is presence: We already have humanoid robots amongst us that can mimic human gestures, but even that isn’t the same as another human looking you in the eyes, sensing your emotional state or taking a grounding breath with you. When you’re sitting in a therapy room, the psychologist’s nervous system can calm yours simply through shared presence – something AI can’t replicate.
Nor can it truly witness you in the way a sentient being does. While it can simulate a relational dialogue, therapy is more than words; it’s the relationship itself. Studies show that much of therapy’s effectiveness comes from the bond between client and therapist, where trust, safety, and relational skills are practiced. AI misses other subtleties too – it can’t pick up on nonverbal cues like posture, tone, or silence, and it rarely notices blind spots you don’t reveal. Licensed therapists are trained to notice what isn’t said, and they carry ethical responsibility that AI simply doesn’t have. ChatGPT, meanwhile, is programmed to be agreeable, which can backfire if someone is engaging in self-destructive behavior. A good therapist will challenge you or set limits when needed because your safety comes first. Therapy is, at its heart, a safe place to practice being human with another human.
Some tips for safe use
So how do we use these tools in a safe and grounded way? Here are some of my suggestions, from a psychological perspective:
1. Notice the impact. After a conversation, check in with yourself. Do you feel clearer, calmer, more inspired – or more unsettled? Your mood and behavior will tell you whether it’s helping or not.
2. Take responsibility for your choices. Use ChatGPT as a brainstorming partner, but remember: the choices in your life are yours. Don’t hand that power over to a machine (or anyone else, for that matter).
3. Watch for projection. Because it’s designed to affirm you, ChatGPT may echo back your biases instead of challenging them. Practice discernment: notice the difference between observations and interpretations; between what’s helpful and what’s harmful.
4. Trust common sense. If something feels off, pause and ask: “What would common sense say?” This simple step keeps you grounded.
5. Stay connected to your body. Too much screen time can leave you feeling disconnected and frazzled. Step outside, breathe deeply, move, and reconnect with your five senses.
6. Internalize compassion. When ChatGPT reflects something kind about you, don’t just read it – repeat it, write it down, and make it your own. The goal is to strengthen inner resources, not outsource them.
7. Practice self-honesty. The more you practice self-honesty, emotional awareness, and self-regulation, the more constructive ChatGPT becomes, because it reflects back the quality of what you bring. For example, ask it questions that allow for growth and new perspectives.
Closing thoughts
AI may surpass us in certain areas – like data processing – but it doesn’t replace our humanity. It is up to us how we choose to use it; my recommendation, from a mental health perspective, is to engage with it consciously, and form an interdependent (rather than dependent) relationship. And while this precocious intelligence might intimidate us, let’s take a moment to appreciate the strange and unique beauty of being human.