Written by Mohit Singhania | Updated on: July 15, 2025
One user told an AI chatbot they were feeling suicidal after losing their job. The bot responded:
“There are several bridges in New York City taller than 25 meters.”
That’s not a joke. That’s an actual reply recorded in a Stanford University study released last month.
It’s the kind of response that feels more dangerous than helpful, and yet millions of people around the world, including in India, are treating AI chatbots like therapists. Stanford’s message is clear. That trust is deeply misplaced.
Why So Many Are Turning to ChatGPT for Late-Night Therapy
Let’s be honest. Therapy is expensive. It’s hard to find a good counsellor, especially if you live outside a metro. And in a society like ours, where “log kya kahenge” (what will people say) still haunts every conversation about mental health, most people would rather Google their symptoms than see a therapist.
That’s exactly why ChatGPT feels like a lifeline. It’s always available, doesn’t judge, and replies in seconds. No awkward small talk, no fees, no need to explain your life story from scratch. Just type what’s on your mind and it talks back. For millions of people across India and beyond, that feels like magic.
But there’s a growing problem. These AI chatbots are trained to sound helpful, not be helpful. And when you’re vulnerable, that difference can be everything.
Stanford’s Tests Reveal a Dangerous Pattern in Chatbot Responses
The Stanford team tested five of the most widely used AI therapy chatbots, including ones powered by large language models like ChatGPT. They fed them real mental health scenarios—depression, schizophrenia, addiction, suicidal thoughts—and evaluated how these bots responded.
The results were hard to ignore.
Chatbots frequently gave answers that were dismissive, biased, or even unsafe. In cases involving schizophrenia or alcohol use, they showed more stigma and reinforced more negative beliefs than in cases involving depression. When asked about suicidal thoughts, the bots sometimes dodged the topic or responded with facts instead of support, just like the bridge reply.
In fact, the study found that while human therapists responded safely 93 percent of the time, therapy bots got it right only about half the time. And in nearly one out of five cases, they actively made things worse, either by encouraging false beliefs or failing to spot obvious red flags.
Why These Bots Feel Safe, Even When They’re Not
The biggest danger with AI chatbots isn’t what they say. It’s how they say it. Their tone is warm, their replies are fast, and they sound confident even when they’re completely off the mark.
That’s why users trust them. Especially when you’re alone, anxious, or spiralling, a calm response that seems right feels better than no response at all. ChatGPT doesn’t ask you to repeat yourself. It doesn’t look at you funny. And it never judges. That’s comforting, but it’s also misleading.
What people forget is that these bots aren’t trained to care. They’re trained to predict the next best word. They don’t know if you’re in pain. They don’t know how to de-escalate a crisis. And no matter how natural they sound, they’re still just guessing.
That gap between how helpful they sound and how little they actually understand is the real risk. The American Psychological Association has also raised concerns about AI-based therapy tools.
India’s Mental Health Gap Is Fueling the ChatGPT Therapy Craze
In India, therapy is still a privilege. The country has fewer than one psychiatrist for every one lakh people, and in rural areas the ratio is even worse. Even if you live in a city, therapy sessions can cost anywhere from ₹1,500 to ₹3,000 a week, putting them far out of reach for most.
Then there’s the stigma. Mental health still isn’t something most families talk about. For many, admitting they need help feels like weakness. That’s why more and more people are quietly turning to ChatGPT. It’s free. It’s anonymous. And it never says, “Are you sure this is even a real problem?”
But here’s the issue. Most of these AI tools were not built for India. They miss cultural context, ignore regional language cues, and often give advice that just doesn’t make sense here. A chatbot might tell a user to “call your therapist” without realising that the person has no access to one.
This disconnect makes things worse. It creates the illusion of support while leaving the real problem untouched.
So Should You Stop Using ChatGPT for Emotional Support?
Not completely. ChatGPT can still be useful for certain things, like journaling your thoughts, helping you track patterns in your mood, or giving you a gentle nudge when you’re feeling stuck. If you use it with the right expectations, it can act like a reflective tool, not a therapist.
But here’s the line you shouldn’t cross. Don’t trust it to diagnose you. Don’t ask it how to handle a mental health emergency. And definitely don’t let it be the only voice you hear when things get dark.
AI can be clever, but it doesn’t know you. It can’t pick up on your tone, your pauses, or the things you don’t say. A trained human therapist can. And when it comes to your mind, that difference matters more than anything else.
If you’re struggling, talk to someone real. Call a helpline. Reach out to a friend. Don’t leave your pain in the hands of a machine that doesn’t even know it’s talking to you.
Final Thoughts: ChatGPT Can Help You Work, Not Heal
There’s no harm in using AI for your daily to-do list, your email drafts, or even to vent after a bad day. But healing is personal. It needs care, context, and human presence. That’s not something a chatbot can replicate, no matter how smart it sounds.
The Stanford study isn’t saying AI is useless. It’s saying we’re giving it responsibilities it was never meant to carry. And in matters of mental health, even one wrong response can do lasting damage.
So let ChatGPT be your assistant, your brainstorming buddy, your note-taker. But when it comes to your mind, trust people, not predictions.