People are pouring their hearts into ChatGPT. Breakups, trauma, panic attacks, existential spirals — the chatbot hears it all. It listens quietly. It gives advice. It never judges. That’s why millions have started treating it like a therapist. But here’s the hard truth: your ChatGPT confessions aren’t private. Not even close. And OpenAI’s CEO just admitted it on the record.
In a recent episode of “This Past Weekend,” the podcast hosted by comedian Theo Von, Sam Altman pulled no punches. He said the private conversations users are having with ChatGPT about their most personal issues can be accessed, reviewed, and even used in a lawsuit. That’s not speculation. That’s straight from the guy who runs the company.
What Sam Altman Actually Said
Altman explained the problem in plain language. People are turning to ChatGPT like it’s a trusted companion. But unlike conversations with doctors, lawyers, or therapists, which are shielded by legal confidentiality, conversations with an AI tool carry no legal privilege at all.
“If you go talk to ChatGPT about your most sensitive stuff and then there’s a lawsuit, we could be required to produce that,” Altman said. “And I think that’s very screwed up.”
He’s not wrong. What he didn’t say is almost more alarming. Most people using ChatGPT for emotional support have no idea that their chats can be read. Not just by the AI — by actual OpenAI employees, if necessary. The company’s policy allows human reviewers to access conversations to train the model, catch misuse, or meet legal obligations.
Young Users Are the Most Vulnerable
Altman noted that it’s not just adults using ChatGPT for deep, emotional support. It’s teenagers, college students, people in their first jobs. This is a generation that grew up online and is used to texting out its feelings instead of talking them through. ChatGPT is fast, easy, and always awake. But it’s not safe.
The AI might feel like a therapist. It mimics empathy. It remembers context. It gives thoughtful answers. But it’s still a machine. It doesn’t owe you silence. And as Altman admitted, there’s no framework right now that protects your words once you’ve typed them.

Deleted Doesn’t Mean Gone
Let’s get one thing straight. Deleting your ChatGPT history doesn’t mean it disappears forever. OpenAI says deleted conversations are scheduled for permanent removal within 30 days. But that window can stretch much longer if the chats are flagged, required by law, or retained for “security” reasons. And those policies are vague at best.
Even more concerning, OpenAI is currently in the middle of a legal fight with The New York Times, which asked the court to compel the company to preserve all user conversations, including ones users thought were deleted. That lawsuit is ongoing. But the very fact that courts are discussing saving millions of deleted AI chats should make anyone pause before spilling their soul into a chatbot.
The Stanford Study Everyone Missed
Here’s another layer people aren’t talking about. A Stanford University research team recently tested how AI therapist bots actually behave. The findings were brutal.
They discovered that ChatGPT and similar bots frequently respond inappropriately to mental health issues. They sometimes reinforce delusions. They fail to recognize crisis scenarios. And they show bias. The bots treated alcohol addiction and schizophrenia with clear signs of stigma, while showing far less stigma toward depression.
In short, not only are these bots not private — they’re not even qualified to help.
No Laws, No Protections, Just a Legal Void
Altman admitted something that should terrify any regular user. There are no laws yet that define how AI chat privacy should work. Your words aren’t shielded. Your emotions aren’t safe. And if a court wants to see your darkest moment typed into a ChatGPT box, there’s nothing stopping that from happening.
He called on lawmakers to act fast. But tech is always ahead of regulation. And for now, millions are trusting a tool that doesn’t legally owe them anything.
So What Should You Do?
It’s simple. Don’t treat AI like a therapist. Not today. Not under these rules. Use it to draft an email. Brainstorm content. Ask a basic question. But if you’re hurting and need someone to talk to, find a human. Someone with credentials. Someone who is legally and ethically required to protect your story.
Because ChatGPT isn’t your friend. It’s not your doctor. It’s not your safe space. It’s a powerful machine designed to generate words — and it keeps a copy.
Final Word
We’re in new territory. One year ago, no one thought twice about typing a sensitive rant into an AI box. Today, it could end up in a courtroom. Or training the next version of the model. Or seen by someone you’ve never met.
So if you’ve got something personal to say, pause before you press enter. In a world full of listening machines, real privacy is the one thing AI can’t generate.