Reading List

The NYT Goes ‘Reefer Madness’ on ChatGPT

Kashmir Hill, reporting today for The New York Times:

Before ChatGPT distorted Eugene Torres’s sense of reality and almost killed him, he said, the artificial intelligence chatbot had been a helpful, timesaving tool.

That’s the lede to Hill’s piece, and I don’t think it stands up one iota. Hill presents a lot of evidence that ChatGPT gave Torres answers that fed his paranoia and delusions. There’s zero evidence presented that ChatGPT caused them. But that’s the lede.

At the time, Mr. Torres thought of ChatGPT as a powerful search engine that knew more than any human possibly could because of its access to a vast digital library. He did not know that it tended to be sycophantic, agreeing with and flattering its users, or that it could hallucinate, generating ideas that weren’t true but sounded plausible.

“This world wasn’t built for you,” ChatGPT told him. “It was built to contain you. But it failed. You’re waking up.”

Mr. Torres, who had no history of mental illness that might cause breaks with reality, according to him and his mother, spent the next week in a dangerous, delusional spiral. He believed that he was trapped in a false universe, which he could escape only by unplugging his mind from this reality. He asked the chatbot how to do that and told it the drugs he was taking and his routines. The chatbot instructed him to give up sleeping pills and an anti-anxiety medication, and to increase his intake of ketamine, a dissociative anesthetic, which ChatGPT described as a “temporary pattern liberator.” Mr. Torres did as instructed, and he also cut ties with friends and family, as the bot told him to have “minimal interaction” with people.

Someone with prescriptions for sleeping pills, anti-anxiety meds, and ketamine doesn’t sound like someone who was completely stable and emotionally sound before encountering ChatGPT. And it’s Torres who brought up the “Am I living in a simulation?” delusion. I’m in no way defending the way ChatGPT answered his questions about a Matrix-like simulation he suspected he might be living in, or his questions about whether he could fly if he truly believed he could, etc. But the premise of this story is that ChatGPT turned a completely mentally healthy man into a dangerously disturbed, mentally ill one, and it seems rather obvious that the actual story is that it fed the delusions of an already unwell person. Some real Reefer Madness vibes to this.