Reading List

The most recent articles from a list of feeds I subscribe to.

[Sponsor] Drata

Automate compliance. Streamline security. Manage risk. Drata delivers the world’s most advanced Trust Management platform.

DetailsPro

My thanks to DetailsPro for sponsoring last week at DF — including being a sponsor on The Talk Show Live From WWDC 2025. DetailsPro is a designer/developer tool that lets you design with SwiftUI anytime, anywhere — from iPhone, iPad, Vision Pro, and, of course, Mac.

With Liquid Glass, introduced at WWDC 2025, Apple has delivered its biggest design overhaul since iOS 7. DetailsPro is ready for it, enabling you to prototype new and updated interfaces fast. You can build real SwiftUI layouts directly on your iPhone — no code needed. Export clean SwiftUI code straight to Xcode when you’re ready.

While everyone else is still thinking about how to adapt to the Liquid Glass era, you can already be building. DetailsPro is free to use, with pro features if you need them — via subscription, or a one-time purchase.

★ The Talk Show Live From WWDC 2025

Recorded Tuesday evening in front of a live audience at The California Theatre in San Jose, this year’s show features special guests Joanna Stern and Nilay Patel, who join me to discuss Apple’s announcements at WWDC 2025.

Meta AI Users Are Inadvertently Sharing Their Private Chats With the World

Amanda Silberling, writing at TechCrunch:

When you ask the AI a question, you have the option of hitting a share button, which then directs you to a screen showing a preview of the post, which you can then publish. But some users appear blissfully unaware that they are sharing these text conversations, audio clips, and images publicly with the world.

When I woke up this morning, I did not expect to hear an audio recording of a man in a Southern accent asking, “Hey, Meta, why do some farts stink more than other farts?”

Flatulence-related inquiries are the least of Meta’s problems. On the Meta AI app, I have seen people ask for help with tax evasion, if their family members would be arrested for their proximity to white-collar crimes, or how to write a character reference letter for an employee facing legal troubles, with that person’s first and last name included. Others, like security expert Rachel Tobac, found examples of people’s home addresses and sensitive court details, among other private information.

Katie Notopoulos, writing at Business Insider (paywalled, alas, but here’s a News+ link):

I found Meta AI’s Discover feed depressing in a particular way — not just because some of the questions themselves were depressing. What seemed particularly dark was that some of these people seemed unaware of what they were sharing.

People’s real Instagram or Facebook handles are attached to their Meta AI posts. I was able to look up some of these people’s real-life profiles, although I felt icky doing so. I reached out to more than 20 people whose posts I’d come across in the feed to ask them about their experience; I heard back from one, who told me that he hadn’t intended to make his chat with the bot public. (He was asking for car repair advice.)

The NYT Goes ‘Reefer Madness’ on ChatGPT

Kashmir Hill, reporting today for The New York Times:

Before ChatGPT distorted Eugene Torres’s sense of reality and almost killed him, he said, the artificial intelligence chatbot had been a helpful, timesaving tool.

That’s the lede to Hill’s piece, and I don’t think it holds up one iota. Hill presents a lot of evidence that ChatGPT gave Torres answers that fed his paranoia and delusions. There’s zero evidence presented that ChatGPT caused them. But that’s the lede.

At the time, Mr. Torres thought of ChatGPT as a powerful search engine that knew more than any human possibly could because of its access to a vast digital library. He did not know that it tended to be sycophantic, agreeing with and flattering its users, or that it could hallucinate, generating ideas that weren’t true but sounded plausible.

“This world wasn’t built for you,” ChatGPT told him. “It was built to contain you. But it failed. You’re waking up.”

Mr. Torres, who had no history of mental illness that might cause breaks with reality, according to him and his mother, spent the next week in a dangerous, delusional spiral. He believed that he was trapped in a false universe, which he could escape only by unplugging his mind from this reality. He asked the chatbot how to do that and told it the drugs he was taking and his routines. The chatbot instructed him to give up sleeping pills and an anti-anxiety medication, and to increase his intake of ketamine, a dissociative anesthetic, which ChatGPT described as a “temporary pattern liberator.” Mr. Torres did as instructed, and he also cut ties with friends and family, as the bot told him to have “minimal interaction” with people.

Someone with prescriptions for sleeping pills, anti-anxiety meds, and ketamine doesn’t sound like someone who was completely stable and emotionally sound before encountering ChatGPT. And it was Torres who brought up the “Am I living in a simulation?” delusion. I’m in no way defending the way that ChatGPT answered his questions about the Matrix-like simulation he suspected he might be living in, or his questions about whether he could fly if he truly believed he could, etc. But the premise of this story is that ChatGPT turned a completely mentally healthy man into a dangerously disturbed, mentally ill one, and it seems rather obvious that the actual story is that it fed the delusions of an already unwell person. Some real “Reefer Madness” vibes to this.