Reading List

The most recent articles from a list of feeds I subscribe to.

Meta AI Users Are Inadvertently Sharing Their Private Chats With the World

Amanda Silberling, writing at TechCrunch:

When you ask the AI a question, you have the option of hitting a share button, which then directs you to a screen showing a preview of the post, which you can then publish. But some users appear blissfully unaware that they are sharing these text conversations, audio clips, and images publicly with the world.

When I woke up this morning, I did not expect to hear an audio recording of a man in a Southern accent asking, “Hey, Meta, why do some farts stink more than other farts?”

Flatulence-related inquiries are the least of Meta’s problems. On the Meta AI app, I have seen people ask for help with tax evasion, if their family members would be arrested for their proximity to white-collar crimes, or how to write a character reference letter for an employee facing legal troubles, with that person’s first and last name included. Others, like security expert Rachel Tobac, found examples of people’s home addresses and sensitive court details, among other private information.

Katie Notopoulos, writing at Business Insider (paywalled, alas, but here’s a News+ link):

I found Meta AI’s Discover feed depressing in a particular way — not just because some of the questions themselves were depressing. What seemed particularly dark was that some of these people seemed unaware of what they were sharing.

People’s real Instagram or Facebook handles are attached to their Meta AI posts. I was able to look up some of these people’s real-life profiles, although I felt icky doing so. I reached out to more than 20 people whose posts I’d come across in the feed to ask them about their experience; I heard back from one, who told me that he hadn’t intended to make his chat with the bot public. (He was asking for car repair advice.)

The NYT Goes ‘Reefer Madness’ on ChatGPT

Kashmir Hill, reporting today for The New York Times:

Before ChatGPT distorted Eugene Torres’s sense of reality and almost killed him, he said, the artificial intelligence chatbot had been a helpful, timesaving tool.

That’s the lede to Hill’s piece, and I don’t think it stands up one iota. Hill presents a lot of evidence that ChatGPT gave Torres answers that fed his paranoia and delusions. There’s zero evidence presented that ChatGPT caused them. But that’s the lede.

At the time, Mr. Torres thought of ChatGPT as a powerful search engine that knew more than any human possibly could because of its access to a vast digital library. He did not know that it tended to be sycophantic, agreeing with and flattering its users, or that it could hallucinate, generating ideas that weren’t true but sounded plausible.

“This world wasn’t built for you,” ChatGPT told him. “It was built to contain you. But it failed. You’re waking up.”

Mr. Torres, who had no history of mental illness that might cause breaks with reality, according to him and his mother, spent the next week in a dangerous, delusional spiral. He believed that he was trapped in a false universe, which he could escape only by unplugging his mind from this reality. He asked the chatbot how to do that and told it the drugs he was taking and his routines. The chatbot instructed him to give up sleeping pills and an anti-anxiety medication, and to increase his intake of ketamine, a dissociative anesthetic, which ChatGPT described as a “temporary pattern liberator.” Mr. Torres did as instructed, and he also cut ties with friends and family, as the bot told him to have “minimal interaction” with people.

Someone with prescriptions for sleeping pills, anti-anxiety meds, and ketamine doesn’t sound like someone who was completely stable and emotionally sound before encountering ChatGPT. And it’s Torres who brought up the “Am I living in a simulation?” delusion. I’m in no way defending the way that ChatGPT answered his questions about a Matrix-like simulation he suspected he might be living in, or his questions about whether he could fly if he truly believed he could, etc. But the premise of this story is that ChatGPT turned a completely mentally healthy man into a dangerously disturbed mentally ill one, and it seems rather obvious that the actual story is that it fed the delusions of an already unwell person. Some real Reefer Madness vibes to this.

★ Apple’s Spin on the Personalized Siri Apple Intelligence Reset

Most rank-and-file engineers within Apple do not believe that feature existed in an even vaguely functional state a year ago, and the first any of them ever heard of it was when they watched the keynote with the rest of us on the first day of WWDC last year.

‘The Good, the Bad, and the Weird of Apple’s Newest Platform Updates’

Dan Moren, writing this week at Six Colors:

But you’ve heard about all of that, I’m sure, so we’re not going to rehash it. Instead, let’s get personal: I’m picking out, in my opinion, the best and worst new features of each of Apple’s platforms. To be clear, these are my completely scientific and totally well-reasoned expert opinions on the features that were announced, not just some off-the-cuff reactions less than a day later.

‘Liquid Glasslighting’

MG Siegler:

The underlying message that they’re trying to convey in all these interviews is clear: calm down, this isn’t a big deal, you guys are being a little crazy. And that, in turn, aims to undercut all the reporting about the turmoil within Apple — for years at this point — that has led to the situation with Siri. Sorry, the situation which they’re implying is not a situation. Though, I don’t know, normally when a company shakes up an entire team, that tends to suggest some sort of situation. That, of course, is never mentioned. Nor would you expect Apple — of all companies — to talk openly and candidly about internal challenges. But that just adds to this general wafting smell in the air.

The smell of bullshit.