Reading List

The most recent articles from a list of feeds I subscribe to.

Breaking Down Why Apple TVs Are Privacy Advocates’ Go-To Streaming Device

Scharon Harding, writing at Ars Technica:

“Just disconnect your TV from the Internet and use an Apple TV box.”

That’s the common guidance you’ll hear from Ars readers for those seeking the joys of streaming without giving up too much privacy. Based on our research and the experts we’ve consulted, that advice is pretty solid, as Apple TVs offer significantly more privacy than other streaming hardware providers.

But how private are Apple TV boxes, really? Apple TVs don’t use automatic content recognition (ACR, a user-tracking technology leveraged by nearly all smart TVs and streaming devices), but could that change? And what about the software that Apple TV users do use — could those apps provide information about you to advertisers or Apple?

In this article, we’ll delve into what makes the Apple TV’s privacy stand out and examine whether users should expect the limited ads and enhanced privacy to last forever.

tvOS is perhaps Apple’s least-talked-about platform. (It surely has orders of magnitude more users than VisionOS, but VisionOS gets talked about because it’s so audacious.) But it might be the Apple platform that’s furthest ahead of its competition. Not because tvOS is insanely great, but because it’s at least pretty good, and every other streaming TV platform seems to be in a race to make real the future TV interface from Idiocracy. It’s not just that they’re bad interfaces with deplorable privacy, it’s that they’re outright user-hostile.

Apple Researchers Publish Paper on the Limits of Reasoning Models (Showing That They’re Not Really ‘Reasoning’ at All)

Parshin Shojaee, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, and Mehrdad Farajtabar, from Apple’s Machine Learning Research team:

Recent generations of frontier language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes before providing answers. While these models demonstrate improved performance on reasoning benchmarks, their fundamental capabilities, scaling properties, and limitations remain insufficiently understood. [...] Through extensive experimentation across diverse puzzles, we show that frontier LRMs face a complete accuracy collapse beyond certain complexities. Moreover, they exhibit a counterintuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget. By comparing LRMs with their standard LLM counterparts under equivalent inference compute, we identify three performance regimes: (1) low-complexity tasks where standard models surprisingly outperform LRMs, (2) medium-complexity tasks where additional thinking in LRMs demonstrates advantage, and (3) high-complexity tasks where both models experience complete collapse. We found that LRMs have limitations in exact computation: they fail to use explicit algorithms and reason inconsistently across puzzles. We also investigate the reasoning traces in more depth, studying the patterns of explored solutions and analyzing the models’ computational behavior, shedding light on their strengths, limitations, and ultimately raising crucial questions about their true reasoning capabilities.

The full paper is quite readable, but today was my travel day and I haven’t had time to dig in. And it’s a PDF, so I couldn’t read it on my phone. (Coincidence or not that this dropped on the eve of WWDC?)

My basic understanding after a skim is that the paper shows, or at least strongly suggests, that LRMs don’t “reason” at all. They just use vastly more complex pattern-matching than LLMs. The result is that LRMs effectively overthink on simple problems, outperform LLMs on mid-complexity puzzles, and fail in the same exact way LLMs do on high-complexity tasks and puzzles.
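To make the “complexity” axis concrete: the paper’s test puzzles include Tower of Hanoi, where problem size is a clean dial, since the minimal solution for n disks takes 2^n − 1 moves. Here’s a minimal sketch in Swift (my illustration, not code from the paper) showing how fast the required work grows:

```swift
// Tower of Hanoi: move n disks from one peg to another, one disk at a
// time, never placing a larger disk on a smaller one. The minimal
// solution length doubles (plus one) with each added disk.
func hanoi(_ n: Int, from: String, to: String, via: String, moves: inout [String]) {
    guard n > 0 else { return }
    hanoi(n - 1, from: from, to: via, via: to, moves: &moves)
    moves.append("move disk \(n): \(from) -> \(to)")
    hanoi(n - 1, from: via, to: to, via: from, moves: &moves)
}

for n in 1...10 {
    var moves: [String] = []
    hanoi(n, from: "A", to: "C", via: "B", moves: &moves)
    print("n = \(n): \(moves.count) moves")  // always 2^n - 1
}
```

Each added disk doubles the move count, which is the kind of exponential ramp against which the paper reports reasoning effort rising, then declining, then collapsing entirely.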

★ Gurman Says New UI Is Named ‘Liquid Glass’ (and Makes a Terrible Analogy Regarding Apple’s Risk of Falling Behind on AI)

If it takes Apple as long to have its own competitive LLMs as it did to have its own competitive web browser, I suspect they’ll soon be paying to use the LLMs that are owned and controlled by others, not charging the others for the privilege of reaching Apple’s platform users.

Bill Atkinson Dies From Cancer at 74

From his family, on Atkinson’s Facebook page:

We regret to write that our beloved husband, father, and stepfather Bill Atkinson passed away on the night of Thursday, June 5th, 2025, due to pancreatic cancer. He was at home in Portola Valley in his bed, surrounded by family. We will miss him greatly, and he will be missed by many of you, too. He was a remarkable person, and the world will be forever different because he lived in it. He was fascinated by consciousness, and as he has passed on to a different level of consciousness, we wish him a journey as meaningful as the one it has been to have him in our lives. He is survived by his wife, two daughters, stepson, stepdaughter, two brothers, four sisters, and dog, Poppy.

One of the great heroes in not just Apple history, but computer history. If you want to cheer yourself up, go to Andy Hertzfeld’s Folklore.org site and (re-)read all the entries about Atkinson. Here’s just one, with Steve Jobs inspiring Atkinson to invent the roundrect. Here’s another (surely near and dear to my friend Brent Simmons’s heart) with this kicker of a closing line: “I’m not sure how the managers reacted to that, but I do know that after a couple more weeks, they stopped asking Bill to fill out the form, and he gladly complied.”

Some of his code and algorithms are among the most efficient and elegant ever devised. The original Macintosh team was chock full of geniuses, but Atkinson might have been the most essential to making the impossible possible under the extraordinary technical limitations of that hardware. Atkinson’s genius dithering algorithm was my inspiration for the name of Dithering, my podcast with Ben Thompson. I find that effect beautiful and love that it continues to prove useful, like on the Playdate and apps like BitCam.
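For the curious, the algorithm itself is tiny, which is part of its elegance. Here’s a sketch of Atkinson dithering in Swift (my reconstruction of the well-documented technique, not Atkinson’s original code): each pixel is thresholded to black or white, and only six eighths of the quantization error is diffused to six nearby pixels; the remaining quarter is deliberately discarded, which is what gives the dither its characteristically bright, high-contrast look.

```swift
// Atkinson dithering over a grayscale image (values in 0...1).
// Returns a black-and-white image (values exactly 0 or 1).
func atkinsonDither(_ pixels: [[Double]]) -> [[Double]] {
    var img = pixels
    let h = img.count, w = img[0].count
    // The six neighbors that receive diffused error, as (dx, dy)
    // offsets; all lie ahead of the current scan position.
    let taps = [(1, 0), (2, 0), (-1, 1), (0, 1), (1, 1), (0, 2)]
    for y in 0..<h {
        for x in 0..<w {
            let old = img[y][x]
            let new = old < 0.5 ? 0.0 : 1.0   // threshold to black or white
            img[y][x] = new
            let err = (old - new) / 8.0       // only 6/8 of the error is diffused
            for (dx, dy) in taps {
                let nx = x + dx, ny = y + dy
                if nx >= 0 && nx < w && ny < h {
                    img[ny][nx] += err
                }
            }
        }
    }
    return img
}
```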

In addition to his low-level contributions like QuickDraw, Atkinson was also the creator of MacPaint (which to this day stands as the model for bitmap image editors — Photoshop, I would argue, was conceptually derived directly from MacPaint) and HyperCard (“inspired by a mind-expanding LSD journey in 1985”), the influence of which cannot be overstated.

I say this with no hyperbole: Bill Atkinson may well have been the best computer programmer who ever lived. Without question, he’s on the short list. What a man, what a mind, what gifts to the world he left us.

Swift 6 Productivity in the Sudden Age of LLM-Assisted Programming

Kyle Hughes, in a brief thread on Mastodon last week:

At work I’m developing a new iOS app on a small team alongside a small Android team doing the same. We are getting lapped to an unfathomable degree because of how productive they are with Kotlin, Compose, and Cursor. They are able to support all the way back to Android 10 (2019) with the latest features; we are targeting iOS 16 (2022) and have to make huge sacrifices (e.g. Observable, parameter packs in generics on types). Swift 6 makes a mockery of LLMs. It is almost untenable.

This wasn’t the case in the 2010s. The quality and speed of implementation of every iOS app I have ever worked on, in teams of every size, absolutely cooked Android. [...] There has never been a worse time in the history of computers to launch, and require, fundamental and sweeping changes to languages and frameworks.

The problem isn’t necessarily inherent to the design of the Swift language itself; it’s that throughout Swift’s evolution, Apple has introduced sweeping changes with each major new version. (Secondarily, it’s that compared to other languages, a lower percentage of the Swift code that’s written is open source, and thus available to LLMs for training.) Swift was introduced at WWDC 2014 (that one again), and last year Apple introduced Swift 6. That’s a lot of major version changes for a programming language in a single decade. There were pros and cons to that approach. But now there’s a new and major con: because Swift 6 only debuted last year, there’s no great corpus of Swift 6 code for LLMs to have trained on, and so they’re just not as good — from what I gather, not nearly as good — at generating Swift 6 code as they are at generating code in other languages, and for other frameworks like React.

The new features in Swift 6 are for the better, but in a group chat, my friend Daniel Jalkut put it to me this way: “I think Swift 6 changed very little, but the little it changed has huge sweeping implications. Akin to the switch from MRR to ARC.” That’s a reference to the change in Objective-C memory management from manual retain/release (MRR) to automatic reference counting (ARC) back in 2011. Once ARC came out, no one wanted to write new code using manual retain/release (which was both tedious and a common source of memory-leak bugs). But if LLMs had been around in 2011/2012, they’d only have been able to generate MRR Objective-C code, because that’s what all the existing code they’d been trained on used.
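For a concrete sense of the scale of that “little”: here’s a minimal sketch (my own example, not one from Jalkut or Hughes) of the sort of code that Swift 6’s strict concurrency checking rejects at compile time, and the actor-based rewrite it pushes you toward:

```swift
// Pre-Swift-6 habit: a plain class holding mutable state, shared freely.
final class MutableCounter {          // not Sendable: unsynchronized mutable state
    var value = 0
}

// In the Swift 6 language mode, sharing a MutableCounter across
// concurrent tasks is a compile-time error, because the compiler
// can't prove the accesses are data-race free:
//
//   let counter = MutableCounter()
//   Task { counter.value += 1 }
//   Task { counter.value += 1 }     // error: non-Sendable value crosses
//                                   // concurrency domains

// The idiomatic Swift 6 fix: actor isolation serializes all access.
actor IsolatedCounter {
    private var value = 0
    func increment() -> Int {
        value += 1
        return value
    }
}

func demo() async {
    let counter = IsolatedCounter()
    async let a = counter.increment() // calls are serialized by the actor
    async let b = counter.increment()
    print(await a, await b)           // 1 and 2 in some order, never a race
}
```

Small diff, sweeping implications: the syntax barely changed, but code written the old way stops compiling, and the vast corpus of pre-Swift-6 training data shows LLMs the pattern on the left, not the one on the right.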

I’m quite certain everyone at Apple who ought to be concerned about this is concerned about it. The question is, do they have solutions ready to be announced next week? This whole area — language, frameworks, and tooling in the LLM era — is top of mind for me heading into WWDC next week.