One of my favorite product design principles is Alan Kay’s “Simple things should be simple, complex things should be possible” [1].
I had been saying it almost verbatim long before I encountered Kay’s quote.
Kay’s maxim is deceptively simple, but its implications run deep.
It isn’t just a design ideal — it’s a call to continually balance friction, scope, and tradeoffs in service of the people using our products.
This philosophy played a big part in Prism’s success back in 2012,
helping it become the web’s de facto syntax highlighter for years, with over 2 billion npm downloads.
Simple things were easy: All it took to highlight code on a webpage was including two files, a JS file and a CSS file.
No markup changes.
No JS glue code.
Styling used readable CSS class names.
Even adding new languages — the most common “complex” use case — required far less knowledge and effort than alternatives.
At the same time, highly complex things were possible: Prism exposed a deep extensibility model so plugin authors could patch internals and dramatically alter behavior.
These choices were not free.
The friendly styling API increased clash risk, and deep extensibility reduced encapsulation.
These were conscious, hard tradeoffs.
Since Alan Kay was a computer scientist, his quote is typically framed as a PL or API design principle,
but that sells it short.
It applies to a much, much broader class of interfaces.
This distinction hinges on the distribution of use cases.
Products often cut scope by identifying the ~20% of use cases that drive ~80% of usage — aka the Pareto Principle.
Some products, however, have such diverse use cases that Pareto doesn’t meaningfully apply to the product as a whole.
There are common use cases and niche use cases, but no clean 20-80 split.
The tail of niche use cases is so long, it becomes significant in aggregate.
For lack of a better term, I’ll call these long‑tail UIs.
Nearly all creative tools are long-tail UIs.
That’s why Kay’s maxim works so well for programming languages and APIs — both are types of creative interfaces.
But so are graphics editors, word processors, spreadsheets, and countless other interfaces that help humans create artifacts — even some you would never describe as creative.
Example: Google Calendar
You wouldn’t describe Google Calendar as a creative tool, but it is a tool that helps humans create artifacts (calendar events).
It is also a long-tail product:
there is a set of common, conceptually simple cases (one-off events at a specific time and date),
and a long tail of complex use cases (recurring events, guests, multiple calendars, timezones, etc.).
Indeed, Kay’s maxim has clearly been used in its design.
The simple case has been so optimized that you can literally add a one hour calendar event with a single click (using a placeholder title).
A different duration can be set after that first click through dragging [2].
But almost every edge case is also catered to — with additional user effort.
Google Calendar is squarely a long-tail UI.
The Pareto Principle is still useful for individual features, as they tend to be more narrowly defined.
E.g. there is a set of spreadsheet formulas (actually much smaller than 20%) that drives >80% of formula usage.
While creative tools are the poster child of long-tail UIs,
there are long-tail components in many transactional interfaces such as e-commerce or meal delivery (e.g. result filtering & sorting, product personalization interfaces, etc.).
Filtering UIs are another big category of long-tail UIs, and they involve so many tradeoffs and tough design decisions you could literally write a book about just them.
Airbnb’s filtering UI here is definitely making an effort to make simple things easy with (personalized! 😍) shortcuts and complex things possible via more granular controls.
Picture a plane with two axes: the horizontal axis being the complexity of the desired task (from the user’s perspective),
and the vertical axis the cognitive and/or physical effort users need to expend to accomplish their task using a given interface.
Following Kay’s maxim guarantees these two points:
Simple things being easy guarantees a point on the lower left (low use case complexity → low user effort).
Complex things being possible guarantees a point somewhere on the far right.
The lower down, the better — but higher up is acceptable.
Alan Kay's maxim visualized.
But even if we get these two points — what about all the points in between?
There are infinite different ways to connect them, and they produce vastly different overall user experiences.
How does your interface fare when a user’s use case gets slightly more complex?
Are users yeeted into the deep end of interface complexity (bad), or do they only need to invest a proportional, incremental amount of additional effort to achieve their goal (good)?
Meet the complexity-to-effort curve, the most important usability metric you’ve never heard of.
For delightful user experiences, making simple things easy and complex things possible is not enough — the transition between the two should also be smooth.
You see, simple use cases are the spherical cows in space of product design.
They work great for prototypes to convince stakeholders, or in marketing demos, but the real world is messy.
Most artifacts that users need to create to achieve their real-life goals rarely fit into your “simple” flows completely, no matter how well you’ve done your homework.
They are mostly simple — with a liiiiitle wart here and there.
For a long-tail interface to serve user needs well in practice,
we need to consciously design the curve, not just its endpoints.
A model with surprising predictive power is to treat user effort as a currency that users are spending to buy solutions to their problems.
Nobody likes paying it;
in an ideal world software would read our mind and execute perfectly with zero user effort.
Since we don’t live in such a world, users understand to pay a bit of effort to achieve their goals,
and are generally willing to pay more when they feel their use case warrants it.
Just like regular pricing, actual user experience often depends more on the relationship between cost and budget than on the absolute cost itself.
If you pay more than you expected, you feel ripped off.
You may still pay it because you need the product in the moment, but you’ll be looking for a better deal in the future.
And if you pay less than you had budgeted, you feel like you got a bargain, with all the delight and loyalty that entails.
Suppose you’re ordering pizza. You want a simple cheese pizza with ham and mushrooms.
You use the online ordering system, and you notice that adding ham to your pizza triples its price.
We’re not talking some kind of fancy ham where the pigs were fed on caviar and bathed in champagne, just a regular run-of-the-mill pizza topping.
You may still order it if you’re really craving ham on your pizza and no other options are available, but how does it make you feel?
It’s not that different when the currency is user effort.
The all too familiar “But I just wanted to _________, why is it so hard?”.
When a small increase in use case complexity results in a disproportionately large increase in user effort cost, we have a usability cliff.
Usability cliffs make users feel resentful, just like the customers of our fictitious pizza shop.
A usability cliff is when a small increase in use case complexity requires a large increase in user effort.
Usability cliffs are very common in products that make simple things easy and complex things possible through entirely separate flows with no integration between them:
a super high level one that caters to the most common use case with little or no flexibility,
and a very low-level one that is an escape hatch: it lets users do whatever,
but they have to recreate the solution to the simple use case from scratch before they can tweak it.
Example: The HTML video element
Simple things are certainly easy: all we need to get a video with a nice sleek set of controls that work well on every device is a single attribute: controls.
We just slap it on our <video> element and we’re done with a single line of HTML:
<video src="videos/cat.mp4" controls></video>
Now suppose use case complexity increases just a little.
Maybe I want to add buttons to jump 10 seconds back or forwards.
Or a language picker for subtitles.
Or just to hide the volume control on a video that has no audio track.
None of these are particularly niche, but the default controls are all-or-nothing: the only way to change them is to reimplement the whole toolbar from scratch, which takes hundreds of lines of code to do well.
Simple things are easy and complex things are possible.
But once use case complexity crosses a certain (low) threshold, user effort abruptly shoots up.
That’s a usability cliff.
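To make the cost concrete, here is a hedged sketch of what adding just one skip button entails (the markup and element query are hypothetical): setting `controls` to false opts out of everything at once, so every other control must then be rebuilt by hand.

```javascript
// Pure helper: clamp a skip to the video's bounds.
function skipTo(currentTime, delta, duration) {
  return Math.min(Math.max(currentTime + delta, 0), duration);
}

// DOM wiring (browser only). Note: this is ONE button — play/pause,
// seeking, volume, fullscreen, and captions all still need rebuilding.
if (typeof document !== "undefined") {
  const video = document.querySelector("video");
  video.controls = false; // built-in controls are all-or-nothing

  const skip = document.createElement("button");
  skip.textContent = "+10s";
  skip.addEventListener("click", () => {
    video.currentTime = skipTo(video.currentTime, 10, video.duration);
  });
  video.after(skip);
}
```

A single extra button has already forced us off the one-attribute path and onto the hundreds-of-lines path.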
Example: Instagram editor
For Instagram’s photo editor, the simple use case is canned filters, whereas the complex ones are those requiring tweaking through individual low-level controls.
However, they are implemented as separate flows: you can tweak the filter’s intensity, but you can’t see or adjust the primitives it’s built from.
You can layer both types of edits on the same image, but they are additive, which doesn’t work well.
Ideally, the two panels would be integrated, so that selecting a filter would adjust the low-level controls accordingly, which would both facilitate incremental tweaking
and serve as a teaching aid for how filters work.
Example: Filtering in Coda
My favorite end-user facing product that gets this right is Coda,
a cross between a document editor, a spreadsheet, and a database.
All over its UI, it supports entering formulas instead of raw values, which makes complex things possible.
To make simple things easy, it also provides the GUI you’d expect even without a formula language.
But here’s the twist: these presets generate formulas behind the scenes that users can tweak!
Whenever users need to go a little beyond what the UI provides, they can switch to the formula editor and adjust what was generated
— far easier than writing it from scratch.
Another nice touch: “And” is not just communicating how multiple filters are combined, but is also a control that lets users edit the logic.
Defining high-level abstractions in terms of low-level primitives is a great way to achieve a smooth complexity-to-effort curve,
as it allows you to expose tweaking at various intermediate levels and scopes.
The downside is that it can sometimes constrain the types of high-level solutions that can be implemented.
Whether the tradeoff is worth it depends on the product and use cases.
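A minimal sketch of the pattern (the names and formula syntax here are made up for illustration, not Coda’s actual formula language): GUI state compiles to a formula string that the user can then edit directly.

```javascript
// Hypothetical: compile GUI filter rows into an editable formula string,
// so the GUI is a shortcut into the formula layer rather than a dead end.
function compileFilter(conditions) {
  return conditions
    .map(({ column, op, value }) => `${column} ${op} ${JSON.stringify(value)}`)
    .join(" AND ");
}

// Two filter rows in the GUI would generate:
//   compileFilter([
//     { column: "Status", op: "=", value: "Done" },
//     { column: "Priority", op: ">", value: 2 },
//   ])
// which the user can tweak in the formula editor instead of rebuilding
// the logic from scratch.
```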
If you like eating out, this may be a familiar scenario:
— I would like the rib-eye please, medium-rare.
— Thank you sir/ma’am. How would you like your steak cooked?
Annoying, right?
And yet, this is how many user interfaces work: expecting users to communicate the same intent multiple times in slightly different ways.
If incremental value should require incremental user effort, an obvious corollary is that things that produce no value should not require user effort.
Using the currency model makes this obvious: who likes paying without getting anything in return?
Respect user effort.
Treat it as a scarce resource — just like regular currency — and keep it close to the minimum necessary to declare intent.
Do not require users to do work that confers them no benefit, and could have been handled by the UI.
If it can be derived from other input, it should be derived from other input.
A once-ubiquitous example that is thankfully going away is the credit card form that asks for the type of credit card in a separate dropdown.
Credit card numbers are designed so that the type of credit card can be determined from the first four digits.
There is zero reason to ask for it separately.
Beyond wasting user effort, duplicating input that can be derived introduces an unnecessary error condition that you now need to handle:
what happens when the entered type is not consistent with the entered number?
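A rough sketch of the derivation (the prefix rules below are simplified; real issuer identification ranges are longer and more detailed):

```javascript
// Derive the card network from the number itself — no dropdown needed.
// Prefix rules are simplified approximations of the real IIN ranges.
function cardType(number) {
  const digits = number.replace(/\D/g, "");
  if (/^4/.test(digits)) return "visa";
  if (/^(5[1-5]|2[2-7])/.test(digits)) return "mastercard";
  if (/^3[47]/.test(digits)) return "amex";
  if (/^6(011|5)/.test(digits)) return "discover";
  return "unknown";
}
```

With this in place, the dropdown — and its inconsistency error state — simply disappears.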
User actions that meaningfully communicate intent to the interface are signal.
Any other step users need to take to accomplish their goal is noise.
This includes communicating the same input more than once,
providing input separately that could be derived from other input with complete or high certainty,
transforming input from their mental model to the interface’s mental model,
and any other demand for user effort that does not serve to communicate new information about the user’s goal.
Some noise is unavoidable.
The only way to have 100% signal-to-noise ratio would be if the interface could mind read.
But too much noise increases friction and obfuscates signal.
Example: Programmatic Element removal
The two web platform methods to programmatically remove an element from the page provide a short yet demonstrative example of this for APIs.
To signal intent in this case, the user needs to communicate two things:
(a) what they want to do (remove an element), and (b) which element to remove.
Anything beyond that is noise.
The modern element.remove() DOM method has an extremely high signal-to-noise ratio.
It’s hard to imagine a more concise way to signal intent.
It replaced the older parent.removeChild(child) method, which had much worse ergonomics.
The older method was framed around removing a child, so it required two parameters: the element to remove, and its parent.
But the parent is not a separate source of truth — it would always be the child node’s parent!
As a result, its actual usage involved boilerplate, where
developers had to write a much noisier if (element.parentNode) element.parentNode.removeChild(element)[3].
The difference in signal-to-noise ratio is staggering: 81% vs 20% in this case.
Of course, it was usually encapsulated in utility functions, which provided a similar signal-to-noise ratio as the modern method.
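Such a wrapper typically looked something like this (a common pattern of the era, not any specific library’s code):

```javascript
// Encapsulate the boilerplate once, so every call site regains a high
// signal-to-noise ratio — at the cost of maintaining the abstraction.
function removeElement(element) {
  if (element.parentNode) {
    element.parentNode.removeChild(element);
  }
}
```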
However, user-defined abstractions don’t come for free; there is an effort (and learnability) tax there, too.
Boilerplate is repetitive code that users need to include without thought, because it does not actually communicate intent.
It’s the software version of red tape: hoops you need to jump through to accomplish your goal, that serve no obvious purpose in furthering said goal except for the fact that they are required of you.
In the case of parent.removeChild() above, the amount of boilerplate may seem small, but as a percentage of the total amount of code, the difference is staggering.
The exact ratio (81% vs 20% here) varies based on the specifics of the API (variable names, method wording, etc.),
but when the difference is meaningful, it transcends these types of low-level details.
Think of it like big-O notation for API design.
Improving signal-to-noise ratio is also why the front-end web industry gravitated towards component architectures over copy-pasta snippets:
components increase signal-to-noise ratio by encapsulating boilerplate and exposing a much higher signal UI.
They are the utility functions of user interfaces.
As an exercise for the reader, try to calculate the signal-to-noise ratio of a Bootstrap accordion (or any other complex component in any UI library that expects you to copy-paste snippets).
Instead of syntax, visual interfaces have micro-interactions.
There are various models to quantify the user effort cost of micro-interactions, such as KLM.
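For instance, a back-of-the-envelope KLM estimate sums standard operator times. The values below are the commonly cited averages; treat them as rough assumptions rather than precise constants.

```javascript
// Keystroke-Level Model operators (seconds, approximate averages):
// K = keystroke, P = point with mouse, H = home hands between devices,
// M = mental preparation, B = mouse button press.
const KLM = { K: 0.2, P: 1.1, H: 0.4, M: 1.35, B: 0.1 };

// Sum the operator times for a sequence of micro-interactions.
function estimateSeconds(ops) {
  return [...ops].reduce((total, op) => total + (KLM[op] ?? 0), 0);
}

// e.g. point at a field, click, home to keyboard, think, type 5 characters:
//   estimateSeconds("PBHM" + "KKKKK")
```

Even this crude model makes friction comparable across designs: two flows that “feel” similar can differ by seconds per interaction.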
When pointing out friction issues in design reviews,
I have sometimes heard “users have not complained about this”.
This reveals a fundamental misunderstanding about the psychology of user feedback.
Users are much more vocal about things not being possible, than about things being hard.
The reason becomes clear if we look at the neuroscience of each.
Friction is transient in working memory; after completing the task, details fade from the user’s prefrontal cortex.
However, the negative emotions persist in the limbic system and build up over time.
Filing a complaint requires prefrontal engagement, which for friction is brief or absent.
Users often can’t even articulate why the software feels unpleasant: the specifics vanish; the feeling remains.
Hard limitations, on the other hand, persist as conscious appraisals.
The trigger doesn’t go away, since there is no workaround, so it’s far more likely to surface in explicit user feedback.
Both types of pain points cause negative emotions,
but friction is primarily processed by the limbic system (emotion),
whereas hard limitations remain in the prefrontal cortex (reasoning).
This also means that when users finally do reach the breaking point and complain about friction, you better listen.
Additionally, user complaints are only filed when there is a mismatch in expectations.
Things are not possible but the user feels they should be, or interactions cost more user effort than the user had budgeted,
e.g. because they know that a competing product offers the same feature for less (work).
Often, users have been conditioned to expect poor user experiences,
either because all options in the category are high friction, or because the user is too novice to know better
[4].
So they begrudgingly pay the price, and don’t think they have the right to complain, because it’s just how things are.
You might ask, “If all competitors are equally high-friction, how does this hurt us?”
An unmet need is a standing invitation to disruption that a competitor can exploit at any time.
Because you’re not only competing within a category; you’re competing with all alternatives — including nonconsumption (see Jobs‑to‑be‑Done).
Even for retention, users can defect to a different category altogether (e.g., building native apps instead of web apps).
Historical examples abound.
When it comes to actual currency, a familiar example is Airbnb: Until it came along, nobody would complain that a hotel of average price is expensive — it was just the price of hotels.
If you couldn’t afford it, you just couldn’t afford to travel, period.
But once Airbnb showed there is a cheaper alternative for hotel prices as a whole, tons of people jumped ship.
It’s no different when the currency is user effort.
Stripe took the payment API market by storm when it demonstrated that payment APIs did not have to be so high friction.
iPhone disrupted the smartphone market when it demonstrated that no, you did not have to be highly technical to use a smartphone.
The list goes on.
Unfortunately, friction is hard to instrument.
With good telemetry you can detect specific issues (e.g., dead clicks), but there is no KPI to measure friction as a whole.
And no, NPS isn’t it — and you’re probably using it wrong anyway.
Instead, the emotional residue from friction quietly drags many metrics down (churn, conversion, task completion), sending teams in circles like blind men touching an elephant.
That’s why dashboards must be paired with product vision and proactive, first‑principles product leadership.
Steve Jobs exemplified this posture: proactively, aggressively eliminating friction presented as “inevitable.”
He challenged unnecessary choices, delays, and jargon, without waiting for KPIs to grant permission.
Do mice really need multiple buttons? Does installing software really need multiple steps? Do smartphones really need a stylus?
Of course, this worked because he had the authority to protect the vision; most orgs need explicit trust to avoid diluting it.
So, if there is no metric for friction, how do you identify it?
Usability testing lets you actually observe firsthand what things are hard instead of having them filtered through users’ memories and expectations.
Design reviews/audits by usability experts are complementary to usability testing, as they often uncover different issues. Design reviews are also great for maximizing the effectiveness of usability testing, by getting the low-hanging-fruit issues out of the way beforehand.
Dogfooding is unparalleled as a discovery tool — nothing else will identify as many issues as using the product yourself, for your own, real needs.
However, it’s important to keep in mind that you’re a huge power user of your own product.
You cannot surface learnability issues (curse of knowledge), and you will surface issues no one else has.
Dogfooding is a fantastic discovery tool, but you still need user research to actually evaluate and prioritize the issues it surfaces.
Reducing friction rarely comes for free just because someone had a good idea.
Such cases do exist, and they are great, but more often it takes sacrifices.
And unless eliminating friction is an organizational priority, it’s very hard to steer these tradeoffs in the right direction.
The most common tradeoff is implementation complexity.
Simplifying user experience is usually a process of driving complexity inwards and encapsulating it in the implementation.
Explicit, low-level interfaces are far easier to implement, which is why there are so many of them.
Especially as deadlines loom, engineers will often push towards externalizing complexity into the user interface, so that they can ship faster.
And if Product leans more data-driven than data-informed, it’s easy to look at customer feedback and conclude that what users need is more features
(it’s not).
Simple to use is often at odds with simple to implement.
Consider two faucet designs: one exposing separate hot and cold water controls, and one offering independent temperature and pressure controls. The first faucet is a thin abstraction: it exposes the underlying implementation directly, passing the complexity on to users, who now need to do their own translation of temperature and pressure into amounts of hot and cold water.
It prioritizes implementation simplicity at the expense of wasting user effort.
The second design prioritizes user needs and abstracts the underlying implementation to support the user’s mental model.
It provides controls to adjust the water temperature and pressure independently, and internally translates them to the amounts of hot and cold water.
This interface sacrifices some implementation simplicity to minimize user effort.
This is why I’m skeptical of blanket calls for “simplicity”: they are platitudes.
Everyone agrees that, all else equal, simpler is better.
It’s the tradeoffs between different types of simplicity that are tough.
In some cases, reducing friction even carries tangible financial risks, which makes leadership buy-in crucial.
This kind of tradeoff cannot be made by individual designers — only when eliminating friction is an organizational priority.
The Oslo airport train ticket machine is the epitome of a high signal-to-noise interface.
You simply swipe your credit card to enter and you swipe your card again as you leave the station at your destination.
That’s it. No choices to make. No buttons to press. No ticket.
You just swipe your card and you get on the train.
Today this may not seem radical, but back in 2003, it was groundbreaking.
To be able to provide such a frictionless user experience, they had to make a financial tradeoff:
it does not ask for a PIN code, which means the company would need to absorb the financial losses from fraudulent charges (stolen credit cards, etc.).
When user needs are prioritized at the top, it helps to cement that priority as an organizational design principle to point to when these tradeoffs come along in the day-to-day.
Having a design principle in place will not instantly resolve all conflict, but it helps turn conflict about priorities
into conflict about whether an exception is warranted, or whether the principle is applied correctly, both of which are generally easier to resolve.
Of course, for that to work everyone needs to be on board with the principle.
But here’s the thing with design principles (and most principles in general): they often seem obvious in the abstract, so it’s easy to get alignment in the abstract.
It’s when the abstract becomes concrete that it gets tough.
The W3C HTML Design Principles’ “priority of constituencies” puts it well: “User needs come before the needs of web page authors, which come before the needs of user agent implementors, which come before the needs of specification writers, which come before theoretical purity.”
This highlights another key distinction: the hierarchy of user needs is more nuanced than just users over developers.
While users over developers is a good starting point, it is not sufficient to fully describe the hierarchy of user needs for many products.
A more flexible framing is consumers over producers;
developers are just one type of producer.
Consumers are typically more numerous than producers, so this minimizes collective pain.
Producers are typically more advanced, and can handle more complexity than consumers. I’ve heard this principle worded as “Put the pain on those who can bear it”, which emphasizes this aspect.
Producers are typically more invested, and less likely to leave.
The web platform has multiple tiers of producers:
Specification writers are at the bottom of the hierarchy, and thus, can handle the most pain (ow! 🥴)
Browser developers (“user agent implementors” in the principle) are consumers when it comes to specifications, but producers when it comes to the web platform
Web developers are consumers when it comes to the web platform, but producers when it comes to their own websites
Even within the same tier there are often producer vs consumer dynamics.
E.g. when it comes to web development libraries, the web developers who write them are producers and the web developers who use them are consumers.
This distinction also comes up in extensible software, where plugin authors are still consumers when it comes to the software itself,
but producers when it comes to their own plugins.
It also comes up in dual sided marketplace products
(e.g. Airbnb, Uber, etc.),
where buyer needs are generally higher priority than seller needs.
In the economy of user effort, the antithesis of overpriced interfaces that make users feel ripped off
are those where every bit of user effort required feels meaningful and produces tangible value to them.
The interface is on the user’s side, gently helping them along with every step, instead of treating their time and energy as disposable.
The user feels like they’re getting a bargain: they get to spend less than they had budgeted for!
And we all know how motivating a good bargain is.
User effort bargains don’t have to be radical innovations;
don’t underestimate the power of small touches.
A zip code input that auto-fills city and state,
a web component that automatically adapts to its context without additional configuration,
a pasted link that automatically defaults to the website title (or the selected text, if any),
a freeform date that is correctly parsed into structured data,
a login UI that remembers whether you have an account and which service you’ve used to log in before,
an authentication flow that takes you back to the page you were on before.
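The first of those can be sketched in a few lines (the lookup table is a stand-in for a real postal-code dataset or API):

```javascript
// Hypothetical: auto-fill city/state from a zip code instead of asking
// the user for all three. The table stands in for a real dataset.
const ZIP_DATA = {
  "10001": { city: "New York", state: "NY" },
  "94103": { city: "San Francisco", state: "CA" },
};

function autofillLocation(zip) {
  return ZIP_DATA[zip] ?? null; // null → fall back to manual entry
}
```

Two fields the user never has to type — a tiny bargain, paid out on every single checkout.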
Sometimes many small things can collectively make a big difference.
In some ways, it’s the polar opposite of death by a thousand paper cuts:
Life by a thousand sprinkles of delight! 😀
In the end, “simple things simple, complex things possible” is table stakes.
The key differentiator is the shape of the curve between those points.
Products win when user effort scales smoothly with use case complexity, cliffs are engineered out, and every interaction declares a meaningful piece of user intent.
That doesn’t just happen by itself.
It involves hard tradeoffs, saying no a lot, and prioritizing user needs at the organizational level.
Treating user effort like real money forces you to design with restraint.
A rule of thumb is to place the pain where it’s best absorbed, by prioritizing consumers over producers.
Do this consistently, and the interface feels delightful in a way that sticks.
Delight turns into trust.
Trust into loyalty.
Loyalty into product-market fit.
[2] Yes, typing can be faster than dragging, but minimizing homing between input devices improves efficiency more; see KLM. ↩︎
[3] Yes, today it would have been element.parentNode?.removeChild(element), which is a little less noisy, but this was before the optional chaining operator. ↩︎
[4] When I was running user studies at MIT, I often had users exclaim “I can’t believe it! I tried to do the obvious simple thing and it actually worked!” ↩︎
Around this time last year I came to Uppsala with a friend to check out Rosendal Day - a day organized by the neighborhood I had just bought an apartment in. On the walk to Rosendal from the station I realized it was also the day of the 2024 Uppsala (full and half) Marathon, the runners and spectators gathered on the route through the middle of the city.
Litestream is an open-source tool that backs up SQLite databases to cloud storage in real time. I love it and use it in all of my projects.
Litestream is owned by Fly.io, and they paused development on Litestream for almost two years in favor of an alternative project called LiteFS. Two weeks ago, Ben Johnson, Litestream’s creator and lead developer, announced that they were shifting focus back to Litestream and had just published a new release, 0.5.0.
To better serve autocrats, it will talk out both sides of its mouth in ways it had previously reserved for dissembling arguments against threats to profits, like right-to-repair and browser choice.
They are, of course, linked.
Apple bent the knee for months, leaving many commentators to ask why. But the reasons are not mysterious: Apple wants things that only the government can provide, things that will defend and extend its power to extract rents, rather than innovate. Namely, selective exemption from tariffs and an end to the spectre of pro-competition regulation that might bring about real browser choice.
The DMA holds the power to unlock true, safe, interoperability via the web. Its core terms require that Apple facilitate real browser engine choice, and Apple is all but refusing, playing games to prevent powerful and safe iOS browsers and the powerful web applications they facilitate. Web applications that can challenge the App Store.
Unlike tariffs, which present a threat to short-term profits through higher costs and suppression of upgrades in the near term, interoperability is a larger and more insidious boogeyman for Apple. It could change everything.
Apple's profits are less and less attributable to innovation as “services” revenue swells Cupertino's coffers out of all proportion to iPhone sales volume. “Services” is code for rent extraction from captive users and developers. If they could acquire and make safe apps outside the App Store, Apple wouldn't be able to take 30% from an outlandishly large fraction of the digital ecosystem's wealthiest players.
Apple understands browser choice is a threat to its rentier model. The DMA holds the potential for developers to finally access the safe, open, and interoperable web technologies that power most desktop computing today. This is a particular threat to Apple, because its class-leading hardware is perfectly suited to running web applications. All that's missing are browsers that aren't intentionally hobbled. This helps to explain why Apple simultaneously demands control over all browser technology on iOS while delaying important APIs, breaking foundational capabilities, and gaslighting developers about Apple's unwillingness to solve pressing problems.
Keeping capable, stable, high-quality browsers away from iOS is necessary to maintain the App Store's monopoly on the features every app needs. Keeping other software distribution mechanisms from offering those features at a lower price is a hard requirement for Cupertino's extractive business model. The web (in particular, PWAs) present a worst-case scenario.
Unlike alternative app stores that let developers decouple distribution of proprietary apps from Apple's App Store, PWAs further free developers from building for each OS separately, allowing them to deliver apps though a zero-cost platform that builds on standards. And that platform doesn't feature a single choke point. For small developers, this is transformative, and it's why late-stage Apple cannot abide laws that create commercial fairness and enable safe, secure, pro-user alternatives.
This is what Apple is mortgaging its brand (or, if you prefer, soul) to prevent: a world where users have a real choice in browsers.
Horrors.
Apple is loaning its monopoly on iOS software to yet another authoritarian regime without a fight, painting a stark contrast: when profits are on the line, Cupertino will gaslight democratic regulators and defy pro-competition laws with all the $1600/hr lawyers Harvard can graduate. And when it needs a transactional authoritarian's help to protect those same profits, temporarily2 lending its godlike power over iOS to censor clearly protected speech isn't too high a price to pay. Struggle for thee, but not for me.
The kicker is that the only alternative for affected users and developers is Apple's decrepit implementation of web apps; the same platform Cupertino serially knee-caps to deflect competition with its proprietary APIs.
It is no exaggeration to say the tech press is letting democracy down by failing to connect the dots. Why is Apple capitulating? Because Apple wants things from the government. What are those things? We should be deep into that debate, but our reportage and editorial classes cannot grasp that A precedes B. The obvious answers are also the right ones: selective protection from tariffs, defanged prosecution by the DOJ, and an umbrella from the EU's democratic, pro-competition regulation.
“I used to believe that Apple were unequivocally ‘the good guys,’” Hodges writes. “I passionately advocated for people to understand Apple as being on the side of its users above all else. I now feel like I must question that.”
The tech press is failing to grasp the moral stakes of API access. Again and again, it fails to connect boring questions of who can write and distribute programs for phones to urgent issues of power over publication and control of devices. By declining to join these threads, it allows the unearned and increasingly indefensible power of mobile OS vendors to proliferate. The urgent question is how that power can be attenuated, or as Popper put it:
We must ask whether…we should not prepare for the worst leaders, and hope for the best. But this leads to a new approach to the problem of politics, for it forces us to replace the question: "Who should rule?" by the new question: "How can we so organize political institutions that bad or incompetent rulers can be prevented from doing too much damage?"
Instead of questioning why Apple's OS is so fundamentally insecure that an App Store is necessary, they accept the ever-false idea that iOS has been relatively secure because of the App Store.
Instead of confronting Apple with the reality that it used the App Store to hand out privacy-invading APIs in undisciplined ways to unscrupulous actors, the press congratulates Cupertino on the next episode of our nightly kayfabe. The links between Apple's monopoly on sensitive APIs and the growth of monopolies in adjacent sectors are rarely, if ever, questioned. Far too often, the tech press accepts the narrative structure of Apple's marketing, satisfying pangs of journalistic conscience with largely ineffectual discussions about specific features that will not upset the power balance.
Nowhere in The Verge's coverage of these letters, for example, is there a discussion of alternatives to the App Store. Only a few outlets ever press Apple on its suppression of web apps, including its failure to add PWA install banners and essential capabilities. It's not an Apple vs. Google horse-race story, and so discussion of power distribution doesn't get coverage.
Settling for occasionally embarrassing Apple into partially reversing its most visibly egregious actions is ethically and morally stunted. Accepting the frame of "who should rule?" that Cupertino reflexively deploys is toxic to any hope of worthwhile technology because it creates and celebrates the idea of kings, leaving us supine relative to the mega-corps in our phones.
This is, in a word, childish.
Adults understand that things are complicated, that even the best intentioned folks get things wrong, or can go astray in larger ways. We build institutions and technologies to protect ourselves and those we love from the worst impacts of those events, and those institutions always model struggles over power and authority. If we are lucky and skilled enough to build them well, the results are balanced systems that attenuate attempts at imposing overt authoritarianism.
In other words, the exact opposite of Apple's infantilising and totalitarian world view.
Instead of debating which wealthy vassals might be more virtuous than the current rulers, we should instead focus on attenuating the power of these monarchical, centralising actors. The DMA is doing this, creating the conditions for interoperability, and through interoperability, competition. Apple know it, and that's why they're willing to pawn their own dignity, along with the rights of fellow Americans, to snuff out the threat.
These are not minor points. Apple has power, and that power comes from its effective monopoly on the APIs that make applications possible on the most important computing platform of our adult lives.
Protecting this power has become an end unto itself, curdling the pro-social narratives Apple takes pains to identify itself with. Any reporter who bothered to do what a scrappy band of web developers have done — actually read the self-contradictory tosh Apple flings at regulators and legislators around the world — would have been able to pattern-match; to see that twisting words to defend the indefensible isn't somehow alien to Apple. It's not even unusual.
But The Verge, 404, and even Wired are declining to connect the dots. If our luminaries can't or won't dig in, what hope do less thoughtful publications with wider audiences have?
Apple's power and profits have made it an enemy of democracy and civic rights at home and abroad. A mealy-mouthed tech press that cannot see or say the obvious is worse than useless; it is an ally in Apple's attempts to obfuscate.
The most important story about smartphones for at least the past decade has been Cupertino's suppression of the web, because that is a true threat to the App Store, and Apple's power flows from the monopolies it braids together. As Cory Doctorow observed:
Apple's story – the story of all centralized, authoritarian technology – is that you have to trade freedom for security. If you want technology that Just Works(TM), you need to give up on the idea of being able to override the manufacturer's decisions. It's always prix-fixe, never a la carte.
This is a kind of vulgar Thatcherism, a high-tech version of her maxim that "there is no alternative." Decomposing the iPhone into its constituent parts – thoughtful, well-tested technology; total control by a single vendor – is posed as a logical impossibility, like a demand for water that's not wet.
Doctorow's piece on these outrages is a must-read, as it does what so many in the tech press fail to attempt: connecting patterns of behaviour over time and geography to make sense of Apple's capitulation. It also burrows into the rot at the heart of the App Store: the claim that anybody should have as much power as Apple has arrogated to itself.
We can see clearly now that this micro-authoritarian structure is easily swayed by macro-authoritarians, and bends easily to those demands. As James C. Scott wrote:
I believe that many of the most tragic episodes of state development in the late nineteenth and twentieth centuries originate in a particularly pernicious combination of three elements. The first is the aspiration to the administrative ordering of nature and society, an aspiration that we have already seen at work in scientific forestry, but one raised to a far more comprehensive and ambitious level. “High modernism” seems an appropriate term for this aspiration. As a faith, it was shared by many across a wide spectrum of political ideologies. Its main carriers and exponents were the avant-garde among engineers, planners, technocrats, high-level administrators, architects, scientists, and visionaries.
If one were to imagine a pantheon or Hall of Fame of high-modernist figures, it would almost certainly include such names as Henri Comte de Saint-Simon, Le Corbusier, Walther Rathenau, Robert McNamara, Robert Moses, Jean Monnet, the Shah of Iran, David Lilienthal, Vladimir I. Lenin, Leon Trotsky, and Julius Nyerere. They envisioned a sweeping, rational engineering of all aspects of social life in order to improve the human condition.
— James C. Scott, "Seeing Like A State"
This is also Apple's vision for the iPhone; an unshakeable belief in its own rightness and transformative power for good. Never mind all the folks who get hurt along the way; it is good because Apple does it. There is no claim more central to the mythos of Apple's marketing wing, and no deception more empowering to abusers of power.4
Apple claims to stand for open societies, but POSIWID shows that to be a lie. It is not just corrupted, but itself has become corrupting; a corrosive influence on the day-to-day exercise of rights necessary for democracy and the rule-of-law to thrive.5
Apple's Le Corbusierian addiction to control has not pushed it into an alliance with those resisting oppression, but into open revolt against efforts that would make the iPhone an asset for citizens exercising their legitimate rights to aid the powerless. It scuttles and undermines open technologies that would aid dissidents. It bends the knee to tyranny because unchecked power helps Cupertino stave off competition, preserving (it thinks) a space for its own messianic vision of technology to lift others out of perdition.
If the consequences were not so dire, it would be tragically funny.
I spent a dozen and change years at Google, and my greatest disappointment in leadership over those years was the way the founders coddled the Android team's similarly authoritarian vision.
For the price of a prominent search box on every phone,6 the senior leadership (including Sundar) were willing to sow the seeds of the web's obsolescence, handing untold power to Andy Rubin's team of Java zealots. It was no secret that they sought to displace the web as the primary way for users to experience computing, substituting proprietary APIs for open platforms along the way.
With the growth of Android, Play grew in influence, in part as cover for Android's original sins.7 This led to a series of subtler, but no less effective, anti-web tactics that dovetailed with Apple's suppression of web apps on iOS. The back doors and exotic hoops developers must jump through to gain distribution for interoperable apps remain a scandal.
But more than talking about Google and what it has done, we should talk about how we talk about Google. Specifically, how the lofty goals of its Search origins were undercut by those anti-social, anti-user failures in Android and Play.
It's no surprise that Google is playing disingenuous games around providing access to competitors regarding web apps on Android, while simultaneously pushing to expand its control over app distribution. The Play team covet what Apple have, and far from exhibiting any self-awareness of their own culpability, are content to squander whatever brand reputation Google may have left in order to expand its power over software distribution.
Google is not creating moral distance between itself and Apple, or seeking to help developers build PWAs to steer around the easily-censored channels it markets, and totally coincidentally, taxes.8 Google is Apple's collaborator in capitulation. A moral void, trotting out the same tired tactic of hiding behind Apple's skirt whenever questions about the centralising and authoritarian tendencies of App Store monopolies crop up. For 15 years, Android has been content to pen 1-pagers for knock-off versions of whatever Apple shipped last year, including authoritarian-friendly acquiescence.
Play is now the primary software acquisition channel for most users around the world, and that should cause our tech press to intensify scrutiny of these actions, but that's not how Silicon Valley's wealth-pilled technorati think, talk, or write. The Bay Area's moral universe extends to the wall of the privilege bubble, and no further. We don't talk about the consequences of enshittified, trickle-down tech, or even bother to look hard at it. That would require using Android and…like…eww.
Far from brave truth-telling, the tech press we have today treats the tech the other half (80%) use as a curio; a destination to gawp at on safari, rather than a geography whose residents are just as worthy of dignity and respect as any other. And that's how Google is getting away with shocking acts of despicable cowardice to defend a parallel proprietary ecosystem of gambling, scams, and shocking privacy invasion, but with a fraction of the negative coverage.
And that's a scandal, too.
FOOTNOTES
Does anyone doubt that Tim Apple's wishlist also included a slap-on-the-wrist conclusion to US vs. Apple?
And can anyone safely claim that, under an administration as nakedly corrupt as Donald Trump's, Apple couldn't buy off the DOJ? And what might the going rate for such policy pliability be?
I don't know Wiley Hodges, but the tone of his letter is everything I expect from Apple employees attempting to convince their (now ex-)bosses of anything: over-the-top praise, verging on hagiography, combined with overt appeals to the brand as the thing of value. This, I gather, is how Apple discusses Apple with Apple, not just with the outside world. I have no doubt that this sort of sickly sweet presentation is necessary for even moderate criticism to be legible when directed up the chain. Autocracies are not given to debate, and Apple is nothing if not internally autocratic.
His follow-up post is more open and honest, and that's commendable. You quickly grasp that he's struggling with some level of deprogramming now that he's on the outside and can only extend the benefit of the doubt towards FruitCo as far as the available evidence allows. Like the rest of us, he's discovering that Apple is demanding far more trust than its actions can justify. He's rightly disappointed that Apple isn't living up to the implications of its stated ideals, and that the stated justifications seem uncomfortably ad hoc, if not self-serving.
This discomfort stems from the difference between principle and PR.
Principles construct tests with which we must wrestle. Marketing creates frames that cast one party as an unambiguous hero. I've often joked that Apple is a marketing firm with an engineering side gig, and this is never more obvious than in the stark differences between communicated choices and revealed outcomes.
No large western company exerts deeper control over its image, prying deep into the personal lives of its employees in domineering ways to protect its brand from (often legitimate) critique that might undermine the message of the day. Every Apple employee not named "Tim" submits to an authoritarian regime all day, every day. It's no wonder that the demands of power come so easily to the firm. All of this is done to maintain the control that allows Marketing to cast Apple's image in a light that makes it the obvious answer to "who should rule?"
But as we know, that question is itself the problem.
Reading these posts, I really feel for the guy, and wish him luck in convincing Apple to change course. If (as seems likely) it does not, I would encourage him to re-read that same Human Rights Policy again and then ask: "is this document a statement of principle or is it marketing collateral?" ⇐
The cultish belief that "it is good because we do it" is first and foremost a self-deception. It's so much easier to project confidence in this preposterous proposition when the messenger themselves is a convert.
The belief that "we should rule" is only possible to sustain among thoughtful people once the question "who should rule?" is deeply engrained. No wonder, then, that the firm works so dang hard to market its singular virtue to the internal, captive audience. ⇐
As I keep pointing out, Apple can make different choices. Apple could unblock competing browsers tomorrow. It could fully and adequately fund the Safari team tomorrow. It could implement basic features (like install banners) that would make web apps more viable tomorrow. These are small technical challenges that Apple has used disingenuous rhetoric to blow out of all proportion as it has tried to keep the web at bay. But if Apple wanted to be on the side of the angels, it could easily provide a viable alternative for developers who get edited out of the App Store. ⇐
Control over search entry points is the purest commercial analogue in Android to green/blue messages on iOS. Both work to dig moats around commodity services, erecting barriers to switching away from the OS's provider, and both have been fantastically successful in tamping down competition. ⇐
It will never cease to be a scandal that Android's singular success metric in the Andy Rubin years was “activations.” The idea that more handsets running Android was success is a direct parallel to Zuckian fantasies about “connection” as an unalloyed good.
These are facially infantile metrics, but Google management allowed them to persist well past their sell-by date, with predictably horrendous consequences for user privacy and security. Play, and specifically the hot potato of "GMS Core" (a.k.a. "Play Services"), was tasked with covering for the perennially out-of-date OSes running on client devices. That situation is scarcely better today. At last check, the ecosystem remains desperately fragmented, with huge numbers of users on outdated and fundamentally insecure releases. Google has gone so far as to remove these statistics from its public documentation site to avoid the press asking uncomfortable questions. Insecurity in service of growth is Android's most lasting legacy.
Like Apple, Andy Rubin saw the web as a threat to his growth ambitions, working to undermine it as a competitor at every step. Some day the full story of how PWAs came to be will be told, but suffice to say, Android's rabid acolytes within the company did everything they could to prevent them, and when that was no longer possible, to slow their spread. ⇐
Don't worry, though, Play doesn't tax developers as much as Apple. So Google are the good guys. Right?