The Performance Inequality Gap, 2026
The Budget, 2026 Edition
Let's cut to the chase, shall we? Updated network test parameters for 2026 are:
- 9 Mbps downlink
- 3 Mbps uplink
- 100 ms RTT
Regarding devices, my updated recommendations are the Samsung Galaxy A24 4G (or equivalent) and the HP 14. The goal of these recommendations is to emulate a 75th percentile user experience, meaning a full quarter of devices and networks will perform worse than this baseline.
Plugging these parameters into the updated budget calculator, we can derive critical-path resource thresholds for three- and five-second page load targets. Per usual, we consider pages built in two styles: JS-light, where only 15% of critical-path bytes are JavaScript, and JS-heavy, comprising 50% JavaScript:
| Time | JS-light: Total (MiB) | JS-light: JS | JS-light: Other | JS-heavy: Total (MiB) | JS-heavy: JS | JS-heavy: Other |
|---|---|---|---|---|---|---|
| 3 sec | 2.0 | 0.3 | 1.7 | 1.2 | 0.62 | 0.62 |
| 5 sec | 3.7 | 0.57 | 3.2 | 2.3 | 1.15 | 1.15 |
Note: Budgets account for two TLS connections.
Many sites initiate more early connections, reducing the time available to download resources. Using four connections cuts the three-second budget by 350 KiB, to 1.5 MiB (JS-light) / 935 KiB (JS-heavy). The five-second budget loses nearly half a megabyte, dropping to 3.2 MiB / 1.9 MiB.
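To make the construction of these budgets concrete, here is a toy model in Python. This is a sketch, not the article's actual calculator: the four-RTTs-per-connection setup cost (DNS, TCP, and TLS handshakes) and the 3x CPU-cost multiplier for JavaScript bytes are illustrative assumptions, chosen to roughly reproduce the shape of the table above.

```python
# Toy critical-path budget model (assumptions noted above, not the
# article's exact calculator).

def budget_mib(target_s: float, js_fraction: float,
               downlink_mbps: float = 9.0, rtt_ms: float = 100.0,
               connections: int = 2, rtts_per_conn: float = 4.0,
               js_cpu_multiplier: float = 3.0) -> float:
    # Each new connection burns RTTs on DNS, TCP, and TLS before payload flows.
    setup_s = connections * rtts_per_conn * (rtt_ms / 1000.0)
    transfer_s = max(0.0, target_s - setup_s)
    raw_bytes = transfer_s * downlink_mbps * 1e6 / 8  # megabits -> bytes
    # JS bytes cost extra CPU time, shrinking the byte budget as the
    # JS fraction of the critical path grows.
    effective_cost = (1 - js_fraction) + js_fraction * js_cpu_multiplier
    return raw_bytes / effective_cost / (1024 ** 2)

three_sec_heavy = budget_mib(3.0, 0.50)  # ~1.2 MiB, near the table's figure
five_sec_light = budget_mib(5.0, 0.15)   # ~3.5 MiB
```

Under the default 9 Mbps / 100 ms / two-connection parameters this lands in the same neighbourhood as the table, and it makes the connection penalty visible: passing `connections=4` eats directly into the byte budget.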
It pays to adopt H/2 or H/3 and consolidate connections.
These budgets are extremely generous. Even the target of three seconds is lavish; most sites should be able to put up interactive content much sooner for nearly all users.
Meanwhile, sites are ballooning. The median mobile page is now 2.6 MiB, blowing past the size of DOOM (2.48 MiB) in April. The 75th percentile site is now larger than two copies of DOOM, and P90+ sites are more than 4.5x larger, and sizes at each point have doubled over the past decade. Put another way, the median mobile page is now 70 times larger than the total storage of the computer that landed men on the moon.
An outsized contributor to this bloat comes from growth in JavaScript. Mobile JavaScript payloads have more than doubled since 2015, reaching 680 KiB and 1.3 MiB at P50 and P75 (respectively). This compositional shift exacerbates latent inequality and hurts businesses trying to grow.
When JavaScript grows as a proportion of critical-path resources, its higher CPU cost per byte shrinks the total budget. This coffin-corner effect explains why image- and CSS-heavy experiences perform better byte-for-byte than sites built with the failed tools of frontend's lost decade.
Indeed, the latest CrUX data shows not even half of origins have passing Core Web Vitals scores for mobile users. More than 40% of sites still perform poorly for desktop users, and progress in both cohorts is plateauing:
Core Web Vitals Pass Rate (%)
Source: the Chrome User Experience Report via BigQuery.
This is a technical and business challenge, but also an ethical crisis. Anyone who cares to look can see the tragic consequences for those who most need the help technology can offer. Meanwhile, the lies, half-truths, and excuses offered by frontend's influencer class in defence of these approaches are, if anything, getting worse.
Through no action of their own, frontend developers have been blessed with more compute and bandwidth every year. Instead of converting that bounty into delightful experiences and positive business results, the dominant culture of frontend has leant into self-aggrandising narratives that venerate failure as success. The result is a web that increasingly punishes the poor for their bad luck while paying developers huge salaries to deliver business-undermining results.
Nobody comes to work wanting to do a bad job, but low-quality results are now the norm. This is a classic case of under-priced externalities created by induced demand from developers and PMs living in a privilege bubble.
Embedded in this year's estimates is hopeful news about the trajectory of devices and networks. Compared with early 2024's estimates, we're seeing budget growth of 600+ KiB for three seconds, and a full megabyte of extra headroom at five seconds.1
While this is not enough to overcome continued growth in payloads, budgets are now an order of magnitude more generous than those first sketched in 2017. It has never been easier to deliver pages quickly, but we are not collectively hitting the mark.
To get back to a healthy, competitive web, developers will need to apply considerably more restraint. If past is prologue, moderation is unlikely to arise organically. It's also unhelpful to conceive of ecosystem-level failures as personal failings. Yes, today's frontend culture is broken, but we should not expect better while incentives remain misaligned.
Browsers, search engines, and developer tools will need to provide stronger nudges, steering users away from bloated sites where possible, and communicating the problem to decision-makers. This will be unpopular, but it is necessary for the web to thrive.
Recommended Test Devices and Settings
This series has continually stressed that today's P75 device is yesterday's mid-market Android, and that trend continues.
The explosive smartphone market growth of the mid 2010s is squarely in the rearview mirror, and so historical Average Selling Prices (ASPs) and replacement dynamics now dominate any discussion of fleet performance.
Hardware upgrade timelines are elongating. The previous estimate of an 18-month average replacement cycle now looks too rosy, with the median smartphone now living longer than two years. P75 devices may be nearly three years old, and TechInsights estimates a 23.7% annual replacement rate.
With all of this in mind, we update our target test device to the Samsung Galaxy A24 4G, a mid-2023 release featuring an SoC fabbed on a 6nm process, a notable improvement over previous benchmark devices.
The A24 sold for less than the global Average Selling Price for smartphones at launch ($250 vs. $353). Because that specific model may be hard to acquire for testing, anything based on the MediaTek Helio G99 or Samsung Exynos 1330 will do just fine; e.g.:
Teams serious about performance should track the low-end cohort instead, sticking with previously acquired Samsung Galaxy A51s, or any late-model device from the Moto E range.2
For link-accurate network throttling, I recommend spending $3 for Throttly for Android. It supports custom network profiles, allowing you to straightforwardly emulate a 9/3/100 network. DevTools throttling will always be inaccurate, and this is the best low-effort way to correctly condition links on your primary test device.
Desktops are not the limiting factor, but it's still helpful to have physical test devices. You should not spend more than $250 (new) on a low-end test laptop. It should have a Celeron processor, eMMC storage, and run Windows. The last point is not an effort to sell more licences, but rather to represent the nasty effects of Defender, NTFS, and parasitic background services on system performance. Something like the HP 14 dq3500nr fits the bill.
Desktop network throttling remains fraught, and the best solutions are still those from Pat Meenan's 2016 article announcing winShaper.
The Big Story Is Still Low-End Android
What we see in our recommended test setups is an echo of the greatest influence of the past decade on smartphone performance: the spread of slow, ever-cheaper Androids with ageing chipsets, riding the cost curve downward, year-on-year.
The explosive growth of this segment drove nearly all market growth between 2011 and 2017. Now that smartphones have reached global saturation, flat sales volumes mirror the long-term trends in desktop device ownership:
At no point in the past dozen years has iOS accounted for more than 20% of new-device shipments. Quarter-by-quarter fluctuations have pushed that number as high as 25% when new models are released, but the effect never lasts.
Most phones — indeed, most computers — are 24+ month-old Androids. This is the result of a price-segmented market: a preponderance of smartphones sold for more than $600 USD (new, unlocked) are iPhones, and the overwhelming majority of devices sold for less than that are slow Androids.
Average Selling Prices
New, unlocked, worldwide, in nominal dollars.
Source: IDC, Statista, and Counterpoint Research.
Not adjusted for inflation. Full year 2025 ASP is extrapolated from the first several quarters and may be revised upward.
The “i” in “iPhone” stands for “inequality.”
Global ASPs show the low-end isn't just alive-and-well, it's many multiples of high-end device volume. To maintain a global ASP near $370, an outsized number of cheaper Androids must be sold for every $1K (average) iOS device.
The Landscape
To understand how the payload budget estimate is derived, we need to peer deeper into the device and network situation. Despite huge, unpredictable shocks in the market (positive: Reliance Jio; negative: a pandemic), the market trends this series tracks have allowed us to forecast accurately.
75th+ percentile users are almost always on older devices, meaning we don't need to divine what will happen, just remember the recent past.
Mobile
The properties of today's mobile devices define how our sites run in practice. From the continued prevalence of 4G radios, to the shocking gaps in CPU performance, the reality of the modern web is best experienced through real devices. The next best way to understand it is through data.
Per usual, I've built single and multicore CPU performance charts for each price point; we track four groups:
- Fastest iOS device
- Fastest Android
- Mid-range Android ($300-350)
- Low-end Android ($100-150)
The last two cohorts account for more than 2/3 of new device sales:
The performance of JavaScript-based web experiences is heavily correlated with single-core speed, meaning that the P75 device places a hard cap on the amount of JavaScript that is reasonable for any website to rely on.
Depressingly, budget device CPUs have not meaningfully improved since 2022. But the nearly-identical SoCs in each year's device are getting cheaper. Reduced bill-of-materials costs mean declining retail prices for low-spec phones.
Meanwhile, the high end continues to pull away. As previewed in prior instalments of this series, top-end Androids are beginning to close the massive performance lead that Apple's A-series chips have opened up over the past decade. This is largely thanks to Qualcomm and MediaTek finally starting to address the cache-gap I have harped on since 2016:
Total SoC Cache (MiB)
L1 + L2 + L3 + System-level Caches
Source: Wikipedia, vendor documentation, and author's calculations.
Some cache sizings are estimates, particularly in the Android ecosystem, where vendor documentation is lacking. Android SoC vendors have a habit of implementing the smallest values ARM allows for a licensed core design.
The latest iPhone chip (A19 Pro) features truly astonishing amounts of cache. For a sense of scale, the roughly 50 MiB of L1, L2, and L3 cache in an iPhone 17 Pro provides 8.3 MiB of cache per core. This is more than double the per-core cache of Intel's latest high-end desktop part, the 285K, which provides a comparatively skimpy 3.3 MiB of combined cache per core.
The gobsmacking caches of A-series parts allow Apple to keep the beast fed, leading to fewer stalls and more efficient use of CPU time. This advantage redounds to better battery life because well-fed CPUs can retire work faster, returning to lower-power states sooner.
That it has taken this long for even the top end of the Android ecosystem to begin to respond is a scandal.
Single-Core performance per $
Source: GSMArena, Geekbench, and vendor documentation.
Geekbench 6 points per dollar at each price point over time. Prices are MSRP at date of launch.
If there's good news for buyers of low-end devices, it's that performance per dollar continues to improve across the board.
Frustratingly, though, low-end single core performance is still 9x slower than contemporary iPhones, and mid-tier devices remain more than 3.5x slower:
Multicore performance tells a similar tale. As a reminder, this metric is less correlated with web application performance than single-core throughput:
Geekbench 6 Multi-Core scores
Source: GSMArena, Geekbench.
When Geekbench reports multiple scores for an SoC, recent non-outlier values are selected.
Performance per dollar looks compelling for the lower tier parts, but recall that they are 1/3 to 1/5 the performance:
Multi-Core performance per $
Source: GSMArena, Geekbench, and vendor documentation.
Market Trends
What makes iPhones so bloody fast, while Androids have languished? Several related factors in Apple's chip design and development strategy provided a commanding lead over the past decade:
- In-house tuning of all ARM-designed cores, thanks to a long-ago negotiated Architecture licence, leading to aggressive cache sizings.
- Concentration in fewer SoC SKUs enabled focused yield optimisation.
- Large, early orders with TSMC secured exclusive access to the latest fabrication nodes.
Android vendors, meanwhile, have spread their SoC development budgets in penny-wise, pound-foolish fashion. Even Google and Samsung's in-house efforts have failed to replicate the virtuous effects of alignment that Apple's discipline in CPU design has delivered.
Process Node (nm)
Source: GSMArena, Wikipedia, and vendor documentation.
Feature sizes are a fudge below 10 nanometres, but marketing names usually reflect real increases in transistor density and frequencies, along with reductions in power use. High-end Android and iOS parts have generally been produced on comparable nodes, with Apple's lead lasting less than a year. But that's less than half the story. Android SoC vendors have avoided adding competitively sized caches, dedicating the same mm^2 on die to higher core counts and on-die radios. From a performance perspective, this has been catastrophic:
Core Counts
Source: GSMArena, Wikipedia, and vendor documentation.
Core counts are a headline fixture of device marketing, but even the cheapest phones have featured eight (slow, memory-starved) cores since 2019. Speed comes from other properties; namely appropriate cache sizing, memory latency, frequency scaling, and chip architecture. Apple's core-count restraint and focus on other aspects should have been a lesson to the entire industry long before 2024.
Smaller transistors also allow for higher peak frequencies, giving Apple a perennial advantage thanks to early access to TSMC's latest, smallest, power-sipping processes:
Maximum CPU Frequency (GHz)
Source: GSMArena, Wikipedia, and vendor documentation.
Maximum advertised frequency of the fastest on-package core.
These trade-offs have allowed Apple to charge a premium for devices which no other vendor can justify:
Prices vs. Average Selling Price (ASP)
Source: GSMArena, IDC, Statista, and Counterpoint Research.
Prices are new, unlocked MSRP at launch.
Global ASPs are conservative estimates; some research groups estimate values 10-15% higher, but the trends are consistent. The premium end continues to pull away in both price and performance, dragging ASPs slightly upward. Meanwhile, high-end devices continue to be outsold more than 3:1 by low-end phones that aren't getting faster. Those slow devices are, however, getting increasingly inexpensive, hovering near $100. That's 1/10th the price of the cheapest iPhone with Apple's fastest chip.
Rise of the Refurbs
Thanks to market trends, the recent-spec iPhones many web developers carry don't even represent the experience of most iOS users.
It may seem incongruous that the ASP of iOS devices is bumping up against the $1K mark while most developed countries experience bouts of slow growth post-pandemic, along with well-documented cost-of-living crises among middle-class buyers.
One possible solution to this riddle is that Android sales remain strong, as Apple is cordoned into the segment of the market it can justify to shareholders with a 30% net margin. Another is growth in the resale market, particularly for iOS devices. The extended longevity of these phones thanks to premium components and better-than-Android software support lifetimes has created a vibrant and growing market below the $400 price point for used and refurbished smartphones.
This also helps to explain the flat-to-slightly-declining market for new smartphones, as refurbished devices accelerate past $40BN/yr in sales.
What does this mean? We should expect, and see, higher-than-first-sale volumes of iOS use in various aggregate statistics. Wealth effects have historically explained much of this, but the scale is growing. Refurbishment and resale are now likely to be driving growing discontinuity in the data:
Not only is an iPhone 17 Pro not real life in the overall market, it isn't even real life in the iOS market any more.
Desktop
The situation on desktop is one of overwhelming stasis, modulo the Windows 11 upgrade cycle resulting from Windows 10's EOL at the end of 2025. That has driven an unusually strong one-off cycle of upgrades that will reverberate through the data in coming years.
This impetus to upgrade is cross-pressured by pricing headwinds. Economic uncertainty, tariffs, scrambled component pricing from AI demand for silicon of all sorts, and ever-longer device replacement cycles all mean that new PCs may not provide more than incremental performance gains. As a result, an increase in recent worldwide PC Average Selling Prices from ~$650 in our previous estimate to ~$750 in 2025 (per IDC) may not indicate premiumisation. In a globally connected economy, inflation comes for us all.
Overall, the composition and trajectory of the desktop market remains stable. Despite 2025's device replacement boomlet, IDC predicts stasis in “personal computing device” volumes, with growth bumping along at ±1-2% a year for the next 5 years, and landing just about where things are today. The now-stable baseline of ~410MM devices per year is predicted to be entirely flat into 2030.
Top-line things to remember about desktops are:
- 80% of “desktop” devices are laptops, tablets, and other battery-powered form-factors. The performance impact of thermal and power envelopes for these devices is drastic; many cores are spun down to low power states most of the time; symmetric multiprocessing is now a datacentre curio.
- Flaky Wi-Fi, rather than wired Ethernet, is now the last mile to most desktops.
- Desktops (including laptops) are only 15-18% of web-capable device sales; a pattern that has been stable for a decade.
Put another way: if you spend a majority of your time in front of a computer looking at a screen that's larger than 10", you live in a privilege bubble.
Evidence From the Edge User Base
Per previous instalments, we can use Edge's population-level data to understand the evolution of the ecosystem. As of late 2025, the rough breakdown looks like:
| Device Tier | Fleet % | Definition |
|---|---|---|
| Low-end | 30% | Either: <= 4 cores, or <= 4 GB RAM |
| Medium | 60% | HDD (not SSD), or 4-16 GB RAM, or 4-8 cores |
| High | 10% | SSD + > 8 cores + > 16 GB RAM |
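The tier definitions above can be codified directly. A minimal sketch, with thresholds taken from the table; the evaluation order (low-end checked first, then high) is an assumption about how overlapping criteria resolve:

```python
def device_tier(cores: int, ram_gb: int, ssd: bool) -> str:
    """Classify a device using the fleet-tier thresholds from the table."""
    # Low-end wins if either the weak-CPU or the low-RAM criterion is met.
    if cores <= 4 or ram_gb <= 4:
        return "low-end"
    # High requires all three: SSD, more than 8 cores, more than 16 GB RAM.
    if ssd and cores > 8 and ram_gb > 16:
        return "high"
    # Everything else (HDD, 4-16 GB RAM, or 4-8 cores) is the middle tier.
    return "medium"
```

For example, a machine with 8 cores, 16 GB of RAM, and an SSD still lands in the middle tier.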
Compared with the data from early 2024, we see some important shifts:
- Low-end devices have fallen from ~45% to ~30% of the fleet.
- Most growth is in the middle tier, growing from 48% to 60%.
- The high-end is growing slowly.
Because the user base is also growing, it's worth mentioning that the apparent drop in the low-end is a relative change. In absolute terms, low-spec machines are leaving the fleet only slowly. This matches what we should intuitively expect from incremental growth of the Edge user base, which is heavily skewed to Windows 11.
Older and slower devices likely constitute an even larger fraction of the total market, but may be invisible to us. Indeed, computers with spinning rust HDDs, <= 4GB of RAM, and <= 2 cores still represent millions of active devices in our data. Alarmingly, they have dropped by less than 20% in absolute terms over the past six months. And remember, this isn't even the low end of the low end, as our stats don't include Chromebooks.
Building to the limits of “feels fine on my MacBook Pro” has never been realistic, but in 2025 it's active malpractice.
Networks
The TL;DR for networks for 2026 is that the P75 connection provides 9 Mbps of downlink bandwidth and 3 Mbps of uplink, with 100 ms of RTT latency.
This 9 Mbps down, 3 Mbps up configuration represents a sizeable uplift from the 2024/2025 guidance of 7.2 Mbps down, 1.4 Mbps up, with 94 ms RTT, but also a correction: 2024's estimate for latency was probably off by ~15%, and should have been set closer to 110 ms.
Global or ambitious firms should target networks even slower than 9/3/100 in their design parameters; the previous 5/1/28 “cable” network configuration is still worth building for, but with an upward adjustment to latency (think 5/1/100). Uneven service has the power to negatively define brands, and the way to remain in users' good graces digitally is to build for the P95+ user. Smaller, faster sites that serve this cohort well will stand out as exceptional, even on the fastest networks and devices.
Looking forward into 2026 and 2027, the hard reality is that networks remain slower than developers expect and will not improve quickly.
Upload speeds, in particular, remain frustratingly slow. This matters because wider upload channels correlate with faster early-session downloads: requests, cookies, and ACKs all flow upstream. Only users on the fastest 10% of networks see downlink:uplink bandwidth ratios rise above 2.5:1, even under ideal conditions.3
Owing to physics, device replacement rates, CAPEX budgets of mobile carriers, inherent variability in mobile networks (vs. fixed-line broadband), and worldwide regulatory divergence, we should expect experiences to be heavily bottlenecked by networks for the foreseeable future.
Previous posts in this series have documented improvements, but as we have been saying since 2021, 4G is a miracle, and 5G is a mirage.
This will remain true for at least another three years, with bandwidth and latency improving only incrementally. Sites that want to reach their full potential, if only to beat the competition, must build to budgets that are inclusive for folks on the margins. When it comes to web performance, doing well is the same as doing good.
Bandwidth
Bandwidth numbers are derived from Cloudflare's incredible Radar dataset.4 Looking at (downlink) bandwidth trends over the past year, we see stasis:
So where does the improvement in our estimate come from? Looking back to 2024, we see (predicted) gains emerge, but note their small absolute size:
The gap between P25 and P75 downlinks was 15 Mbps at the start of 2024 and has grown to 21 Mbps at the end of 2025; an increase of 40%. Meanwhile, bottom-quartile links are only 28% faster, improving from ~7 to 9 Mbps. Wealthier users saw 3x as much absolute gain.
The performance inequality gap is growing at the network layer too.
Latency
Latency across networks (RTTs) is improving somewhat, with a nearly 10% decrease at P75 over the past year, from ~110ms to ~100ms. Small improvements on faster links (P50, P25) look to be in the 5% range:
Given the variability at higher percentiles, we'll stick with a 100ms target for the 2026 calendar year, although we should expect slight gains.
Underlying these improvements are datacentre build outs in traditionally underserved regions, undersea cable completions, and cellular backhaul improvements from the 5G build out. Faster client CPUs and radios will also contribute meaningfully and predictably.5
All of these factors will continue to incrementally improve, and we predict another ~5% improvement (to 95ms) at P75 for 2027.
Looking Forward
Gains will be modest for both bandwidth and latency over the next few years. The lion's share of web traffic is now carried across mobile networks, meaning that high percentiles represent users feeling the confounding effects of cellular radios, uneven backhaul, coverage gaps, and interference from the built environment. Those factors are hardest and slowest to change, involving the largest number of potential long tent poles.
As discussed in previous instalments, the world's densest emerging markets reached the smartphone tipping point by 2020, displacing feature phones for most adults. More recently, we have approached saturation; the moment at which smartphone sales are dominated by replacements, rather than first-time sales in those markets. Affordability and literacy remain large challenges to get the residually unconnected online. The 2025 GSMA State of Mobile Internet Connectivity report has a full chapter on these challenges (PDF), complete with rich survey data.
What we see in now-stable smartphone shipment volumes primes the pump for improvements in device performance. First-time smartphone users invest less in their devices (relative to income) as the value may be unclear; users shopping for a second phone have clearer expectations and hopes. Having lived the frustration of slow sites and apps, an incrementally larger set of folks will be willing to pay a bit more to add 5G modems to their next phone.
This effect is working its way down the price curve from the ultra-high-end in 2020 to the mid-tier in 2024, but it has yet to reach the low end. Our latest low-end specimen — 2025's Moto E15 — still features a 4G radio. Because of the additional compute requirements that 5G places on SoCs, we will expect to see process node and CPU performance increases as 5G costs fall far enough to impact the low-price band.
Today, devices with otherwise similar specs and a 5G radio still command a considerable premium. The Galaxy A16 5G was introduced at a 20% premium over the 4G version, mirroring the mid-market dynamic from 2022 and 2023 where devices were offered in “regular” and (pricier) 5G versions. It will take a transition down the fab node scale like we saw for mid-market SoCs in 2022 and 2024 to make 5G a low-end table-stakes feature. I'm not holding my breath. Given the current market for chips of all types, we may be seeing low-spec SoCs produced at 12nm for several more years, reprising the long-term haunting of Android by A53 cores produced at 28nm from 2012 to 2019.
Content Trends
The budget estimates generated in this series may seem less relevant now that tools like Core Web Vitals provide nuanced, audience-specific metrics that allow us to characterise important UX attributes. But this assumption is flawed thanks to the effects of deadweight losses and the biases inherent in those metrics.
Case-in-point: last year CWV deprecated FID and replaced it with the (better calibrated) INP metric. This predictably dropped the CWV pass rates of sites built on desktop-era JavaScript frameworks like React, Angular, and Ember:
RUM data, in isolation, undercounts the opportunity costs of slow and bloated experiences. Users have choices, and lost users do not show up in usage-based statistics.
A team I worked with this year saw these effects play out directly in their CrUX data:
This high-profile, fast-growing site added nearly 100 KiB of critical-path JavaScript per month from January to June. The result? A growing share of visits from desktop devices, and proportionally fewer mobile users every month. Once we began to fumigate for dead code and overeager preloading, mobile users returned.
These effects can easily overcome other factors, particularly in the current era of JavaScript bundle hyper-growth:
Are SPAs Working?
Perhaps the most important insight I spotted while re-immersing myself in the data for this post was the implication of these charts from the RUM Archive:
The top-line takeaway is chilling: sites that are explicitly designed as SPAs, and which have intentionally opted in to metrics measurement around soft-navigations are seeing one (1) soft-navigation for every full page load on average.
The rinky-dink model we discussed last year for the appropriateness of investing in SPA-based stacks is a harsh master, defining average session performance as the sum of interaction latencies, including initial navigation, divided by the total number of interactions (excluding scrolling):
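In code, that model reads as follows. This is a sketch of the formula described above; the latency figures in the example are illustrative assumptions, not measurements:

```python
def avg_session_latency_ms(initial_nav_ms: float,
                           interaction_ms: float,
                           interactions: int) -> float:
    """Average session performance: the sum of interaction latencies,
    including the initial navigation, divided by total interactions."""
    total_ms = initial_nav_ms + interactions * interaction_ms
    return total_ms / (1 + interactions)

# A JS-heavy SPA: slow first load, fast soft navigations thereafter.
spa = avg_session_latency_ms(8000, 100, 1)   # 4050.0 ms at 1 soft nav
# A lightweight MPA: every navigation is a (quick) full page load.
mpa = avg_session_latency_ms(1500, 1500, 1)  # 1500.0 ms
```

At one soft navigation per full load (roughly what the RUM Archive reports), the SPA's up-front investment never pays off; under these illustrative numbers it only breaks even at around five soft navigations per load, far deeper than the observed sessions.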
If the RUM Archive's data is directionally correct at an ecosystem level, for both mobile and desktop, sessions this shallow make a mockery of the idea that we can justify more up-front JavaScript to deliver SPA technology, even on sites with reason to believe it would help.
In private correspondence, Michal Mocny shared an early analysis from data collected via the Soft Navigations API Origin Trial. Unlike the Akamai mPulse data that feeds the RUM Archive, Chromium's data tracks interactions from all sites, not only those that have explicitly opted in to track soft navigations, providing a much wider aperture. On top-10K origins, Chrome is currently observing values between 1.6 and 2.2, depending on how the analysis is run, or 0.8-1.1 additional soft navigations per initial page load.
It's difficult to convey the earth-shattering magnitude of these congruent findings. Under these conditions, the amount of JavaScript a developer can justify up-front to support follow-on in-page navigations is de minimis.6
This should shake our industry to the bone, driving rapid reductions in emitted JavaScript. And yet:
Analysis and Conclusions
This series has three main goals:
- Provide a concrete set of page-weight targets for working web developers.
- Arm teams with an understanding of how the client-side computing landscape is evolving.
- Show how budgets are constructed, giving teams tools to construct their own estimates from their own RUM and market data.
This is not altruism. I want the web to win. I began to raise the alarm about the problems created by a lack of adaptation to mobile's constraints in 2016, and they have played out on the same trend-line I feared. The web is now decisively losing the battle for relevance.
To reverse this trend, I believe several (currently unmet) conditions must be fulfilled:
- Reaching users via the web must be cost-competitive with other digital channels, meaning that re-engagement features like being on the home screen and in the notification tray must work correctly.
- The web must deliver the 80/20 set of critical capabilities for most of the important JTBDs in modern computing, but in a webby (safe, privacy-respecting by default) way.
- Web experiences, on average, have to feel responsive enough that the idea of tapping a link doesn't inspire dread.
But the web is not winning mobile. Apple, Google, and Facebook nearly extinguished the web's potential to disrupt their cosy arrangement. Preventing the web from breaking out — from meaningfully delivering app-like experiences outside an app store — is essential to maintaining dominance. But some are fighting back, and against the odds, it's working.
What's left, then, is the subject of this series. Even if browser competition comes to iOS and competitors deliver the features needed to make the web a plausible contender, the structure of today's sites is an impediment to a future in which users prefer the web.
Most of the world's computing happens on devices that are older and slower than anything on a developer's desk, and connected via networks that contemporary “full-stack” developers don't emulate. Web developers almost never personally experience these constraints, and over frontend's Lost Decade, this has created an out-of-touch privilege bubble that poisons the products of the teams that follow the herd, as well as the broader ecosystem.
That's bleak, but the reason I devote weeks to this research each year isn't to scold. The hope is that actionable targets can help shape choices, and that by continuing to stay engaged with the evolution of the landscape, we can see green shoots sprout.
If we can hold down the rate of critical-path resource growth, hardware progress will have time to overtake our excesses. If we make enough progress in that direction, we might even get back to a place where the web is a contender in the minds of users.
And that's a future worth working for.
FOOTNOTES
I really do try to avoid being an unremitting downer, but the latest device in our low-cost cohort — the Motorola E15 — shows no improvement in CPU benchmarks over last year's model.
More worrying, no device in that part of the market has delivered meaningful CPU gains since 2020. That's five years of stasis, and a return to a situation where new low-end devices are half as fast as their mid-tier contemporaries. Even as process node improvements trickle down to the $300-350 price bracket, the low end is left further and further behind.
As the wider Android ecosystem experienced from 2015-2020, devices with the same specs are getting cheaper, but not better. This allows them to open new markets and sell in massive numbers, helping to prop up overall annual device sales, even as devices last longer and former growth markets (India, Indonesia, etc.) hit smartphone saturation. This is reflected in the low-end models finally sinking below $100USD new, unlocked, at retail. But to hit this price-point, they deliver performance on par with 2019's mid-tier Galaxy A50; a phone whose CPU was fabbed on a smaller process node than today's latest low-end phones.
Services trying to scale, and anyone trying to build for emerging markets, should be anchoring P90 or P95, not P75. For serious shops, the target has not moved much at all since 2019.
This reality alone is enough to justify rejection of frameworkist hype-shilling, without even discussing the negative impacts of JS-first stacks on middle-distance team velocity and site maintenance costs. ⇐
Because low-end and medium-tier devices were so similar until very recently, this differentiation wasn't necessary. But progress in process nodes does eventually trickle down. The mid-tier began to see improvement away from the (utterly blighted) 28nm node in 2017, a mere 4 years after the high-end decamped for greener pastures. The low-end, meanwhile, was trapped in that register until 2020, nearly a decade after 28nm was first introduced. Since then, the mid tier has tracked process node scaling with a 2-3 year delay, while the low end has gotten stuck since 2021 at 12nm.
The failure to include meaningful amounts of cache in Android SoCs levelled out low-end and medium tier performance until 2023, but transistor shrinkage and architecture improvements at ~$350 are now decoupling the performance of these tiers once again, and we should expect to see them grow further apart in coming years, creating a worrying gap between P90+ and P75 devices. ⇐
Cloudflare's worldwide bandwidth data only provides downlink estimates, and so to understand the downlink:uplink bandwidth ratio, I turned instead to their API and queried CF's speed test data, which provide fuller histograms.
These tests are explicitly run by users, meaning they occur under synthetic conditions and likely with a skew towards best-case network performance. They also report maximums over a session, rather than loaded network behaviour, which explains why the speed values reported there are higher than those in the more realistic "Internet Quality Index" dataset we use for primary bandwidth and latency analysis.
The data we can derive from it is therefore much rosier, but it does give us a sense for downlink/uplink ratios (bitrates are in Mbps):
| %-ile | Download | Upload | Ratio |
|---|---|---|---|
| 20th | 15 | 5 | 3 |
| 25th | 25 | 10 | 2.5 |
| 30th | 30 | 10 | 3 |
| 40th | 50 | 20 | 2.5 |
| median | 85 | 35 | 2.4 |
| 60th | 125 | 50 | 2.5 |
| 70th | 220 | 85 | 2.6 |
| 75th | 280 | 100 | 2.8 |
| 80th | 345 | 140 | 2.5 |
| 90th | 525 | 280 | 1.9 |

Because the data is skewed in an optimistic direction (thanks to usage biases towards wealth, which correlates with high-performance networks), we pick a 3:1 ratio in our global baseline.
Despite variance in the lower percentiles, it is reasonable to expect tough ratios in the bottom quartile given the build-out properties of various network types. These include:
- Asymmetries in cable and DSL channel allocations.
- Explicit frequency/bandwidth allocation in cellular networks.
- Radio power lopsidedness vs. the base stations they connect to, particularly for battery-powered devices.
Even new networks like Starlink are spec'd with 10:1 or greater ratios. Indeed, despite being "fast", the author's own home fixed-line connection has a ratio greater than 30:1. We should expect many such discrepancies up and down the wealth spectrum.
A 4:1 or 5:1 ratio is probably justified, and previous estimates used 5:1 ratios for that reason. Lacking better data, going with 3:1 is a judgement call, and I welcome feedback on the choice. ⇐
Why am I relying on Cloudflare's data?
Google, Microsoft, Amazon, Fastly, Akamai, and others obviously have similar data (at least in theory), but do not publish it in such a useful and queryable way. That said, these estimates are on trend with my priors about the network situation developed from many sources over the years (including non-public datasets).
There is a chance Cloudflare's data is unrepresentative, but given their CDN market penetration, my primary concern is that their data is too rosy, rather than too pessimistic. Why? Geographic and use-based bias effects.
The wealthy are better connected and heavier internet users, generating more sessions. Better performance of experiences increases engagement, so we know CF's data contains a bias towards the experiences of the affluent. This potentially blinds us to a large fraction of the theoretical TAM and (I think) convincingly argues that we should be taking a P90 value instead of P75. However, we stick with P75 for two reasons:
- It would be incongruent to cite P90 this year without first introducing it in previous instalments.
- A lack of explicitly triangulating data from the current network environment makes it challenging to judge the magnitude of use-based biases in the data.
Thankfully, Cloudflare also produces country-level data. We can use this to cabin the scale of potential issues in global data. Here, for instance, are the P75 network situations for a few populous geos that every growth-oriented international brand must consider in descending downlink speed:
| Geo @ P75 | Down | Up | RTT | Pop (MM) |
|---|---|---|---|---|
| UK | 21 | 7 | 34 | 69 |
| USA | 17 | 5.5 | 47 | 340 |
| Brazil | 12 | 4 | 60 | 213 |
| Global | 9 | 3 | 100 | |
| Indonesia | 6.4 | 2.1 | 75 | 284 |
| India | 6.2 | 2.1 | 85 | 1,417 |
| Pakistan | 4 | 1.3 | 130 | 241 |
| Nigeria | 3.1 | 1 | 190 | 223 |

Underweighting the less-affluent is a common bias in tech, and my consulting experience has repeatedly reconfirmed what Tammy Everts writes about when it comes to the opportunities that are available when sites push past performance plateaus.
There is no such thing as “too fast”, but most teams are so far away from minimally acceptable results that they have never experienced the huge wins on the other side of truly excellent and consistent performance. Entire markets open up when teams expand access through improved performance, and wealthy users convert more too.
It's this reality that lemon vendors have sold totally inappropriate tools into, and the results remain shockingly poor. ⇐ ⇐
As we mentioned in the last instalment, improvements in mid-tier and low-end mobile SoCs are delivering better network performance independent of protocol and spectrum improvements.
Modern link-layer cell and Wi-Fi stacks rely heavily on client-side compute for the digital signal processing necessary to implement advanced techniques like MIMO beam forming.
This makes the device replacement rates doubly impactful, even within radio generations and against fixed channel capacity. As process improvements trickle down (glacially) to mid-tier and low-end SoCs, the radios they contain also get faster processing, improving latency and throughput, ceteris paribus. ⇐
The RUM Archive's soft-to-hard navigations ratio and the early data from the Chromium Soft Navigations Origin Trial leave many, many questions unanswered including, but not limited to:
- What's the distribution?
- Globally: do some SPA-premised sites have many more or many fewer soft-navigations? Are only a few major sites pushing the ratios up (or down)?
- Locally: can we characterise users' sessions to understand what fraction trigger many soft-navigations per session?
- Do other data sources agree?
- Will the currently-running Origin Trial for Soft Navigations continue to agree as the reach grows?
- Can other RUM vendors validate or refute these insights?
- What about in-page changes not triggering URL updates?
- How should infinite-scrolling be counted?
- We should expect Chromium histogram data to capture more of this vs. the somewhat explicit instrumentation of mPulse, driving up soft-navigations per hard navigation. Do things stay in sync in these data sets over time?
Given the scale of the mystery, a veritable stampede of research in the web performance community should follow. I hope to see an explosion of tools to guide teams toward the most appropriate architectures based on comparative data within their verticals, first-party RUM data about session lengths, the mono/bi/tri/multi-modality of session distributions, and other situated factors.
The mystery I have flicked at in the past is now hitting us smack in the face. Will we pay attention? ⇐
The App Store Was Always Authoritarian
And now we see it clear, like a Cupertino sunrise bathing Mt. Bielawski in amber: Apple will censor its App Store at the behest of the Trump administration without putting up a fight.
It will twist words into their antipodes to serve the powerful at the expense of the weak.
To better serve autocrats, it will talk out both sides of its mouth in ways it had previously reserved for dissembling arguments against threats to profits, like right-to-repair and browser choice.
They are, of course, linked.
Apple bent the knee for months, leaving many commentators to ask why. But the reasons are not mysterious: Apple wants things that only the government can provide, things that will defend and extend its power to extract rents, rather than innovate. Namely, selective exemption from tariffs and an end to the spectre of pro-competition regulation that might bring about real browser choice.
Over the past few weeks, Tim Apple got a lot of what he paid for,1 with the full weight of the US foreign and industrial policy apparatus threatening the EU over DMA enforcement. This has been part of a full-court press from Cupertino. Apple simultaneously threatened the EU while rolling out fresh astroturf for pliant regulators to recline on. This is loud, coordinated, and calculated. But calculated to achieve what? Why is the DMA such a threat to Apple?
Interoperability.
The DMA holds the power to unlock true, safe, interoperability via the web. Its core terms require that Apple facilitate real browser engine choice, and Apple is all but refusing, playing games to prevent powerful and safe iOS browsers and the powerful web applications they facilitate. Web applications that can challenge the App Store.
Unlike tariffs, which present a threat to short-term profits through higher costs and suppression of upgrades in the near term, interoperability is a larger and more insidious boogeyman for Apple. It could change everything.
Who's Afraid of the Big Bad Web?
Apple's profits are less and less attributable to innovation as “services” revenue swells Cupertino's coffers out of all proportion to iPhone sales volume. “Services” is code for rent extraction from captive users and developers. If users could safely acquire apps outside the App Store, Apple wouldn't be able to take 30% from an outlandishly large fraction of the digital ecosystem's wealthiest players.
Apple understands browser choice is a threat to its rentier model. The DMA holds the potential for developers to finally access the safe, open, and interoperable web technologies that power most desktop computing today. This is a particular threat to Apple, because its class-leading hardware is perfectly suited to running web applications. All that's missing are browsers that aren't intentionally hobbled. This helps to explain why Apple simultaneously demands control over all browser technology on iOS while delaying important APIs, breaking foundational capabilities, and gaslighting developers about Apple's unwillingness to solve pressing problems.
Keeping capable, stable, high-quality browsers away from iOS is necessary to maintain the App Store's monopoly on the features every app needs. Keeping other software distribution mechanisms from offering those features at a lower price is a hard requirement for Cupertino's extractive business model. The web (in particular, PWAs) presents a worst-case scenario.
Unlike alternative app stores that let developers decouple distribution of proprietary apps from Apple's App Store, PWAs further free developers from building for each OS separately, allowing them to deliver apps though a zero-cost platform that builds on standards. And that platform doesn't feature a single choke point. For small developers, this is transformative, and it's why late-stage Apple cannot abide laws that create commercial fairness and enable safe, secure, pro-user alternatives.
Dots Unconnected
This is what Apple is mortgaging its brand (or, if you prefer, soul) to prevent: a world where users have a real choice in browsers.
Horrors.
Apple is loaning its monopoly on iOS software to yet another authoritarian regime without a fight, painting a stark contrast: when profits are on the line, Cupertino will gaslight democratic regulators and defy pro-competition laws with all the $1600/hr lawyers Harvard can graduate. And when it needs a transactional authoritarian's help to protect those same profits, temporarily2 lending its godlike power over iOS to censor clearly protected speech isn't too high a price to pay. Struggle for thee, but not for me.
The kicker is that the only alternative for affected users and developers is Apple's decrepit implementation of web apps; the same platform Cupertino serially knee-caps to deflect competition with its proprietary APIs.
It is no exaggeration to say the tech press is letting democracy down by failing to connect the dots. Why is Apple capitulating? Because Apple wants things from the government. What are those things? We should be deep into that debate, but our reportage and editorial classes cannot grasp that A precedes B. The obvious answers are also the right ones: selective protection from tariffs, defanged prosecution by the DOJ, and an umbrella from the EU's democratic, pro-competition regulation.
The Verge tiptoed ever so close to getting it, quoting letters that former Apple executives sent the company:3
“I used to believe that Apple were unequivocally ‘the good guys,’” Hodges writes. “I passionately advocated for people to understand Apple as being on the side of its users above all else. I now feel like I must question that.”
This is a clue; a lead that a more thoughtful press and tech commentariat could use to evaluate the frames the parties deploy to their own benefit.
"Who should rule?"
The tech press is failing to grasp the moral stakes of API access. Again and again they flunk at connecting boring questions of who can write and distribute programs for phones to urgent issues of power over publication and control of devices. By declining to join these threads, they allow the unearned and increasingly indefensible power of mobile OS vendors to proliferate. The urgent question is how that power can be attenuated, or as Popper put it:
We must ask whether…we should not prepare for the worst leaders, and hope for the best. But this leads to a new approach to the problem of politics, for it forces us to replace the question: "Who should rule?" by the new question: "How can we so organize political institutions that bad or incompetent rulers can be prevented from doing too much damage?"
But the tech press does not ask these questions.
Instead of questioning why Apple's OS is so fundamentally insecure that an App Store is necessary, they accept the ever-false idea that iOS has been relatively secure because of the App Store.
Instead of confronting Apple with the reality that it used the App Store to hand out privacy-invading APIs in undisciplined ways to unscrupulous actors, it congratulates Cupertino on the next episode of our nightly kayfabe. The links between Apple's monopoly on sensitive APIs and the growth of monopolies in adjacent sectors are rarely, if ever, questioned. Far too often, the tech press accepts the narrative structure of Apple's marketing, satisfying pangs of journalistic conscience with largely ineffectual discussions about specific features that will not upset the power balance.
Nowhere, e.g., in The Verge's coverage of these letters is there a discussion about alternatives to the App Store. Only a few outlets ever press Apple on its suppression of web apps, including failure to add PWA install banners and essential capabilities. It's not an Apple vs. Google horse-race story, and so discussion of power distribution doesn't get coverage.
Settling for occasionally embarrassing Apple into partially reversing its most visibly egregious actions is ethically and morally stunted. Accepting the frame of "who should rule?" that Cupertino reflexively deploys is toxic to any hope of worthwhile technology because it creates and celebrates the idea of kings, leaving us supine relative to the mega-corps in our phones.
This is, in a word, childish.
Adults understand that things are complicated, that even the best intentioned folks get things wrong, or can go astray in larger ways. We build institutions and technologies to protect ourselves and those we love from the worst impacts of those events, and those institutions always model struggles over power and authority. If we are lucky and skilled enough to build them well, the results are balanced systems that attenuate attempts at imposing overt authoritarianism.
In other words, the exact opposite of Apple's infantilising and totalitarian world view.
Instead of debating which wealthy vassals might be more virtuous than the current rulers, we should instead focus on attenuating the power of these monarchical, centralising actors. The DMA is doing this, creating the conditions for interoperability, and through interoperability, competition. Apple know it, and that's why they're willing to pawn their own dignity, along with the rights of fellow Americans, to snuff out the threat.
These are not minor points. Apple has power, and that power comes from its effective monopoly on the APIs that make applications possible on the most important computing platform of our adult lives.
Protecting this power has become an end unto itself, curdling the pro-social narratives Apple takes pains to identify itself with. Any reporter that bothers to do what a scrappy band of web developers have done — to actually read the self-contradictory tosh Apple flings at regulators and legislators around the world — would have been able to pattern match; to see that twisting words to defend the indefensible isn't somehow alien to Apple. It's not even unusual.
But The Verge, 404, and even Wired are declining to connect the dots. If our luminaries can't or won't dig in, what hope do less thoughtful publications with wider audiences have?
Apple's power and profits have made it an enemy of democracy and civic rights at home and abroad. A mealy-mouthed tech press that cannot see or say the obvious is worse than useless; it is an ally in Apple's attempts to obfuscate.
The most important story about smartphones for at least the past decade has been Cupertino's suppression of the web, because that is a true threat to the App Store, and Apple's power flows from the monopolies it braids together. As Cory Doctorow observed:
Apple's story – the story of all centralized, authoritarian technology – is that you have to trade freedom for security. If you want technology that Just Works(TM), you need to give up on the idea of being able to override the manufacturer's decisions. It's always prix-fixe, never a la carte.
This is a kind of vulgar Thatcherism, a high-tech version of her maxim that "there is no alternative." Decomposing the iPhone into its constituent parts – thoughtful, well-tested technology; total control by a single vendor – is posed as a logical impossibility, like a demand for water that's not wet.
— Cory Doctorow,
"Plenty of room at the bottom (of the tech stack)"
The Price of High Modernism is Dignity
Doctorow's piece on these outrages is a must-read, as it does what so many in the tech press fail to attempt: connecting patterns of behaviour over time and geography to make sense of Apple's capitulation. It also burrows into the rot at the heart of the App Store: the claim that anybody should have as much power as Apple has arrogated to itself.
We can see clearly now that this micro-authoritarian structure is easily swayed by macro-authoritarians, and bends easily to those demands. As James C. Scott wrote:
I believe that many of the most tragic episodes of state development in the late nineteenth and twentieth centuries originate in a particularly pernicious combination of three elements. The first is the aspiration to the administrative ordering of nature and society, an aspiration that we have already seen at work in scientific forestry, but one raised to a far more comprehensive and ambitious level. “High modernism” seems an appropriate term for this aspiration. As a faith, it was shared by many across a wide spectrum of political ideologies. Its main carriers and exponents were the avant-garde among engineers, planners, technocrats, high-level administrators, architects, scientists, and visionaries.
If one were to imagine a pantheon or Hall of Fame of high-modernist figures, it would almost certainly include such names as Henri Comte de Saint-Simon, Le Corbusier, Walther Rathenau, Robert McNamara, Robert Moses, Jean Monnet, the Shah of Iran, David Lilienthal, Vladimir I. Lenin, Leon Trotsky, and Julius Nyerere. They envisioned a sweeping, rational engineering of all aspects of social life in order to improve the human condition.
— James C. Scott,
"Seeing Like A State"
This is also Apple's vision for the iPhone; an unshakeable belief in its own rightness and transformative power for good. Never mind all the folks that get hurt along the way, it is good because Apple does it. There is no claim more central to the mythos of Apple's marketing wing, and no deception more empowering to abusers of power.4
Apple claims to stand for open societies, but POSIWID shows that to be a lie. It is not just corrupted, but itself has become corrupting; a corrosive influence on the day-to-day exercise of rights necessary for democracy and the rule-of-law to thrive.5
Apple's Le Corbusierian addiction to control has not pushed it into an alliance with those resisting oppression, but into open revolt against efforts that would make the iPhone an asset for citizens exercising their legitimate rights to aid the powerless. It scuttles and undermines open technologies that would aid dissidents. It bends the knee to tyranny because unchecked power helps Cupertino stave off competition, preserving (it thinks) a space for its own messianic vision of technology to lift others out of perdition.
If the consequences were not so dire, it would be tragically funny.
Let's hope our tech press find their nerve, and a copy of “The Open Society and Its Enemies," before we lose the ability to laugh.
Endnote: Let's Talk About Google
I spent a dozen and change years at Google, and my greatest disappointment in leadership over those years was the way the founders coddled the Android team's similarly authoritarian vision.
For the price of a prominent search box on every phone,6 the senior leadership (including Sundar) were willing to sow the seeds of the web's obsolescence, handing untold power to Andy Rubin's team of Java zealots. It was no secret that they sought to displace the web as the primary way for users to experience computing, substituting proprietary APIs for open platforms along the way.
With the growth of Android, Play grew in influence, in part as cover for Android's original sins.7 This led to a series of subtler, but no less effective, anti-web tactics that dovetailed with Apple's suppression of web apps on iOS. The back doors and exotic hoops developers must jump through to gain distribution for interoperable apps remains a scandal.
But more than talking about Google and what it has done, we should talk about how we talk about Google. In specific, how the lofty goals of its Search origins were undercut by those anti-social, anti-user failures in Android and Play.
It's no surprise that Google is playing disingenuous games around providing competitors access to web apps on Android, while simultaneously pushing to expand its control over app distribution. The Play team covet what Apple have, and far from exhibiting any self-awareness of their own culpability, are content to squander whatever brand reputation Google may have left in order to expand its power over software distribution.
And nobody can claim that power is being used for good.
Google is not creating moral distance between itself and Apple, or seeking to help developers build PWAs to steer around the easily-censored channels it markets, and totally coincidentally, taxes.8 Google is Apple's collaborator in capitulation. A moral void, trotting out the same, tired tactic of hiding behind Apple's skirt whenever questions about the centralising and authoritarian tendencies of App Store monopolies crop up. For 15 years, Android has been content to pen 1-pagers for knock-off versions of whatever Apple shipped last year, including authoritarian-friendly acquiescence.
Play is now the primary software acquisition channel for most users around the world, and that should cause our tech press to intensify scrutiny of these actions, but that's not how Silicon Valley's wealth-pilled technorati think, talk, or write. The Bay Area's moral universe extends to the wall of the privilege bubble, and no further. We don't talk about the consequences of enshittified, trickle-down tech, or even bother to look hard at it. That would require using Android and…like…eww.
Far from brave truth-telling, the tech press we have today treats the tech the other half (80%) use as a curio; a destination to gawp at on safari, rather than a geography whose residents are just as worthy of dignity and respect as any other. And that's how Google is getting away with shocking acts of despicable cowardice to defend a parallel proprietary ecosystem of gambling, scams, and shocking privacy invasion, but with a fraction of the negative coverage.
And that's a scandal, too.
FOOTNOTES
Does anyone doubt that Tim Apple's wishlist also included a slap-on-the-wrist conclusion to the US vs. Apple?
And can anyone safely claim that, under an administration as nakedly corrupt as Donald Trump's, Apple couldn't buy off the DOJ? And what might the going rate for such policy pliability be?
That we have to ask says everything. ⇐
It hopes. ⇐
I don't know Wiley Hodges, but the tone of his letter is everything I expect from Apple employees attempting to convince their (now ex-)bosses of anything: over-the-top praise, verging on hagiography, combined with overt appeals to the brand as the thing of value. This is how I understand Apple to discuss Apple to Apple, not just the outside world. I have no doubt that this sort of sickly sweet presentation is necessary for even moderate criticism to be legible when directed up the chain. Autocracies are not given to debate, and Apple is nothing if not internally autocratic.
His follow-up post is more open and honest, and that's commendable. You quickly grasp that he's struggling with some level of deprogramming now that he's on the outside and can only extend the benefit of the doubt towards FruitCo as far as the available evidence allows. Like the rest of us, he's discovering that Apple is demanding far more trust than its actions can justify. He's rightly disappointed that Apple isn't living up to the implications of its stated ideals, and that the stated justifications seem uncomfortably ad hoc, if not self-serving.
This discomfort stems from the difference between principle and PR.
Principles construct tests with which we must wrestle. Marketing creates frames that cast one party as an unambiguous hero. I've often joked that Apple is a marketing firm with an engineering side gig, and this is never more obvious than in the stark differences between communicated choices and revealed outcomes.
No large western company exerts deeper control over its image, prying deep into the personal lives of its employees in domineering ways to protect its brand from (often legitimate) critique that might undermine the message of the day. Every Apple employee not named "Tim" submits to an authoritarian regime all day, every day. It's no wonder that the demands of power come so easily to the firm. All of this is done to maintain the control that allows Marketing to cast Apple's image in a light that makes it the obvious answer to "who should rule?"
But as we know, that question is itself the problem.
Reading these posts, I really feel for the guy, and wish him luck in convincing Apple to change course. If (as seems likely) it does not, I would encourage him to re-read that same Human Rights Policy again and then ask: "is this document a statement of principle or is it marketing collateral?" ⇐
The cultish belief that "it is good because we do it" is first and foremost a self-deception. It's so much easier to project confidence in this preposterous proposition when the messenger themselves is a convert.
The belief that "we should rule" is only possible to sustain among thoughtful people once the question "who should rule?" is deeply engrained. No wonder, then, that the firm works so dang hard to market its singular virtue to the internal, captive audience. ⇐
As I keep pointing out, Apple can make different choices. Apple could unblock competing browsers tomorrow. It could fully and adequately fund the Safari team tomorrow. It could implement basic features (like install banners) that would make web apps more viable tomorrow. These are small technical challenges that Apple has used disingenuous rhetoric to blow out of all proportion as it has tried to keep the web at bay. But if Apple wanted to be on the side of the angels, it could easily provide a viable alternative for developers who get edited out of the App Store. ⇐
Control over search entry points is the purest commercial analogue in Android to green/blue messages on iOS. Both work to dig moats around commodity services, erecting barriers to switching away from the OS's provider, and both have been fantastically successful in tamping down competition. ⇐
It will never cease to be a scandal that Android's singular success metric in the Andy Rubin years was “activations.” The idea that more handsets running Android was success is a direct parallel to Zuckian fantasies about “connection” as an unalloyed good.
These are facially infantile metrics, but Google management allowed it to continue well past the sell-by date, with predictably horrendous consequences for user privacy and security. Play, and specifically the hot-potato of "GMS Core" (a.k.a., "Play Services") were tasked with covering for the perennially out of date OSes running on client devices. That situation is scarcely better today. At last check, the ecosystem remains desperately fragmented, with huge numbers of users on outdated and fundamentally insecure releases. Google has gone so far as to remove these statistics from its public documentation site to avoid the press asking uncomfortable questions. Insecurity in service of growth is Android's most lasting legacy.
Like Apple, Andy Rubin saw the web as a threat to his growth ambitions, working to undermine it as a competitor at every step. Some day the full story of how PWAs came to be will be told, but suffice to say, Android's rabid acolytes within the company did everything they could to prevent them, and when that was no longer possible, to slow their spread. ⇐
Don't worry, though, Play doesn't tax developers as much as Apple. So Google are the good guys. Right?
Right?
11ty Hacks for Fun and Performance
This blog really isn't just for beating up on Apple for the way it harms users, the web, standards, and society to maintain power and profits. So here's some fun stuff I've been doing in my 11ty setup to improve page performance.
Page-Specific Resources via Shortcodes and the 11ty Bundler
You know how it gets once you've got a mature 11ty setup: shortcodes proliferate, and some generate output that might depend on JS or CSS.
If your setup is anything like mine, you might also feel conflicted about including some of those resources globally. It's important to have them available for the posts that need them, but it's not great to bring a charting library, e.g., into every page when only 1 in 20 will need it.
This blog has grown many such shortcodes that generate expansions for features like:
- `plot.js`-based charts
Here, for instance, is how a Vimeo embed looks in the markdown of a page, using Nunjucks for Markdown pre-processing:
{% vimeo "VIDEOID", "TITLE" %}
This expands to:
<lite-vimeo
videoid="VIDEOID"
videotitle="TITLE">
<a href="http://vimeo.com/VIDEOID">TITLE</a>
</lite-vimeo>
But this also requires script; e.g.:
<script type="module" async
src="/assets/js/lite-vimeo/lite-vimeo.js">
</script>
And the list keeps growing. To avoid having the weight of these components clogging up every page, a system for selectively pulling in their code would be helpful.
It'd also be grand if we could make sure scripts appear just once, even if shortcodes are invoked multiple times per page. Included code should preferably also be located towards the top of the document.
The 11ty Bundle plugin to the rescue!
Sort of.
At first glance, the 11ty Bundle plugin looks ideal for this, but we need to solve two problems the documentation doesn't cover:
- Shortcodes should auto-include the scripts they need, but not over-include them. How can we use Bundle-plugin-provided shortcodes from within other shortcodes? This is important to avoid every page having to remember to include scripts.
- How do we make this work with templates that use pagination?
The first problem turns out to be very simple because 11ty shortcodes are themselves callable functions. Somewhere in my .eleventy.js configuration, there's now a function like:
function addToBundle(scope, bundle, code) {
eleventyConfig.getPairedShortcode(bundle)
.call(scope, code);
}
This works because 11ty's internals are dogmatic about not binding `this`, allowing tools like shortcodes to be regular ol' functions that can be invoked from dynamic scopes via `call()` and `apply()`. This is awesome!
The Vimeo shortcode then calls addToBundle to make sure that the JS it depends on will get loaded:
eleventyConfig.addShortcode("vimeo",
function(id, title="") {
addToBundle(this, "js", `
<script type="module" async
src="/assets/js/lite-vimeo/lite-vimeo.js">
</script>
`);
let titleAttr = title ? ` videotitle="${title}" ` : "";
return `
<lite-vimeo videoid="${id}" ${titleAttr}>
<a href="http://vimeo.com/${id}">${title}</a>
</lite-vimeo>
`;
});
The real code is a tad more complex to handle things like proper escaping, script file versioning, stripping whitespace to avoid markdown issues, etc…but not much. Page templates then include the usual:
<!DOCTYPE html>
<html>
<head>
<!-- ... -->
{% getBundle "js" %}
<!-- ... -->
</head>
<!-- ... -->
And this should be it!
Mo Pagination, Mo Problems
Except it didn't work on my homepage.
Why not? Mostly because the 11ty Bundle plugin was built for sane sites; projects where a single output page's content doesn't need to hoist bundle inputs from across multiple entries. But I'm using pagination, like a mug.
I've got a patch out that addresses this, hackily, and for now I'm using package.json overrides to target that branch. It seems to be working well enough here:
{
"name": "infrequently.org",
// ...
"devDependencies": {
"@11ty/eleventy": "3.1.2",
"@11ty/eleventy-plugin-rss": "^2.0.4",
"@11ty/eleventy-plugin-syntaxhighlight": "^5.0.2",
// ...
},
"overrides": {
"@11ty/eleventy-plugin-bundle":
"https://github.com/slightlyoff/eleventy-plugin-bundle-pagination.git#pagination-aware"
},
// ...
}
Now I can build shortcodes that use other shortcodes to bundle code only for the pages that need it, trimming the default JS payload while leaving me free to build and use richer components as necessary.
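For flavour, here's how a richer component might lean on the same helper for multiple bundles. Everything below except `addToBundle` is invented for illustration, and a tiny stub stands in for the Bundle plugin's collect-and-dedupe behaviour (a `Set` approximates how identical snippets end up in a bundle only once):

```javascript
// A hypothetical "chart" shortcode layered on the addToBundle helper.
// The eleventyConfig stub mimics only the behaviour relied on here:
// paired bundle shortcodes collect content, and duplicates are dropped.
const bundles = { js: new Set(), css: new Set() };

const eleventyConfig = {
  getPairedShortcode(name) {
    return function (content) { bundles[name].add(content.trim()); };
  },
};

function addToBundle(scope, bundle, code) {
  eleventyConfig.getPairedShortcode(bundle).call(scope, code);
}

// The invented chart shortcode pulls in both JS and CSS dependencies:
function chart(id) {
  addToBundle(this, "js",
    `<script type="module" src="/assets/js/plot.min.js"></script>`);
  addToBundle(this, "css",
    `<link rel="stylesheet" href="/assets/css/charts.css">`);
  return `<div id="${id}" class="chart"></div>`;
}

// Two invocations on one page still yield a single bundled entry each:
chart("leading");
chart("trailing");
console.log(bundles.js.size, bundles.css.size); // 1 1
```

The nice property is that pages containing no charts never see the chart payloads at all, while pages with many charts pay for them exactly once.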
Build Time Impact
The one downside of this approach has been an increase in build times.
I've worked to keep full re-builds under five seconds, and incremental builds under a second, on my main writing device (a recent Chromebook). The Bundle plugin adds a post-processing phase (via @11ty/eleventy/html-transformer) to builds, which tacks on two seconds in both scenarios. This blog generates ~1,500 pages in a build, so the per-page hit isn't bad, but it's enough to be noticeable.
I will likely spend time getting this trimmed back down in the near future. If you've got a smaller site, I can recommend the bundler-with-generative-shortcodes approach. If your site is much larger, it may be worth adopting if you're already paying the price of a post-processing step. Otherwise, and as ever, it's worth measuring.
Scroll-Position Based Delayed Code Loading
Loading code only on the pages that need it is great, but you know what's even better? Only loading code when it's going to actually be needed.
For a lot of pages, it makes sense to load specific widgets only when users scroll down far enough to encounter their content. Normally this is the sort of thing folks lean on big frameworks to handle, but that's not how this blog rolls. Instead, we'll use an IntersectionObserver and a MutationObserver to:
- Locate scripts that want deferred loading.
- Use attributes on those scripts to identify which elements they should be invoked for.
- Watch page scrolling and trigger code loading when target elements get close enough to the viewport.
Taken together, this reduces code loaded up front that might otherwise contend with above-the-fold resources without sacrificing interactive features further down the page.
Here, for instance, is how some code from a recent blog post that needed charts looked before:
{% js %}
<script type="module"
src="/assets/js/d3.min.js"></script>
<script type="module"
src="/assets/js/plot.min.js"></script>
<script type="module">
// ...
genFeatureTypePlot("leading", true, {
caption: "Chromium launches ... ahead of other engines."
});
// ...
</script>
{% endjs %}
#### Leading Launches by Year
<div id="leading"></div>
The Bundle plugin makes it simple to write code near the elements it targets without worrying about duplicates. That's helpful, and the plugin provides many tools for choosing where to output the gathered-up bits, but what I really want is to delay fetching those scripts until the user might plausibly benefit from them.
Here's what the revised code looks like:
{% js %}
<script type="io+module" data-for="leading"
src="/assets/js/d3.min.js"></script>
<script type="io+module" data-for="leading"
src="/assets/js/plot.min.js"></script>
<script type="io+module" data-for="leading">
// ...
genFeatureTypePlot("leading", true, {
caption: "Chromium launches ... ahead of other engines."
});
// ...
</script>
{% endjs %}
#### Leading Launches by Year
<div id="leading"></div>
These scripts will still be hoisted into the <head>, but they won't execute because they use a type attribute the browser doesn't recognise. The data-for attribute names the element whose visibility should trigger loading of the script. These two attributes are enough to build a scroll-based loader with.
Our loader uses a bit of inline'd script at the top of the page to set up an IntersectionObserver to watch scrolling, and a collaborating MutationObserver to identify elements matching this description as the parser creates them.
Here's the meat of that snippet, loaded early in the document as a <script type="module">. Pardon the pidgin JavaScript style; the last thing I want is a JS transpiler as part of builds, so bytes matter:
// Utilities to delay code until the next task
let rAF = requestAnimationFrame;
let doubleRaf = (func) => {
rAF(()=> { rAF(()=> { func(); }) });
};
// Bookkeeping
let ioScripts = new Set();
let ioScriptsFor = new Map();
let triggerIDs = new Set();
// Record which scripts should wait on elements
let processScript = (s) => {
if(ioScripts.has(s)) { return; }
ioScripts.add(s);
let idFor = s.getAttribute("data-for");
let isf = ioScriptsFor.get(idFor);
if(!isf) {
isf = [];
ioScriptsFor.set(idFor, isf);
triggerIDs.add(idFor);
}
isf.push(s);
};
// Handle existing scripts before setting up
// the Mutation Observer
document.querySelectorAll(`script[type="io+module"]`)
.forEach(processScript);
// Preload element template
let plt = document.createElement("link");
plt.setAttribute("rel", "modulepreload");
// For a given element, begin loading scripts
// that were waiting on it.
let triggerScripts = async (id) => {
let scripts = ioScriptsFor.get(id);
let head = document.head;
for(let s of scripts) {
if(!s.src) { continue; }
// Get a preload request started
let pl = plt.cloneNode(true);
pl.setAttribute("href", s.src);
head.append(pl);
}
for(let s of scripts) {
// Clone because setting type alone does not
// trigger evaluation.
let sc = s.cloneNode(true);
// Set type to an executable value
sc.setAttribute("type", "module");
if(sc.src) { // External scripts
let lp = new Promise((res) => {
sc.onload = res;
});
head.append(sc);
await lp; // Serialisation handled above
} else { // Inline modules
head.append(sc);
}
}
};
let forObs = new IntersectionObserver(
(entries) => {
entries.forEach((e) => {
if(e.intersectionRatio > 0) {
forObs.unobserve(e.target);
doubleRaf(() => {
triggerScripts(e.target.id);
});
}
});
},
{
// If we boot far down the page, e.g. via back
// button scroll restoration, load eagerly.
// Else, watch two screens ahead:
rootMargin: "1000% 0px 200% 0px",
}
);
// When new elements are added, watch for scripts
// with the right type and elements that scripts are
// waiting on.
let documentMo = new MutationObserver((records) => {
for(let r of records) {
if(!r.addedNodes) { continue; }
for(let n of r.addedNodes) {
if(n.nodeType === 1) { // Elements only
if((n.tagName === "SCRIPT") &&
(n.type === "io+module")) {
processScript(n);
}
// If we find elements in the watch list,
// observe their scrolling relative to
// the viewport
if(n.id && triggerIDs.has(n.id)) {
forObs.observe(n);
}
}
}
}
});
documentMo.observe(document.documentElement, {
childList: true,
subtree: true,
});
That's the whole job done in just 110 lines of modern platform code, including utility functions and 20 lines of comments.
A few small tricks of note:
- `<link rel="modulepreload" href="...">` allows us to get network requests started without waiting on previous scripts to download, parse, and evaluate.
- Limiting support to modules enables us to get async, but ordered, execution semantics.
- The early `querySelectorAll()` ensures hoisted script blocks that occur before the inline'd script are handled correctly.
As this was enough for the moment, I haven't implemented a few obvious improvements:
- This technique could be combined with shortcodes-calling-the-bundler to create shortcodes that dynamically load their code based on scroll position, and only on pages that need them.
- The `id`-based system is a bit fugly and can easily be upgraded to use any simple CSS selector that `matches()` supports.
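A sketch of what that selector-based upgrade could look like. This is not the blog's shipping code; the names are invented, and a fake element stands in for a DOM node so the matching logic runs anywhere:

```javascript
// Hypothetical upgrade: data-for holds any CSS selector rather than an id.
// Scripts are grouped by selector string, and candidate elements found by
// the MutationObserver are tested with Element.matches().
const ioScriptsFor = new Map(); // selector -> [scripts]

function processScript(s) {
  const sel = s.getAttribute("data-for");
  if (!ioScriptsFor.has(sel)) { ioScriptsFor.set(sel, []); }
  ioScriptsFor.get(sel).push(s);
}

// In the MutationObserver callback, the id/Set check becomes:
function selectorsMatching(el) {
  return [...ioScriptsFor.keys()].filter((sel) => el.matches(sel));
}

// A minimal stand-in element, so this runs outside a browser:
const fakeEl = { matches: (sel) => sel === ".chart" };
processScript({ getAttribute: () => ".chart" });
processScript({ getAttribute: () => "#leading" });
console.log(selectorsMatching(fakeEl)); // [ '.chart' ]
```

In a real page you'd call `selectorsMatching()` for each added element and hand any hits to the IntersectionObserver, exactly as the id version does today.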
Faster CSS Selectors for 11ty Syntax Highlighting
It's a small thing, but I do try to optimise the CSS selectors used on this blog, as it's element-heavy and doesn't use Shadow DOM to encapsulate much of the style-recalculation work.
It was something of a surprise, then, to find that some basic styles for the @11ty/eleventy-plugin-syntaxhighlight module, cargo-culted many years ago, were tanking style recalculation performance. How? Slow attribute rules:
code[class*="language-"],
pre[class*="language-"] {
color: #f8f8f2;
background: none;
text-shadow: 0 1px rgba(0, 0, 0, 0.3);
/* ... */
}
Thanks to the prevalence of inline <code> blocks in my writing, the Selector Stats panel showed a fair few slow-path misses.
Thankfully, the fix is simple. In the CSS I switched to faster whole-attribute selectors:
code[highlighted],
pre[highlighted] {
color: #f8f8f2;
background: none;
text-shadow: 0 1px rgba(0, 0, 0, 0.3);
/* ... */
}
Next, I used the configuration options available in the plugin (which I only figured out from reading the source) to specify those attributes be added to the <pre>s and <code>s generated by the syntax highlighter:
import syntaxHighlight from
"@11ty/eleventy-plugin-syntaxhighlight";
// ...
eleventyConfig.addPlugin(syntaxHighlight, {
// For faster CSS selectors
preAttributes: { highlighted: "highlighted" },
codeAttributes: { highlighted: "highlighted" },
});
Now the browser doesn't have to attempt a slow substring search every time it encounters one of these elements.
There's more to do in terms of selector optimisation on this site, but this was a nice quick win.
Lean Rada's CSS-only Low-Quality Image Previews
The history is boring, but suffice to note that the <img> helpers this site uses predate the official 11ty Image transform plugin, and are tuned to generate URLs that work with Netlify's Image CDN. This means I've also been responsible for generating my own previews for images that haven't loaded yet, and devising strategies for displaying and animating them.
This has been, by turns, fun and frustrating.
The system relies on (you guessed it) shortcodes which both produce a list of images without previews and consume cached preview data to inline the scaled-down versions. Historically this worked by using sharp to generate low-res WebP (then AVIF) base64-encoded values that got blurred. I played a bit with BlurHash and ThumbHash, but the need for a <canvas> element was unattractive.
A better answer would have relied on CSS custom paint, but between Safari being famously rekt (and representing a large fraction of this blog's readership) and the Paint Worklet context missing the ImageData() constructor, it never felt like a workable approach.
But as of this year, there's a new kid in town: Lean Rada's badass CSS-only LQIP approach.
That system is now implemented, which does a lot to shrink the HTML payloads of pages, as well as speeding up raster of the resulting image previews. This is visible in detailed traces, where the layout phase no longer has to wait for a background thread to synchronously decode image literals.
It took a few weekends of playing around to get it going correctly, as the code linked from Lean's blog post is not what they're using now. The colourspace conversion code they're using is also inaccurate, so attempts to replace it with color-space produced visually incorrect results. Using the exact code they do for RGB-to-Lab conversion is necessary to generate the correct effect, and dialling in those differences was time-consuming.
Happy to make the code I use for this available upon request, but it's not amazing, and you really should go read Lean's blog post for yourself. It's a masterpiece.
Bonus Hack: Global Acronyms for Better Abbreviations
I recently added markdown-it-abbr to my configuration to make some technical writing a bit more accessible. Across a few posts, this ended up with a lot of repetition for terms like “W3C”, “IETF”, etc.
This was both a bit time-consuming and error-prone. What if, I wondered, it were possible to centralise them?
Turns out it's trivial! My setup pre-processes markdown as Nunjucks (via markdownTemplateEngine: "njk"), which makes the full range of directives available, including…include.
This means I can just create a single file with commonly used acronyms and include it from every page; the physical location is _includes/acronyms.md:
{% include "acronyms.md" %}
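The acronyms file itself uses markdown-it-abbr's definition syntax. Hypothetical contents (the entries here are examples, not the blog's actual list):

```markdown
*[W3C]: World Wide Web Consortium
*[IETF]: Internet Engineering Task Force
*[DMA]: Digital Markets Act
```

Any occurrence of those terms anywhere in the including page then gets wrapped in an `<abbr>` with the expansion as its title.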
This doesn't improve performance, but has been hugely helpful for consistency.
Does It Work?
Putting it all together, what do we get?
On a page which previously saw contention between scripts for charts and above-the-fold fonts and initial layout work, the wins have been heartening on a low-spec test bench:
The long tent pole is now Netlify's pokey connection setup and TTFB, and I'm content (if not happy) with that.
Apple's Antitrust Playbook
This blog is failing on several levels. First, September 2025 is putting the “frequent” in “infrequently”, much to my chagrin. Second, my professional mission is to make a web that's better for everyone, not to tear Apple down.
But just when I thought I was out, they pull me back in.
Over the past 24 hours, Cupertino has played the hits from its iPhone-era anti-user, anti-business, anti-rule-of-law power trip. To recap:
- Apple sent a demand that the EU give up on DMA enforcement in a petulant and utterly misleading press release that mostly served to catalogue Apple's own failures to comply.
- Cupertino's kept astroturf outfit — ACT | The App Association — penned unhinged misinformation for consumption by US regulatory types (h/t Damien Geradin).
- The EC then called Apple's bluff.
Make no mistake about what's going on: Apple is claiming that the EU is forcing Apple to adopt interpretations of the DMA that no other party has, which the EC itself has not backed, and which are “forcing” Apple to avoid shipping features in the EU.
Or, put another way, Apple wants to launder the consequences of its own anticompetitive, anti-user choices through a credulous tech press. The goal is to frame regulators for Apple's own deeds.
Lies by Any Other Name
If this smells like bullshit, that's because it is.
Apple tried similar ploys last year when it sprung a blatant attempt to kill PWAs on the EU as a price for having the temerity to regulate. The plan, it seemed then, was to adopt an exotic reading of the DMA and frame the EC for killing PWAs.
What Apple's attempting now is even more brazen, particularly after 16+ months of serial non-compliance since the DMA came into force (h/t OWA).
Remember that:
- Apple's browser, and by extension every iOS browser, fails on basic security and privacy controls, putting users at risk constantly. Apple serially misrepresents this situation to anyone who will listen.
- Cupertino has constructed a legal (PDF) and technical thicket that prevents competitors from building better browsers, including:
  - Geofencing choice to the EU.
  - Forcing users to download separate browsers to access better versions (should they appear).
  - Withholding PWA and Push Notification APIs from competitors.
  - Providing a busted SDK that included multiple show-stopping bugs.
  - Making the process of acquiring and using the necessary entitlements for development nearly impossible.
  - Gaslighting the EC and the press at every turn; simultaneously claiming it is complying, while complaining loudly that it has to do work to implement the wholly unnecessary hurdles to compatibility that Apple itself created.
- Apple have used every opportunity to rubbish both the spirit and letter of the law, spreading outlandish claims about user harms. These claims keep getting rejected whenever they're weighed by regulators and independent third parties.
All of this to prevent the web from threatening the App Store, via the DMA.
Add today's shake-down, and we see an assault on the rule of law. Tim's appearance in The Oval bearing gaudy-ass trinkets makes sense as a kickback for services rendered against the idea of laws. Or at least laws that bind those with power the way they constrain everyone else.
This is not a one-off or a fluke.
The name of the game is delay, and Apple is using the same tactics it has practised to maintain profits through outrageous attacks on the right-to-repair and sensible anti-e-waste regulation. When you're the incumbent, delay is winning. Apple has pulled out all the stops to prevent the web from providing a safer, more private alternative to native apps, winning long reprieves thus far. And you should see what they did on the native side. Phew.
This is extremely embarrassing for the EU, which has attempted to respond thoughtfully and reasonably to every provocation. But appeasement of Cupertino isn't working.
Great Artists Steal
Apple didn't author the playbook it's using now; it has been developed and honed for 40 years by toxic industries trying feverishly to escape the consequences of their actions. Most famously, the tactics Apple is gleefully employing served Big Oil and Big Tobacco extremely well:
1. Lobbying (legalised corruption).

   This works a shockingly large fraction of the time, and was sufficient to kill effective regulation of the smartphone ecosystem in the US during Biden's term, despite overwhelming evidence from the NTIA's exhaustive report.

   If first-party lobbying hits roadblocks — e.g., being absolutely outclassed by civil society groups that see the real threats and costs of your actions — there are alternatives, including...

2. Astroturf.

   If "ACT" rings a bell, it might be for the role it played in the '90s in the (hollow) defence of Microsoft. These days, it's a bought-and-paid-for megaphone for Apple, and it is working to worm its way into EU policy circles too. For a flavour of the Klein bottle arguments it serves up regularly, imagine arguing that more choice than a single App Store would be bad because it would hurt small developers. 🤯

3. Market false comparison points.

   Apple has a point about privacy and security when it comes to Android, e.g., but why is that our comparison point? Why is the App Store's historic failure to safeguard anyone from anything (by comparison to browsers) our collective counterfactual? It's nonsense. Hot tosh. But it's marketed with gusto and zeal in the hopes that nobody will notice. And the tech press, to their everlasting discredit, have more than played along.

   When that doesn't work...

4. Perform petulance.

   If a regulator or government has the stones to stand up for its own democratic polity in a way that attacks your profit potential, go to the mattresses. Let everyone know they'll be sorry. Throw as big a hissyfit as you can manage, then make the same false claims over and over. To make obvious falsehoods stick, shameless repetition, at volume, is a must.

5. Market your "compliance."

   But under no circumstances actually comply. The trick here is to performatively roll out processes and press briefings telling everyone how seriously you take all of this while actively gumming up the works. Apple have done this on the regular since the DMA's passage, and if that rhymes with various ongoing plots against right-to-repair after public claims to support it...well...

6. Claim the law hurts kittens, blame regulators.

   Doesn't have to be kittens, obviously. Jobs, privacy, apple pie...anything that seems sympathetic will do. Just make fig-leaf claims to support your side and lean heavily into "won't someone think of the kids?"

   Assuming your bought-and-paid-for astroturfers and docile access-journalism reporters are still on speed dial, GOTO 2, safe in the assumption that regulators won't do much.
Honestly, it's a winning formula, assuming you have more money than Croesus and lawyers willing to make claims they know to be untrue. All the better if they're so well paid that they can't be bothered to check.
It only falls apart if the folks whose job it is to ferret out the truth, and the people whose role is to stand up for citizens against vested interests, do their jobs.
Dear Tech Reporters: Access Is Not A Beat
At the risk of stating the obvious, it does not matter what your relationship to Cupertino is. It's in your head. They threaten to call your boss? Or to stop talking to you? So what? They always do that. To everyone. All the time. Ask anyone. There's no upside, and you can never be on the inside. You were always being managed.
Cupertino's gonna threaten to stop running ads on your site? Print that.
Where are your stones? Why did you get into journalism? Or technology? Or technology journalism? What, would you say, 'ya do here?
The obvious stenography around competition issues is mind-numbing when the theory of the case is so bloody simple. Apple and Google want everyone in their app stores because that's how they maintain power, and through that power, profits. They have arranged things such that being in the App Store is the only way developers get access to critical APIs, and those APIs are the only way to make functional apps.
Apple's fighting real browsers and the DMA because they're a threat to that model. Browsers, and the PWAs they enable, are the open, tax-free alternative to app stores, and the Duopolists are extremely displeased that they exist, even in the degraded form they currently allow.
This is simple. Obvious. Incredibly transparent.
But almost nobody will connect the dots in print. And that's a scandal, too.
Comforting Myths
In several recent posts, I've attempted to address how the structure of standards bodies, and their adjacent incubation venues, accelerates or suppresses the potential of the web as a platform. The pace of progress matters because platforms are competitions, and actors that prevent expansions of basic capabilities risk consigning the web to the dustbin.
Inside that framework, there is much to argue over regarding the relative merits of specific features and evolutionary directions. This is healthy and natural. We should openly discuss features and their risks, try to prevent bad consequences, and work to mend what breaks. This competitive process has made browsers incredibly safe and powerful for 30 years.
Until iOS, that is.
Imagine my surprise upon hearing that Apple isn't attempting to freeze the web in amber, preserving advantages for its proprietary platform, and that it instead offers to redesign proposals it disagrees with.
As I have occasionally documented, this has not been my experience. I have relatively broad exposure to the patterns of Apple's collaboration, having designed, advised on, or led teams that built dozens of features across disparate areas of the platform since the Blink fork.1
But perhaps this was the wrong slice from which to judge? I've been hearing of Apple's openness to collaboration on challenging APIs so often that either my priors are invalid, or something else is at work. To find out, I needed data.
Background
A specific parry gets deployed whenever WebKit's sluggish feature pace comes up: “controversial” features “lack consensus” or “are not standards” or “have privacy and security problems” (unspecified). The corollary being that Apple engages in good-faith to address these developer needs in other ways, even in areas where they have overtly objected.
Apple's engine has indisputably trailed Blink and Gecko in all manner of features over the past decade. This would not be a major problem, except that Apple prevents other browsers from delivering better and more competitive web engines on iOS.
Normally, consequences for not adopting certain features arrive in the market. Browsers that fail to meet important needs, or that drop the ball on quality, lose share. This does not hold on iOS because no browser can ship a less-buggy or more capable engine than Apple's WebKit.
Because competitors are reduced to rebadging WebKit, Apple has created new responsibilities and expectations for itself.2 Everyone knows iOS is the only way to reach wealthy users, and no browser can afford to be shut out of that slice of the mobile market. Therefore, the quality and features of Apple's implementation matter greatly to the health and competitiveness of the web.
This puts Apple's actions squarely in the spotlight.
Is Apple Engaged In Constructive API Redesign?
It's possible to size up Apple's appetite for problem-solving in several ways. We can look to understand how frequently Apple ships features ahead of, or concurrently with, other engines because near-simultaneous delivery is an indicator of co-design. We can also look for visible indications of willingness to engage on thorny designs, searching for counter-proposals and shipped alternatives along the way.
General Trends
This chart tracks single-engine omissions over the past decade; a count of designs that two engines have implemented, but which a single holdout prevents from achieving web-wide availability:
Safari consistently trails every other engine, and APIs missing from it impact every iOS browser.
Thanks to the same Web Features data set, many views are possible. This data shows that there are currently 178 features in Chromium that are not available in Safari, and 34 features in Safari that are not yet in Chromium (or 179 and 37 for mobile, respectively). But as I've noted previously, point-in-time evaluations may not tell us very much.
I was curious about delays in addition to omissions. How often do we see evidence of simultaneous shipping, indicating strong collaboration? Is that more or less likely than leading vendors feeling the need to go it alone, either because of a lack of collaborative outreach, or because other vendors do not engage when asked?
To get a sense, I downloaded all the available data (JSON file), removed features with no implementations, removed features introduced before 2015, filtered to Chrome, Safari, and Firefox, then aggregated by year. The resulting data set is here (JSON file).
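The aggregation step can be sketched as a small Node function. The record shape below is a simplification I've assumed for illustration; the real web-features JSON nests browser support data differently:

```javascript
// Group features by the year the first ("leading") engine shipped them,
// then classify each trailing engine's launch as within a year of the
// leader or two-plus years behind.
function aggregateByYear(features) {
  const byYear = {};
  for (const f of features) {
    // ISO 8601 dates sort correctly as strings:
    const dates = Object.values(f.shipped).sort();
    const leadYear = dates[0].slice(0, 4);
    const row = (byYear[leadYear] ??= { leads: 0, withinOne: 0, twoPlus: 0 });
    row.leads++;
    const leadTime = Date.parse(dates[0]);
    for (const d of dates.slice(1)) {
      const years = (Date.parse(d) - leadTime) / (365.25 * 24 * 3600 * 1000);
      if (years <= 1) row.withinOne++;
      else if (years >= 2) row.twoPlus++;
    }
  }
  return byYear;
}

// Synthetic records in the assumed shape:
const sample = [
  { id: "midi", shipped: { chrome: "2015-03-10", firefox: "2022-11-15" } },
  { id: "grid", shipped: { chrome: "2017-03-07", safari: "2017-03-27" } },
];
console.log(aggregateByYear(sample));
// midi lands in 2015's "two-plus years behind" column;
// grid lands in 2017's "within one year" column.
```

The real analysis also filters by year of introduction and engine, but the bucketing logic is the interesting part.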
The data can't determine causality, but can provide directional hints:
Leading Launches by Year
Features Shipped Within One Year of the Leader
Features Shipped Two+ Years After the Leader
Safari rarely leads, but that does not mean other vendors' designs will stand the test of time. If Apple engaged in solving the same problems, we would expect to see Safari leading on alternatives3 or driving up the rate of simultaneously shipped features once consensus emerges. Neither is markedly increased. Apple can, of course, afford to fund work on alternatives to “problematic” APIs, but it doesn't seem to.
Narratives about collaboration in tricky areas take more hits from Safari's higher incidence of catch-up launches. These indicate Apple shipping the same design that other vendors led with, but on a delay of two years or more from their first introduction. This is not redesign. If there were true objections to these APIs, we wouldn't expect to see them arrive at all, yet Apple has done more catching up over the past several years than it has shipped APIs with other vendors.
This fails to rebut intuitions developed from recent drops of Safari features (1, 2) composed primarily of APIs that Apple's engineers were not primary designers of.
But perhaps this data is misleading, or maybe I analysed it incorrectly. I have heard allusions to engagement regarding APIs that Apple has publicly rejected. Perhaps those are where Cupertino's standards engineers have invested their time?
Hard Cases
Most of the hard cases concern APIs that Apple (and others) have rightly described as having potentially concerning privacy and security implications. Chromium engineers agreed those concerns have merit and worked to address them; we called it “Project Fugu” for a reason. In addition to meticulous design to mitigate risks, part of the care taken included continually requesting engagement from other vendors.
Consider the tricky cases of Web MIDI, Web USB, and Web Bluetooth.
Web MIDI
Apple has supported MIDI in macOS for at least 20 years — likely much longer — and added support for MIDI on iOS with 2010's introduction of Core MIDI in iOS 4.2. By the time the first Web MIDI proposals broke cover in 2012, MIDI hardware and software were the backbone of digital music and a billion dollar business; Apple's own physical stores were stocking racks of MIDI devices for sale. Today, an overwhelming fraction of MIDI devices explicitly list their compatibility with iOS and macOS.
It was therefore a clear statement of Apple's intent to cap web capabilities when it objected to Web MIDI's development just before the Blink fork. The objections by Apple were by turns harsh, condescendingly ignorant and imbued with self-fulfilling stop-energy; patterns that would repeat post-fork.
After the fork and several years of open development (which Apple declined to participate in), Web MIDI shipped in Chromium in early 2015. Despite a decade to engage, Apple has not shipped Web MIDI in Safari, has not provided a “standards position” for it4, and has not proposed an alternative. To the best of my knowledge, Apple have also not engaged in conversations about alternatives, despite being a member of the W3C's Audio Working Group, which has published many Working Drafts of the API. That group has consistently included publication of Web MIDI as a goal since 2012.
Across 11 charters or re-charters since then, I can find no public objection within the group's mailing list from anyone with an @apple.com email address.5 Indeed, I can find no mentions of MIDI from anyone at Apple on the public list. Obviously, that is not the same thing as agreeing to publication as a Recommendation, but it is also not indicative of any attempts at providing an alternative.
But perhaps alternatives emerged elsewhere, e.g., in an Incubation venue?
There's no counter-proposal listed in the WebKit explainers repository, but maybe it was developed elsewhere?
We can look for features available behind flags in Safari Technology Preview and read the Tech Preview release notes. To check them, I used curl to fetch each of the 127 JSON files that are, apparently, the format for Safari's release notes, pretty-printed them with jq, then grepped case-insensitively for mention of “audio” and “midi”. Every mention of “audio” was in relation to the Web Audio API, the Web Speech API, WebRTC, the <audio> element, or general media playback issues. There were zero (0) mentions of MIDI.
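The survey described above can be sketched in a few lines. This is a hypothetical reconstruction, not the exact commands used: the sample data and file layout below are invented for illustration, and a real run would first fetch the release-note JSON documents from webkit.org before searching them.

```python
import json

def find_mentions(notes, terms):
    """Case-insensitively search each release note's serialised JSON for
    the given terms; returns {term: [ids of notes that mention it]}."""
    hits = {term: [] for term in terms}
    for note_id, note in notes.items():
        blob = json.dumps(note).lower()
        for term in terms:
            if term.lower() in blob:
                hits[term].append(note_id)
    return hits

# Tiny fabricated sample standing in for the 127 real release-note files.
sample = {
    "stp-100": {"notes": ["Fixed a Web Audio API regression"]},
    "stp-101": {"notes": ["Improved <audio> element playback"]},
}
print(find_mentions(sample, ["audio", "midi"]))
# → {'audio': ['stp-100', 'stp-101'], 'midi': []}
```

Run against the real release notes, every “audio” hit traced back to Web Audio, Web Speech, WebRTC, the <audio> element, or general playback, and “midi” matched nothing.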
I also cannot locate any public feedback on Web MIDI from anyone I know to have an @apple.com email address in the issue tracker for the Web Audio Working Group or in WICG, except for a single issue requesting that WICG designs look “less official.”
The now-closed Web MIDI Community Group, likewise, had zero (0) involvement by Apple employees on its mailing list or on the successor Audio Community Group mailing list. There were also no (0) proposals covering similar ground that I was able to discern on the Audio CG issue list.
Instead, Apple have issued a missive decrying Web MIDI as a privacy risk. As far as anyone can tell, this was done without substantive analysis or engagement with the evidence from nearly a decade of deploying it in Chromium-based browsers.
If Apple ever offered an alternative, or to collaborate on a redesign, or even an evidence-based case for opposing it,6 I cannot find them in the public record.7
Web USB
USB is a security sensitive API, and Web USB was designed with those concerns in mind. All browsers that ship Web USB today present “choosers” that force users to affirmatively select each device they provide access to, from which sites, and always show ambient usage indicators that let users revoke access. Further, sensitive device classes that are better covered by more specific APIs (e.g., the Filesystem Access API instead of USB Mass Storage) are restricted.
This is far from the Hacker News caricature of "letting any web page talk to your USB devices": the API allows only point-to-point connections between individual devices and sites, with explicit controls always visible.
After two years of public development and a series of public Origin Trials lasting seven months (1, 2), the first version of the API shipped in Chrome 61, released September 2017.
I am unable to locate any substantive engagement from Apple about alternatives for the motivating use-cases outlined in the spec.
With many years of shipping experience, we can show that these needs have been successfully addressed by Web USB; e.g., teaching programming in classrooms. More than a decade after it was first approached about them, it's unclear what Apple's alternative is. Hollowing out school budgets to buy Cupertino's high-end devices to run unsafe, privacy-invading native apps?
Apple have included Web USB on the list of APIs they “decline to implement” and quite belatedly issued a “standards position” opposing the design. But no alternative proposal was developed or linked from those threads, despite being asked directly if there might be more palatable alternatives.
I can locate no appetite from Apple's standards engineers to address these use-cases, know of no enquiries into data about our experiences shipping them, and can find no constructive counterproposals. Which raises the obvious question: if Apple does engage to develop counterproposals in tricky areas, how long are counterparties meant to wait? More than eight years?
Web Bluetooth
Like Web USB, Web Bluetooth was designed from the ground-up with safety in mind and, as a result, has been incredibly safe and deployed at massive scale for eight years. It relies on the same chooser model in Chromium-based browsers.
As with all Project Fugu device APIs, Web Bluetooth was designed to reduce ambient risks — no access to Bluetooth Classic, available only on secure sites, and only in <iframe>s with explicit delegation, etc. — and to give implementers flexibility about the UIs they present to maximise trust and minimise risk. This included intentionally designing flexibility for restricting access based on context; e.g., only from installed PWAs, if a vendor chooses that.
The parallels with Web USB continue on the standards track. I can locate no engagement from any @apple.com or @webkit.org email addresses on the public-web-bluetooth mailing list. In contrast, when design work began in 2014 and every browser vendor was invited to participate, Mozilla engaged. I can find no evidence of similar openness on the part of Apple, nor constructive counter-proposals.
Over more than three years of design and gestation in public, including very public Origin Trials, Apple did not provide constructive feedback, develop counter-proposals, or offer to engage in any other way I can find.
This appears to be a pattern.
From a deep read of the “standards position” threads for designs Apple opposes, I cannot find evidence that Cupertino has ever offered a constructive counter-proposal for any API it disfavours.
These threads do demonstrate Apple downplaying clearly phrased developer needs rather than engaging proactively, and the pattern seems to be that parties must beg Apple to belatedly form an opinion. When push-back arrives, often after years of radio silence, requesters (not Apple) also have to invent potential alternatives, which Apple may leave hanging without engagement for years.
Worse, there are performative expressions of disinterest which Apple's standards engineers know are in bad-faith. An implementer withholding engagement from a group, then claiming a lack of implementer engagement in that same venue as a reason not to support a design, is the sort of self-serving, disingenuous circularity worthy of disdain.
Overall Impressions
Perhaps both the general trends and these specific high-profile examples are aberrant. Perhaps Apple's modus operandi isn't to:
- Ignore new incubations, even when explicitly asked to participate.
- Fail to register concerns early and collaboratively, where open design processes could address them.
- Force web developers and other implementers to request “positions” at the end of the design process because Apple's disengagement makes it challenging to understand Cupertino's level of support (or antipathy).
Mayhaps it's not simply the predictable result of paltry re-investments in the web by a firm that takes eye-watering sums from it.
If so, I would welcome evidence to that effect. But the burden of proof no longer rests with me.8
What's Happening Here?
It's hard to say why some folks are under the impression that Apple are generous co-designers, or believe Apple's evocative statements about hard-case features are grounded in analysis or evidence. We can only guess at motive.
The most generous case I can construct is that Apple's own privacy and security failures in native apps have scared it away, and that spreading FUD is cover for those sins. The more likely reality is that upper management fears PWAs and wants to keep them from threatening the App Store with safe alternatives that don't require paying Apple's vig.
Whatever the cause, the data does not support the idea that Apple visibly engages in constructive critique or counter-proposal in these areas.
Moreover, it shows that many of Apple's objections and delays were unprincipled. It should be every browser's right to control the features it enables, and Cupertino is entirely within those rights to avoid shipping features in Safari. But the huge number of recent “catch-up” features tells a story that aligns more with covering for embarrassing oversights, rather than holding a line on quality, privacy, or security.
On the upside, this suggests that if and when web developers press hard for capabilities that have been safe on other platforms, Cupertino will relent or be regulated into doing so. It scarcely has a choice while simultaneously skimming billions from the web and making arguments like these to regulators (PDF, p35).
The moment iPhone users around the world can install high-quality browsers, the conversational temperature about missing features and reliability will drop considerably. Until then, it remains important that Apple bear responsibility for the problems Apple is causing not only for Apple, but for us all.
FOOTNOTES
My various roles since the Blink fork have included:
- TC39 representative
- Co-designer of Service Workers ('12-'15)
- Co-Tech Lead for Project Fizz ('14-'16)
- Three-time elected member of the W3C's Technical Architecture Group ('13-'19)
- Web Standards Tech Lead for Chrome ('15-'21)
- Co-Tech Lead for Project Fugu ('17-'21)
- Co-designer of Isolated Web Apps ('19-'21)
- Blink API OWNER ('18-present)
- Ongoing advisor to Edge's web standards team
In my roles as TAG member, Fizz/Fugu TL, and API OWNER, I've designed, reviewed, or provided input on dozens of web APIs. All of this work has been collaborative, but these positions have given me a nearly unique perch from which to observe the ebb and flow of new designs, particularly on the "spicy" end of the spectrum. ⇐
Apple has many options for restoring choice to the market for iOS browsers.
Most obviously, Apple can simply allow secure browsers to use their own engines. There is no debate that this is possible, that competitors generally do a better job regarding security than Apple, and that competitors would avail themselves of these freedoms if allowed.
But Apple has not allowed them.
Open Web Advocacy has exhaustively documented the land mines that Apple has strewn in front of competitors that have the temerity to attempt to bring their own engines to EU users. Apple's choice to geofence engine choice to the EU, indefensible implementation roadblocks, poison-pill distribution terms, and the continued prevarications and falsehoods offered in their defence, are choices that Apple is affirmatively and continually making.
Less effectively, Apple could provide runtime flags for other browsers to enable features in the engine which Apple itself does not use in Safari. Paired with a commitment to implement features in this way on a short timeline after they are launched in other engines on other OSes, competing vendors could risk their own brands without Apple relenting on its single-implementer demands. This option has been available to Apple since the introduction of competing browsers in the App Store. As I have argued elsewhere, near-simultaneous introduction of features is the minimum developers should expect of a firm that skims something like $19BN/yr in profits from the web (a ~95% profit rate, versus current outlays on Safari and WebKit).
Lastly, Apple could simply forbid browsers and web content on iOS. This policy would neatly resolve the entire problem. Removing Safari, along with every other iOS browser, is intellectually and competitively defensible as it removes the “special boy” nature of Safari and WebKit. This would also rid Apple of the ethical stain of continuing to string developers and competitors along within standards venues when it is demonstrably an enemy of those processes. ⇐
Apple regularly goes it alone when it is convinced about a design. We have seen this in areas as diverse as touch events, notch CSS, web payments, "liquid glass" effects, and much else. It is not credible to assume that Apple will only ship APIs that have an official seal of an SDO given Cupertino's rich track record of launch-and-pray web APIs over the years. ⇐
In fairness to Apple regarding a "standards position" for Web MIDI, the feature predates Apple's process. But this brings up the origin of the system.
Why does this repository exist? Shouldn't it be rather obvious what other implementers think of that feature, assuming they are engaged in co-design?
Yes, but that assumes engagement.
Just after the Blink fork, a series of incidents took place in which Chromium engineers extrapolated from vaguely positive-sounding feedback in standards meetings when asked about other vendors' positions as part of the Blink Launch Process. This feedback was not a commitment from Apple (or anyone else) to implement, and various WebKit leaders objected to the characterisations. As a way to avoid over-reading tea leaves in the absence of fuller co-design, the "standards position" process was erected in WebKit (and Gecko) so that Chromium developers could solicit "official" positions in the many instances where they were leading on design, in lieu of clearer (though long invited) engagement.
If this does not sound like it augurs well for assertions that Apple engages to help shape designs in a timely way...well, you might very well think that. I couldn't possibly comment. ⇐
There may have been Formal Objections (as defined by the W3C process) in private communications, but Member Confidentiality at the W3C precludes me from saying either way. If Apple did object in this way, it will have to provide evidence of that objection for the public record, as I cannot. ⇐
Apple's various objections to powerful features have never tried to square the obvious circle: why are camera and microphone access (via getUserMedia()) OK, but MIDI et al. are not? What evidence supports the notion that adding chooser-based UIs will lead to pervasive privacy issues that cannot be addressed through mechanisms like those Apple is happy to adopt for Geolocation? Why, despite their horrendous track records, are native apps the better alternative? ⇐
Mozilla objected to Web MIDI on various grounds over the years, and after getting utterly roasted by its own users over failing to support the API, shipped support in Firefox 108 (Dec '22).
The larger question of Mozilla's relationship to device APIs was a winding road. It eventually culminated (for me) in a long discussion at a TAG meeting in Stockholm with EKR of TLS and Mozilla fame.
By 2016, Mozilla was licking its wounds from the failure of Firefox OS and retrenching around a less expansive vision of the future of the web. Long gone were the aspirations for "WebAPIs". Just a few short years earlier, Mozilla would have engaged (if not agreed) about work in this space, but an overwhelming tenor of conservatism and desktop-centricity radiated from Mozilla by the time of this overlapping IETF/W3C meeting.
It didn't make the notes, but my personal recollection of how we left things late in the afternoon in Stockholm was EKR claiming that bandwidth for security reviews was the biggest blocker, and that it was fine if we (Chromium) went ahead with these sorts of designs to prove they wouldn't blow up the world. Only then would Mozilla perhaps consider versions of them.
True to his word, Mozilla eventually shipped Web MIDI on EKR's watch. If past is prologue, we'll only need to wait another three to five years before Web Bluetooth et al. join them. ⇐
My memory is famously faulty, and I have been engaged in a long-running battle with Apple's legal folks relating to the suppression of browser choice on iOS. All of that colours my vision, and so here I have tried to disabuse myself of less generous notions by consulting public evidence to support Apple's case.
What I was able to gather over many hours was overwhelmingly inculpatory. It is not possible, from reading these threads and data points rather than relying on my own recollections, to sustain a belief that Apple have either provided timely constructive feedback on tricky APIs or worked to solve the problems they address. But I am, in the end, heavily biased.
If my conclusions or evidence are wrong, I would very much appreciate corrections; my inbox and DMs are open.
If reliable evidence is provided, I will update this blog post to include it, and I encourage others to post on this topic in opposition to my conclusions. It should not be hard for Apple to make the case, assuming there is evidence to support it, that I've missed important facts. It would have both regulatory and persuasive valence regarding questions I have raised relating to Apple's footprint in the internet standards community. ⇐