Reading List
The most recent articles from a list of feeds I subscribe to.
Shuffling a CSS grid using custom properties
Introduction to CSS if Statements and Conditional Logic
The Performance Inequality Gap, 2026
The Budget, 2026 Edition
Let's cut to the chase, shall we? Updated network test parameters for 2026 are:
- 9 Mbps downlink
- 3 Mbps uplink
- 100 millisecond RTT
Regarding devices, my updated recommendations are the Samsung Galaxy A24 4G (or equivalent) and the HP 14. The goal of these recommendations is to emulate a 75th percentile user experience, meaning a full quarter of devices and networks will perform worse than this baseline.
The updated budget calculator reflects this baseline, allowing us to derive critical-path resource thresholds for three and five second page load targets. Per usual, we consider pages built in two styles: JS-light, where only 15% of bytes over the wire are JavaScript, and JS-heavy, where JavaScript makes up 50% of bytes:
| Target | Markup-based: Total | Markup-based: JS | Markup-based: Other | JS-based: Total | JS-based: JS | JS-based: Other |
|---|---|---|---|---|---|---|
| 3 sec | 2.0MB | 300KB | 1.7MB | 1.2MB | 615KB | 615KB |
| 5 sec | 3.7MB | 570KB | 3.2MB | 2.3MB | 1.15MB | 1.15MB |
Note: Budgets account for two TLS connections.
Many sites initiate more early connections, reducing time available to download resources. Using four connections cuts the three-second budget by 350KiB, to 1.5 MiB / 935 KiB. The five-second budget loses nearly half a megabyte, dropping to 3.2 / 1.9 MiB.
It pays to adopt H/2 or H/3 and consolidate connections.
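To make the shape of this derivation concrete, here is a deliberately simplified budget model in TypeScript. The connection-setup and per-byte JS CPU constants are illustrative assumptions (the real calculator also models TCP slow start and connection parallelism), so the outputs only roughly track the table above:

```ts
// Toy critical-path budget model for a 9 Mbps / 100 ms RTT connection.
// Assumptions (not the author's actual calculator): bytes flow at the full
// link rate (no slow-start model), connection setup costs ~4 RTTs
// (DNS + TCP + TLS 1.3 + request), and JavaScript costs ~3 ms of
// main-thread CPU time per KiB on the target device.
const DOWNLINK_BYTES_PER_S = 9_000_000 / 8; // 9 Mbps
const RTT_S = 0.1;                          // 100 ms
const SETUP_S = 4 * RTT_S;                  // connection setup, overlapped
const JS_CPU_S_PER_BYTE = 0.003 / 1024;     // assumed CPU cost per JS byte

function budgetBytes(targetS: number, jsFraction: number): number {
  const transferS = targetS - SETUP_S; // time left after connection setup
  // Every byte pays transfer time; JS bytes also pay main-thread CPU time.
  const perByteS = 1 / DOWNLINK_BYTES_PER_S + jsFraction * JS_CPU_S_PER_BYTE;
  return transferS / perByteS;
}

const MiB = 1024 ** 2;
console.log((budgetBytes(3, 0.15) / MiB).toFixed(2)); // ~1.9 (markup-based)
console.log((budgetBytes(3, 0.5) / MiB).toFixed(2));  // ~1.1 (JS-based)
console.log((budgetBytes(5, 0.15) / MiB).toFixed(2)); // ~3.3
console.log((budgetBytes(5, 0.5) / MiB).toFixed(2));  // ~1.9
```

Note how the JS-heavy budget shrinks even though wire time per byte is identical; CPU cost is what separates the two columns.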
These budgets are extremely generous. Even the target of three seconds is lavish; most sites should be able to put up interactive content much sooner for nearly all users.
And yet sites are ballooning with no limiting factor. In April the median mobile page grew to be larger than a copy of DOOM, and the 75th percentile site is now larger than two copies of DOOM. The P90 site is more than 4.5x larger. All have doubled in the past decade. Put another way, the median mobile page is now 70 times larger than the total storage of the computer that landed men on the moon.
This is a technical challenge, but also an ethical crisis. This blog continues to document the tragic consequences for those who most need the help technology can offer. The lies, half-truths, and excuses are, if anything, getting worse.
Through no action of their own, frontend developers have been blessed with more compute and bandwidth every year. Instead of converting that bounty into delightful experiences and positive business results, the dominant culture of frontend has leant into self-aggrandising narratives that venerate failure as success. The result is a web that increasingly punishes the poor for their bad luck while paying developers huge salaries to deliver business-undermining results.
As JavaScript grows as a proportion of critical-path resources, its higher CPU cost per byte shrinks budgets further: not all JavaScript work can be parallelised, and script burns main-thread time.
This coffin corner effect explains why image and CSS-heavy experiences perform better byte-for-byte than sites built with the failed tools of frontend's lost decade.
Embedded in this year's estimates is hopeful news about the trajectory of devices and networks. Compared with early 2024's estimates, we're seeing budget growth of 600+KiB for three seconds, and a full megabyte of extra headroom at five seconds.[1]
While this is not enough to overcome continued growth in payloads, budgets are now an order of magnitude more generous than those first sketched in 2017. It has never been easier to deliver pages quickly, but we are not collectively hitting the mark. The median mobile site now weighs more than 2.6 MiB and the 75th percentile site clocks in at 5.2 MiB.
Mobile JavaScript payloads have more than doubled over the past decade, reaching 680 KiB and 1.3 MiB at P50 and P75 (respectively). These compositional shifts exacerbate latent inequality.
Indeed, the latest CrUX data shows not even half of origins have passing Core Web Vitals scores for mobile users. More than 40% of sites still perform poorly for desktop users, and progress in both cohorts is plateauing:
*Chart: Core Web Vitals pass rate (%). Source: the Chrome User Experience Report via BigQuery.*
To get back to a healthy, competitive web, developers will need to apply considerably more restraint. If past is prologue, moderation is unlikely to arise organically, meaning browsers will need to provide stronger incentives. This will be unpopular, but it is clearly necessary.
Recommended Test Devices and Settings
This series has continually stressed that today's P75 device is yesterday's new phone and that trend continues.
Hardware replacement timelines are slowly elongating, so the devices we design to are gradually ageing as well. For mobile, this means our previous estimate of 18 months for median replacement is now incorrect. Device median ages have now risen above 24 months, and P75 devices may be nearly 3 years old. TechInsights estimates a 23.7% annual replacement rate. The explosive smartphone market growth of the mid 2010s is squarely in the rearview mirror, and so historical Average Selling Prices (ASPs) and replacement dynamics now dominate any discussion of fleet performance.
With all of this in mind, we update our target test device to the Samsung Galaxy A24 4G, a mid-2023 release featuring an SoC fabbed on a 6 nanometre process; a notable improvement over previous benchmark devices.
The A24 sold for less than the global Average Selling Price for smartphones at launch ($250 vs. $353). Because that specific model may be hard to acquire for testing, anything based on the MediaTek Helio G99 or Samsung Semi Exynos 1330 will do just fine.
Teams serious about performance should track the low-end cohort instead, sticking with previously acquired Samsung Galaxy A51s, or any late-model device from the Moto E range.[2]
For link-accurate network throttling, I recommend spending $3 for Throttly for Android. It supports custom network profiles, allowing you to straightforwardly emulate a 9/3/100 network. DevTools throttling will always be inaccurate, and this is the best low-effort way to correctly condition links on your primary test device.
Desktops are not the limiting factor, but it's still helpful to have physical test devices. You should not spend more than $250 (new) on a low-end test laptop. It should have a Celeron processor, eMMC storage, and run Windows. The last point is not an effort to sell more licences, but rather to represent the nasty effects of Defender, NTFS, and parasitic background services on system performance. Something like the HP 14 dq3500nr.
Desktop network throttling remains fraught, and the best solutions are still those from Pat Meenan's 2016 article announcing winShaper.
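For repeatable lab runs where link conditioning isn't available, the 9/3/100 profile can also be approximated through the DevTools protocol. A minimal Puppeteer sketch follows; as noted above, protocol-level throttling is less accurate than conditioning the link itself, so treat this as a CI smoke test rather than a source of truth:

```ts
import puppeteer from 'puppeteer';

const browser = await puppeteer.launch();
const page = await browser.newPage();

// Approximate the 9 Mbps / 3 Mbps / 100 ms P75 profile.
await page.emulateNetworkConditions({
  download: (9 * 1_000_000) / 8, // bytes per second
  upload: (3 * 1_000_000) / 8,   // bytes per second
  latency: 100,                  // added round-trip latency in ms
});
await page.emulateCPUThrottling(4); // rough stand-in for a mid-tier SoC

await page.goto('https://example.com', { waitUntil: 'networkidle2' });
console.log(await page.metrics());
await browser.close();
```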
The Big Story Is Still Low-End Android
What we see in our recommended test setups is an echo of the greatest influence of the past decade on smartphone performance: the spread of slow, ever-cheaper Androids with ageing chipsets, riding the cost curve downward, year-on-year.
The explosive growth of this segment drove nearly all market growth between 2011 and 2017. Now that smartphones have reached global saturation, flat sales volumes mirror the long-term trends in desktop device ownership:
At no point in the past dozen years has iOS accounted for more than 20% of new-device shipments. Quarter-by-quarter fluctuations have pushed that number as high as 25% when new models are released, but the effect never lasts.
Most phones — indeed, most computers — are 24+ month old Androids. This is the result of a price-segmented market: a preponderance of smartphones sold for more than $600 USD (new, unlocked) are iPhones, and the overwhelming majority of devices sold for less than that are slow Androids.
*Chart: Average Selling Prices; new, unlocked, worldwide, in nominal dollars (not adjusted for inflation). Full-year 2025 ASP is extrapolated from the first several quarters and may yet be revised upward. Source: IDC, Statista, and Counterpoint Research.*
The “i” in “iPhone” stands for “inequality.”
Global ASPs show the low-end isn't just alive-and-well, it's many multiples of high-end device volume. To maintain a global ASP near $370, an outsized number of cheaper Androids must be sold for every $1K (average) iOS device.
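To see the implied volume split, assume (illustratively; the Android figure is my assumption) an iOS ASP near $1,000 and a sub-$600 Android ASP near $250, then solve the blend for the iOS unit share $s$:

$$
370 \approx 1000\,s + 250\,(1 - s) \;\Rightarrow\; s \approx 0.16
$$

That's roughly five Androids sold for every iPhone, consistent with the 3-5:1 volume ratio cited below.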
The Landscape
To understand how we got to our budget estimate, we need to peer deeper into the device and network situation. Despite huge, unpredictable shocks in the market (positive: Reliance Jio; negative: a pandemic), the trends this series tracks from current and historical market conditions have allowed us to accurately forecast the future.
Overall market stability lets us reliably reflect and predict the attributes of users at the 75th+ percentile. Those users are almost always on older devices, meaning we don't need to divine what will happen, just remember the recent past.
Mobile
The properties of today's mobile devices define how our sites run in practice. From the continued prevalence of 4G radios, to the shocking gaps in CPU performance, the reality of the modern web is best experienced through real devices. The next best way to understand it is through data.
Per usual, I've built single and multicore CPU performance charts for each price point; we track four groups:
- Fastest iOS device
- Fastest Android
- Mid-range Android ($300-350)
- Low-end Android ($100-150)
More than 2/3 of the smartphone market consists of the slowest cohorts. Performance of JavaScript-based web experiences is heavily correlated with single-core performance, meaning that the P75 device places a hard cap on the amount of JavaScript that is reasonable for any website to rely on.
The performance of budget devices has not meaningfully improved since 2022. Each year's nearly identical SoCs get cheaper, and lower bill-of-materials costs translate into reduced retail prices for low-spec phones.
The high end continues to pull away and, as previewed in the prior instalment of this series, is beginning to close the massive performance lead that Apple's A-series chips have opened up over the past decade. This is largely thanks to Qualcomm and MediaTek finally starting to address the cache gap I have harped on since 2016:
*Chart: Total SoC cache (MB); L1 + L2 + L3 + system-level caches. Source: Wikipedia, vendor documentation, and author's calculations.*
Some cache sizings are estimates, particularly in the Android ecosystem, where vendor documentation is lacking. SoC vendors (aside from Apple) also have a habit of implementing the smallest values ARM specifies for a licensed core design. The latest iPhone chip (A19 Pro) features truly astonishing amounts of cache. For a sense of scale, the roughly 50 MiB of L1, L2, and L3 cache in an iPhone 17 Pro provides 8.3 MiB of cache per core. This is more than double the per-core cache of Intel's latest high-end desktop part, the 285K, which provides a comparatively skimpy 3.3 MiB of combined cache per core.
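A quick check on those per-core figures; the only inputs added here are the core counts (6 CPU cores in the A19 Pro, 24 in the 285K):

$$
\frac{\sim 50\ \text{MiB}}{6\ \text{cores}} \approx 8.3\ \text{MiB/core}
\qquad\text{vs.}\qquad
24\ \text{cores} \times 3.3\ \text{MiB/core} \approx 79\ \text{MiB combined}
$$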
The gobsmacking caches of A-series parts allow Apple to keep the beast fed, leading to fewer stalls and more efficient use of CPU time. This advantage redounds to better battery life because well-fed CPUs can retire work faster, returning to lower-power states sooner.
That it has taken this long for even the top end of the Android ecosystem to begin to respond is a scandal.
*Chart: Single-core performance per dollar; Geekbench 6 points per dollar at each price point over time. Prices are new, unlocked MSRP at date of launch. Source: GSMArena, Geekbench, and vendor documents.*
If there's good news for buyers of low-end devices, it's that performance per dollar continues to improve across the board.
Frustratingly, though, low-end single core performance is still 9x slower than contemporary iPhones, and mid-tier devices remain more than 3.5x slower:
Multicore performance tells a similar tale. As a reminder, this metric is less correlated with web application performance than single-core throughput:
*Chart: Geekbench 6 multi-core scores. Where Geekbench reports multiple scores for an SoC, recent non-outlier values are selected. Source: GSMArena, Geekbench.*
Performance per dollar looks compelling for the lower tier parts, but recall that they are 1/3 to 1/5 the performance:
*Chart: Multi-core performance per dollar. Source: GSMArena, Geekbench, and vendor documents.*
Market Trends
What makes iPhones so bloody fast, while Androids have languished? Several related factors in Apple's chip design and development strategy provided a commanding lead over the past decade:
- In-house tuning of all ARM-designed cores (thanks to a long-ago negotiated Architecture licence), particularly for cache sizing.
- Concentration in fewer SoC SKUs, enabling focused yield optimisation.
- Willingness to file large early orders to secure access to the latest fabrication nodes from TSMC.
Android vendors, meanwhile, have spread their SoC design and spending budgets in ways that have been penny-wise and pound-foolish. Even Google and Samsung's in-house efforts have failed to replicate the virtuous effects of alignment that Apple's discipline in CPU design have delivered.
*Chart: Process node (nm). Source: GSMArena, Wikipedia, and vendor documentation.*
Feature sizes are a fudge below 10 nanometres, but marketing names usually reflect real increases in transistor density, reductions in power use, and frequency scaling potential. Despite some seeming symmetry between fast Android and iOS parts in terms of process node, that's less than half the story, as most Android parts have traded away larger caches in favour of more cores and on-die radios. From a performance perspective, this choice has been catastrophic.
*Chart: Core counts. Source: GSMArena, Wikipedia, and vendor documentation.*
Core counts have long been a fixture of headline device marketing, but even inexpensive devices have featured 8 cores for a decade. This shows definitively that speed comes from properties other than core count; namely appropriate cache sizing, main memory latency, frequency scaling, and chip architecture. Apple's core-count restraint and focus on other aspects should have been a lesson to the entire industry long before 2024.
Smaller features also allow for higher peak frequencies, giving Apple a further advantage from early access to TSMC's latest, smallest, and most power-sipping process nodes:
*Chart: Peak frequency; maximum advertised frequency of the fastest on-package core. Source: GSMArena, Wikipedia, and vendor documentation.*
All of these trade-offs have allowed Apple to charge a premium for their devices which no other vendor can justify:
*Chart: Prices vs. Average Selling Price (ASP); prices are new, unlocked MSRP at launch. Source: GSMArena, IDC, Statista, and Counterpoint Research.*
Global ASPs are conservative estimates; some research groups estimate values 10-15% higher, but the trends are consistent. The premium end continues to pull away in both price and performance, dragging ASPs slightly upward. Meanwhile, high-end devices continue to be outsold 3-5:1 by low-end phones that aren't getting faster, but are getting increasingly inexpensive, hovering near $100, or 1/10th the price of the cheapest contemporary iPhone that contains Apple's fastest chip.
Rise of the Refurbs
Thanks to market trends, the recent-spec iPhones many web developers carry don't even represent the experience of most iOS users.
It may seem incongruous that the ASP of iOS devices is bumping up against the $1K mark while most developed countries experience bouts of slow growth post-pandemic, along with well-documented cost-of-living crises among middle-class buyers.
One answer to this conundrum, of course, is that Android sales remain strong and Apple is continually cordoned into the segment of the market it can justify to shareholders with a 30% net margin. The other is growth in the resale market, particularly for iOS devices. The extended longevity of these phones thanks to premium components and better-than-Android software support lifetimes has created a vibrant and growing market below the $400 price point for used and refurbished smartphones.
This also helps to explain the flat-to-slightly-declining market for new smartphones, as refurbished devices accelerate past $40BN/yr in sales.
What does this mean? We should expect, and see, higher-than-first-sale volumes of iOS use in various aggregate statistics. Wealth effects have historically explained much of this, but the scale is growing. Refurbishment and resale are now likely to be driving growing discontinuity in the data:
Not only is an iPhone 17 Pro not real life in the overall market, it isn't even real life in the iOS market any more.
Desktop
The situation on desktop is one of overwhelming stasis, modulo the Windows 11 upgrade cycle resulting from Windows 10's EOL at the end of 2025. That has driven an unusually strong cycle of upgrades that will filter through the data in coming years.
This impetus to adopt new hardware is cross pressured by pricing headwinds. Economic uncertainty, tariffs, scrambled component pricing from AI demand for silicon of all sorts, and ever-longer device replacement cycles all mean that new PCs may not provide more than incremental performance gains. As a result, an increase in recent worldwide PC Average Selling Prices from ~$650 in our previous estimate to ~$750 in 2025 (per IDC) may not indicate premiumisation. In a globally connected economy, inflation comes for us all.
Overall, the composition and trajectory of the desktop market remains stable. Despite 2025's device replacement boomlet, IDC predicts stasis in “personal computing device” volumes, with growth bumping along at ±1-2% a year for the next 5 years, and landing just about where things are today. The now-stable baseline of ~410MM devices per year is predicted to be entirely flat into 2030.
Top-line things to remember about desktops are:
- 80% of “desktop” devices are laptops, tablets, and other battery-powered form-factors. The performance impact of thermal and power envelopes for these devices is drastic; many cores are spun down to low power states most of the time; symmetric multiprocessing is now a datacentre curio.
- Flaky Wi-Fi, rather than wired Ethernet, is now the last mile to most desktops.
- Desktops (including laptops) are only 15-18% of web-capable device sales; a pattern that has been stable for a decade.
Put another way: if you spend a majority of your time in front of a computer looking at a screen that's larger than 10", you live in a privilege bubble.
Evidence From the Edge User Base
Per previous instalments, we can use Edge's population-level data to understand the evolution of the ecosystem. As of late 2025, the rough breakdown looks like:
| Device Tier | Fleet % | Definition |
|---|---|---|
| Low-end | 30% | Either: <= 4 cores, or <= 4GB RAM |
| Medium | 60% | HDD (not SSD), or 4-16 GB RAM, or 4-8 cores |
| High | 10% | SSD + > 8 cores + > 16GB RAM |
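Those tier definitions are simple enough to encode directly. A sketch (the predicate logic is transcribed from the table above; the field names are invented for illustration):

```ts
interface DeviceSpecs {
  cores: number;
  ramGB: number;
  hasSSD: boolean;
}

// Tier definitions transcribed from the table above.
function deviceTier(d: DeviceSpecs): 'low-end' | 'medium' | 'high' {
  if (d.cores <= 4 || d.ramGB <= 4) return 'low-end';
  if (d.hasSSD && d.cores > 8 && d.ramGB > 16) return 'high';
  return 'medium'; // HDD, or 4-16 GB RAM, or 4-8 cores
}

console.log(deviceTier({ cores: 4, ramGB: 8, hasSSD: true }));   // "low-end"
console.log(deviceTier({ cores: 8, ramGB: 16, hasSSD: true }));  // "medium"
console.log(deviceTier({ cores: 12, ramGB: 32, hasSSD: true })); // "high"
```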
Compared with the data from early 2024, we see some important shifts:
- Low-end devices have fallen from ~45% to ~30% of the fleet.
- Most growth is in the middle tier, which has grown from 48% to 60%.
- The high-end is growing slowly.
Because the user base is also growing, it's worth mentioning that the apparent drop in the low-end is a relative change. In absolute terms, low-spec machines are being removed from the ecosystem only slowly. This matches what we should intuitively expect from incremental growth of the Edge user base, which is heavily skewed to Windows 11.
Older and slower devices likely constitute an even larger fraction of the total market, but may be invisible to us. Indeed, computers with spinning rust HDDs, <= 4GB of RAM, and <= 2 cores still represent millions of active devices in our data. Alarmingly, they have dropped by less than 20% in absolute terms over the past six months. And remember, this isn't even the low end of the low end, as our stats don't include Chromebooks.
Building to the limits of “feels fine on my MacBook Pro” has never been realistic, but in 2025 it's active malpractice.
Networks
The TL;DR for networks for 2026 is that the P75 connection provides 9Mbps of downlink bandwidth, 3Mbps upload, with 100ms of RTT latency.
This 9Mbps down, 3Mbps up configuration represents sizeable uplift from the 2024/2025 guidance of 7.2Mbps down, 1.4Mbps up, with 94ms RTT, but also a correction. 2024's estimate for latency was probably off by ~15%, and should have been set closer to 110ms.
Global or ambitious firms should target networks even slower than 9/3/100 in their design parameters; the previous 5/1/28 “cable” network configuration is still worth building for, but with an upward adjustment to latency (think 5/1/100). Uneven service has the power to negatively define brands, and the way to remain in users' good graces digitally is to build for the P95+ user. Smaller, faster sites that serve this cohort well will stand out as being exceptional, even on the fastest networks and devices.
Looking forward into 2026 and 2027, the hard reality is that networks remain slower than developers expect and will not improve quickly.
Upload speeds, in particular, remain frustratingly slow. This matters because wider upload channels correlate with faster early-session downloads. Only users on the fastest 10% of networks see downlink:uplink ratios fall below 2:1, even under ideal conditions.[3]
Owing to physics, device replacement rates, the CAPEX budgets of mobile carriers, inherent variability in mobile networks (vs. fixed-line broadband), and worldwide regulatory divergence, we should expect experiences to be heavily bottlenecked by networks for the foreseeable future.
Previous posts in this series have documented improvements, but as we have been saying since 2021, 4G is a miracle, and 5G is a mirage.
This will remain true for at least another three years, with bandwidth and latency improving only incrementally. Sites that want to reach their full potential, if only to beat the competition, must build to budgets that are inclusive of folks on the margins. When it comes to web performance, doing well is the same as doing good.
Bandwidth
Bandwidth numbers are derived from Cloudflare's incredible Radar dataset.[4] Looking at (downlink) bandwidth trends over the past year, what we primarily see is stasis:
So where does the improvement in our estimate come from? Looking back to 2024, we see (predicted) gains emerge, but note their small absolute size:
The gap between P25 and P75 downlinks was 15Mbps at the start of 2024 and has grown to 21Mbps at the end of 2025, an increase of 40%. Meanwhile, bottom-quartile networks are only 28% faster, improving from ~7Mbps to 9Mbps. In absolute terms, wealthier users saw 3x the gain.
The performance inequality gap is growing at the network layer too.
Latency
Latency across networks (RTTs) is improving somewhat, with a nearly 10% decrease at P75 over the past year, from ~110ms to ~100ms. Small improvements on faster links (P50, P25) look to be in the 5% range:
Given the variability at higher percentiles, we'll stick with a 100ms target for the 2026 calendar year, although we should expect slight gains.
Underlying these improvements are datacentre build outs in traditionally underserved regions, undersea cable completions, and cellular backhaul improvements from the 5G build out. Faster client CPUs and radios will also contribute meaningfully and predictably.[5]
All of these factors will continue to incrementally improve, and we predict another ~5% improvement (to 95ms) at P75 for 2027.
Looking Forward
Gains will be modest for both bandwidth and latency over the next few years. The lion's share of web traffic is now carried across mobile networks, meaning that high percentiles represent users feeling the confounding effects of cellular radios, uneven backhaul, coverage, and the built environment. Those factors are hardest and slowest to change, involving the largest number of potential long tent poles.
As discussed in previous instalments, the world's densest emerging markets reached the smartphone tipping point by 2020, displacing feature phones for most adults. More recently, we have approached saturation; the moment at which smartphone sales are dominated by replacements, rather than first-time sales in those markets. Affordability and literacy remain large challenges to get the residually unconnected online. The 2025 GSMA State of Mobile Internet Connectivity report has a full chapter on these challenges (PDF), complete with rich survey data.
What we see in now-stable smartphone shipment volumes primes the pump for improvements in device performance. First-time smartphone users invest less in their devices (relative to income) as the value may be unclear; users shopping for a second phone have clearer expectations and hopes. Having lived the frustration of slow sites and apps, an incrementally larger set of folks will be willing to pay a bit more to add 5G modems to their next phone.
This effect is working its way down the price curve from the ultra-high-end in 2020 to the mid-tier in 2024, but it has yet to reach the low end. Our latest low-end specimen — 2025's Moto E15 — still features a 4G radio. Because of the additional compute requirements that 5G places on SoCs, we should expect to see process node and CPU performance increases as 5G costs fall far enough to impact the low-price band.
Devices with otherwise similar specs and a 5G radio still command a considerable premium. The Galaxy A16 5G was introduced at a 20% premium over the 4G version, mirroring the mid-market dynamic from 2022 and 2023 where devices were offered in “regular” and (pricier) 5G versions. It will take a transition down the fab node scale like we saw for mid-market SoCs in 2022 and 2024 to make 5G a low-end table-stakes feature. I'm not holding my breath. Given the current market for chips of all types, we may be seeing low-spec SoCs produced at 12 nm for several more years, reprising the long-term haunting of Android by A53 cores produced at 28 nm from 2012 to 2019.
Content Trends
The budget estimates generated in this series may seem less relevant now that tools like Core Web Vitals provide nuanced, audience-specific metrics that allow us to characterise important UX attributes. But this assumption is flawed thanks to the effects of deadweight losses and the biases inherent in those metrics. Case-in-point: last year CWV deprecated FID and replaced it with the (better calibrated) INP metric. This predictably dropped the CWV pass rates of sites built on desktop-era JavaScript frameworks like React, Angular, and Ember:
RUM data, in isolation, undercounts the opportunity costs of slow and bloated experiences. Users have choices, and lost users do not show up in usage-based statistics.
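One partial antidote is collecting your own field data. A minimal sketch using Google's web-vitals library (the /rum endpoint is a placeholder):

```ts
import { onINP, onLCP, type Metric } from 'web-vitals';

// Beacon each final metric value to a first-party endpoint ("/rum" is a
// placeholder) so regressions in the slow tail show up in your own data.
function report(metric: Metric): void {
  navigator.sendBeacon('/rum', JSON.stringify({
    name: metric.name,     // "INP" or "LCP"
    value: metric.value,   // milliseconds
    rating: metric.rating, // "good" | "needs-improvement" | "poor"
  }));
}

onINP(report);
onLCP(report);
```

Even then, remember the survivorship bias above: the users you drove away never emit a beacon.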
A team I worked with this year saw these effects play out directly in their CrUX data:
This high-profile, fast-growing site added nearly 100 KiB of critical-path JavaScript per month from January to June. The result? A growing share of visits from desktop devices, and proportionally fewer mobile users every month. Once we began to fumigate for dead code and overeager preloading, mobile users returned.
These effects can easily overcome other factors, particularly in the current era of JavaScript bundle hyper-growth:
Are SPAs Working?
Perhaps the most important insight I spotted while re-immersing myself in the data for this post lies in the implications of these charts from the RUM Archive:
The top-line takeaway is chilling: sites that are explicitly designed as SPAs, and which have intentionally opted in to metrics measurement around soft-navigations are seeing one (1) soft-navigation for every full page load on average. The rinky-dink model we discussed last year for the appropriateness of investing in SPA-based stacks is a harsh master, defining something like average experienced performance over a session in terms of the sum of interaction latencies, including initial navigation:
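One way to formalise that model (my notation; a reconstruction of the idea rather than the author's exact formula): with $N$ soft navigations following a hard navigation, the average experienced latency per navigation over a session is

$$
\bar{T} \;=\; \frac{T_{\text{hard}} + \sum_{i=1}^{N} T_{\text{soft},i}}{1 + N}
$$

At $N \approx 1$, any up-front JavaScript baked into $T_{\text{hard}}$ is amortised over just two navigations, so soft navigations must be dramatically cheaper than full loads for the architecture to break even.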
If the RUM Archive's data is directionally correct, this holds at an ecosystem level for both mobile and desktop. Sessions this shallow make a mockery of the idea that we can justify more up-front JavaScript to deliver SPA technology, even on sites with reason to believe it would help.
In private correspondence, Michal Mocny shared an early analysis from data collected via the Soft Navigations API Origin Trial. Unlike the Akamai mPulse data that feeds the RUM Archive, Chromium's data tracks interactions from all sites, not only those that have explicitly opted in to track soft navigations, providing a much wider aperture. On top-10K origins, Chrome is currently observing values between 1.6 and 2.2, depending on how the analysis is run, or 0.8-1.1 additional soft navigations per initial page load.
It's difficult to convey the earth-shattering magnitude of these congruent findings. Under these conditions, the amount of JavaScript a developer can justify up-front to support follow-on in-page navigations is de minimis.[6]
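For sites enrolled in the origin trial, the ratio can be estimated first-party. A sketch, assuming the trial's experimental 'soft-navigation' performance entry type (subject to change as the API evolves):

```ts
// Count soft navigations per hard navigation (i.e., per page load).
// Requires the Soft Navigations API Origin Trial; entry type is experimental.
let softNavs = 0;

new PerformanceObserver((list) => {
  softNavs += list.getEntries().length;
}).observe({ type: 'soft-navigation', buffered: true });

// Beacon the count when the page is hidden ("/rum" is a placeholder).
addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden') {
    navigator.sendBeacon('/rum', JSON.stringify({ softNavsPerLoad: softNavs }));
  }
});
```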
This should shake our industry to the bone, driving rapid reductions in emitted JavaScript. And yet:
Analysis and Conclusions
This series has three main goals:
- Provide a concrete set of page-weight targets for working web developers.
- Arm teams with an understanding of how the client-side computing landscape is evolving.
- Show how budgets are constructed, giving teams tools to construct their own estimates from their own RUM and market data.
This is not altruism. I want the web to win. I began to raise the alarm about the problems created by a lack of adaptation to mobile's constraints in 2016, and they have played out on the same trend-line I feared. The web is now decisively losing the battle for relevance.
To reverse this trend, I believe several (currently unmet) conditions must be fulfilled:
- Reaching users via the web must be cost-competitive with other digital channels, meaning that re-engagement features like being on the home screen and in the notification tray must work correctly.
- The web must deliver the 80/20 set of critical capabilities for most of the important JTBDs in modern computing, but in a webby (safe, privacy-respecting by default) way.
- Web experiences, on average, have to feel responsive enough that the idea of tapping a link doesn't inspire dread.
But the web is not winning mobile. Apple, Google, and Facebook nearly extinguished the web's potential to disrupt their cosy arrangement. Preventing the web from breaking out — from meaningfully delivering app-like experiences outside an app store — is essential to maintaining dominance. But some are fighting back, and against the odds, it's working.
What's left, then, is the subject of this series. Even if browser competition comes to iOS and competitors deliver the features needed to make the web a plausible contender, the structure of today's sites is an impediment to a future in which users prefer the web.
Most of the world's computing happens on devices that are older and slower than anything on a developer's desk, and connected via networks that contemporary “full-stack” developers don't emulate. Web developers almost never personally experience these constraints, and over frontend's Lost Decade, this has created an out-of-touch privilege bubble that poisons the products of the teams that follow the herd, as well as the broader ecosystem.
That's bleak, but the reason I devote weeks to this research each year isn't to scold. The hope is that actionable targets can help shape choices, and that by continuing to stay engaged with the evolution of the landscape, we can see green shoots sprout.
If we can hold down the rate of increase in critical-path resource growth it will give hardware progress time to overtake our excesses. If we make enough progress in that direction, we might even get back to a place where the web is a contender in the minds of users.
And that's a future worth working for.
FOOTNOTES
[1] I really do try to avoid being an unremitting downer, but the latest device in our low-cost cohort — the Motorola E15 — is not an improvement in CPU benchmarks from last year.
More worrying, no device in that part of the market has delivered meaningful CPU gains since 2020. That's five years of stasis, and a return to a situation where new low-end devices are half as fast as their mid-tier contemporaries. Even as process node improvements trickle down to the $300-350 price bracket, the low end is left further and further behind.
As the wider Android ecosystem experienced from 2015-2020, devices with the same specs are getting cheaper, but not better. This allows them to open new markets and sell in massive numbers, helping to prop up overall annual device sales, even as devices last longer and former growth markets (India, Indonesia, etc.) hit smartphone saturation. This is reflected in the low-end models finally sinking below $100USD new, unlocked, at retail. But to hit this price-point, they deliver performance on par with 2019's mid-tier Galaxy A50; a phone whose CPU was fabbed on a smaller process node than today's latest low-end phones.
Services trying to scale, and anyone trying to build for emerging markets, should be anchoring P90 or P95, not P75. For serious shops, the target has not moved much at all since 2019.
This reality alone is enough to justify rejection of frameworkist hype-shilling, without even discussing the negative impacts of JS-first stacks on middle-distance team velocity and site maintenance costs. ⇐
[2] Because low-end and medium-tier devices were so similar until very recently, this differentiation wasn't necessary. But progress in process nodes does eventually trickle down. The mid-tier began to see improvement away from the (utterly blighted) 28nm node in 2017, a mere 4 years after the high-end decamped for greener pastures. The low-end, meanwhile, was trapped in that register until 2020, nearly a decade after 28nm was first introduced. Since then, the mid tier has tracked process node scaling with a 2-3 year delay, while the low end has been stuck at 12nm since 2021.
The failure to include meaningful amounts of cache in Android SoCs levelled out low-end and medium tier performance until 2023, but transistor shrinkage and architecture improvements at ~$350 are now decoupling the performance of these tiers once again, and we should expect to see them grow further apart in coming years, creating a worrying gap between P90+ and P75 devices. ⇐
[3] Cloudflare's worldwide bandwidth data only provides downlink estimates, and so to understand the downlink:uplink bandwidth ratio, I turned instead to their API and queried CF's speed test data, which provides fuller histograms.
These tests are explicitly run by users, meaning they occur under synthetic conditions and likely with a skew towards best-case network performance. They also report maximums over a session, rather than loaded network behaviour, which explains the divergence between the higher speed values reported there and the more realistic "Internet Quality Index" dataset we use for primary bandwidth and latency analysis.
The data we can derive from it is therefore much rosier, but it does give us a sense for downlink/uplink ratios (bitrates are in Mbps):
| %-ile | Download (Mbps) | Upload (Mbps) | Ratio |
|---|---|---|---|
| 20th | 15 | 5 | 3 |
| 25th | 25 | 10 | 2.5 |
| 30th | 30 | 10 | 3 |
| 40th | 50 | 20 | 2.5 |
| median | 85 | 35 | 2.4 |
| 60th | 125 | 50 | 2.5 |
| 70th | 220 | 85 | 2.6 |
| 75th | 280 | 100 | 2.8 |
| 80th | 345 | 140 | 2.5 |
| 90th | 525 | 280 | 1.9 |

Because the data is skewed in an optimistic direction (thanks to usage biases towards wealth, which correlates with high-performance networks), we pick a 3:1 ratio in our global baseline.
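For readers reproducing this, the per-percentile ratio computation is straightforward once paired samples are in hand. A sketch (the data shape is illustrative; Cloudflare's API returns histograms rather than raw samples):

```ts
interface Sample { down: number; up: number; } // Mbps

// Estimate the downlink:uplink ratio at a percentile of downlink speed.
function ratioAtPercentile(samples: Sample[], p: number): number {
  const sorted = [...samples].sort((a, b) => a.down - b.down);
  const i = Math.min(sorted.length - 1, Math.floor((p / 100) * sorted.length));
  return sorted[i].down / sorted[i].up;
}

// Illustrative values taken from the table above.
const samples: Sample[] = [
  { down: 15, up: 5 }, { down: 85, up: 35 },
  { down: 280, up: 100 }, { down: 525, up: 280 },
];
console.log(ratioAtPercentile(samples, 75).toFixed(1));
```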
Despite variance in the lower percentiles, it is reasonable to expect tough ratios in the bottom quartile given the build-out properties of various network types. These include:
- Asymmetries in cable and DSL channel allocations.
- Explicit frequency/bandwidth allocation in cellular networks.
- Radio power lopsidedness vs. the base stations they connect to, particularly for battery-powered devices.
Even new networks like Starlink are spec'd with 10:1 or greater ratios. Indeed, despite being "fast", the author's own home fixed-line connection has a ratio greater than 30:1. We should expect many such discrepancies up and down the wealth spectrum.
A 4:1 or 5:1 ratio is probably justified, and previous estimates used 5:1 ratios for that reason. Lacking better data, going with 3:1 is a judgement call, and I welcome feedback on the choice. ⇐
[4] Why am I relying on Cloudflare's data?
Google, Microsoft, Amazon, Fastly, Akamai, and others obviously have similar data (at least in theory), but do not publish it in such a useful and queryable way. That said, these estimates are on trend with my priors about the network situation developed from many sources over the years (including non-public datasets).
There is a chance Cloudflare's data is unrepresentative, but given their CDN market penetration, my primary concern is that their data is too rosy, rather than too pessimistic. Why? Geographic and use-based bias effects.
The wealthy are better connected and heavier internet users, generating more sessions. Better performance of experiences increases engagement, so we know CF's data contains a bias towards the experiences of the affluent. This potentially blinds us to a large fraction of the theoretical TAM and (I think) convincingly argues that we should be taking a P90 value instead of P75. However, we stick with P75 for two reasons:
- It would be incongruent to cite P90 this year without first introducing it in previous instalments.
- A lack of explicitly triangulating data from the current network environment makes it challenging to judge the magnitude of use-based biases in the data.
Thankfully, Cloudflare also produces country-level data. We can use this to cabin the scale of potential issues in global data. Here, for instance, are the P75 network situations for a few populous geos that every growth-oriented international brand must consider in descending downlink speed:
| Geo @ P75 | Down (Mbps) | Up (Mbps) | RTT (ms) | Pop (MM) |
|---|---|---|---|---|
| UK | 21 | 7 | 34 | 69 |
| USA | 17 | 5.5 | 47 | 340 |
| Brazil | 12 | 4 | 60 | 213 |
| Global | 9 | 3 | 100 | |
| Indonesia | 6.4 | 2.1 | 75 | 284 |
| India | 6.2 | 2.1 | 85 | 1,417 |
| Pakistan | 4 | 1.3 | 130 | 241 |
| Nigeria | 3.1 | 1 | 190 | 223 |

Underweighting the less-affluent is a common bias in tech, and my consulting experience has repeatedly reconfirmed what Tammy Everts writes about when it comes to the opportunities that are available when sites push past performance plateaus.
There is no such thing as “too fast”, but most teams are so far away from minimally acceptable results that they have never experienced the huge wins on the other side of truly excellent and consistent performance. Entire markets open up when teams expand access through improved performance, and wealthy users convert more too.
It's this reality that lemon vendors have sold totally inappropriate tools into, and the results remain shockingly poor. ⇐
[5] As we mentioned in the last instalment, improvements in mid-tier and low-end mobile SoCs are delivering better network performance independent of protocol and spectrum improvements.
Modern link-layer cell and Wi-Fi stacks rely heavily on client-side compute for the digital signal processing necessary to implement advanced techniques like MIMO beam forming.
This makes the device replacement rates doubly impactful, even within radio generations and against fixed channel capacity. As process improvements trickle down (glacially) to mid-tier and low-end SoCs, the radios they contain also get faster processing, improving latency and throughput, ceteris paribus. ⇐
[6] The RUM Archive's soft-to-hard navigations ratio and the early data from the Chromium Soft Navigations Origin Trial leave many, many questions unanswered including, but not limited to:
- What's the distribution?
    - Globally: do some SPA-premised sites have many more or many fewer soft-navigations? Are only a few major sites pushing the ratios up (or down)?
    - Locally: can we characterise users' sessions to understand what fraction trigger many soft-navigations per session?
- Do other data sources agree?
    - Will the currently-running Origin Trial for Soft Navigations continue to agree as the reach grows?
    - Can other RUM vendors validate or refute these insights?
- What about in-page changes not triggering URL updates?
    - How should infinite-scrolling be counted?
    - We should expect Chromium histogram data to capture more of this vs. the somewhat explicit instrumentation of mPulse, driving up soft-navigations per hard navigation. Do things stay in sync in these data sets over time?
Given the scale of the mystery, a veritable stampede of research in the web performance community should follow. I hope to see an explosion of tools to guide teams toward the most appropriate architectures based on comparative data within their verticals, first-party RUM data about session lengths, the modality (mono/bi/tri/multi) of session distributions, and other situated factors.
The mystery I have flicked at in the past is now hitting us smack in the face. Will we pay attention? ⇐
Italy workation, Romania, and Portugal ✈️
The last couple of months have been quite a bit of traveling. Two pre-planned trips and one quite impromptu one to just get away somewhere warm for a bit.