Reading List
The most recent articles from a list of feeds I subscribe to.
Progress Delayed Is Progress Denied
Update (June 16th, 2021): Folks attempting to build mobile web games have informed me that the Fullscreen API remains broken on iOS for non-video elements. This hobbles gaming and immersive media experiences in a way that is hard to overstate. Speaking of being hobbled, the original post gave Apple credit for eventually shipping a useable implementation of IndexedDB. It seems this was premature.
Three facts...
- Apple bars web apps from the only App Store allowed on iOS.[1]
- Apple forces developers of competing browsers to use their engine for all browsers on iOS, restricting their ability to deliver a better version of the web platform.
- Apple claims that browsers on iOS are platforms sufficient to support developers who object to the App Store's terms.
...and a proposition:
Apple's iOS browser (Safari) and engine (WebKit) are uniquely under-powered. Consistent delays in the delivery of important features ensure the web can never be a credible alternative to its proprietary tools and App Store.
This is a bold assertion, and proving it requires overwhelming evidence. This post mines publicly available data on the pace of compatibility fixes and feature additions to assess the claim.
Steve & Tim's Close-up Magic #
Misdirections often derail the debate around browsers, the role of the web, and App Store policies on iOS. Classics of the genre include:
Apple's just focused on performance!
...that feature is in Tech Preview
Apple's trying, they just added <long-awaited feature>
These points can be simultaneously valid and immaterial to the web's fitness as a competent alternative to native app development on iOS.
We have to check reservoir levels and seasonal rainfall to know if we're in a drought. It might be raining features right this instant, but weather isn't climate. We should look at trends rather than individual releases to understand the gap Apple created and maintains between the web and native.
Before we get to measuring water levels, I want to make some things excruciatingly clear.
First, what follows is not a critique of individuals on the Safari team or the WebKit project; it is a plea for Apple to fund their work adequately[2] and allow competition. They are, pound for pound, some of the best engine developers and genuinely want good things for the web. Apple Corporate is at fault, not Open Source engineers or the line managers who support them.
Second, projects having different priorities at the leading edge is natural and healthy. So is speedy resolution and agreement. What's unhealthy is an engine trailing far behind for many years. Even worse are situations that cannot be addressed through browser choice. It's good for teams to be leading in different areas, assuming that the "compatible core" of features continues to expand at a steady pace. We should not expect uniformity in the short run — it would leave no room for leadership[3].
Lastly, while this post does measure the distance Safari lags, let nobody mistake that for the core concern: iOS App Store policies that prevent meaningful browser competition are at issue here.
Safari trails competing macOS browsers by roughly the same amount, but it's not a crisis because genuine browser choice enables meaningful alternatives.
macOS Safari is compelling enough to have maintained 40-50% share for many years amidst stiff competition. Safari has many good features, and in an open marketplace, choosing it is entirely reasonable.
The Performance Argument #
As an engineer on a browser team, I've been privy to the blow-by-blow of various performance projects, benchmark fire drills, and the ways performance marketing impacts engineering priorities.
All modern browsers are fast, Chromium and Safari/WebKit included. No browser is always fastest. As reliably as the Sun rises in the East, new benchmarks launch projects to re-architect internals to pull ahead. This is as it should be.
Healthy competitions feature competitors trading the lead with regularity. Performance measurement is easy to get wrong, and spurious reports of "10x worse" performance merit intense scepticism; they tend instead to be mismeasurement, which makes sense given every browser team's intense focus on performance.
After 20 years of neck-and-neck competition, often starting from common code lineages, there just isn't that much left to wring out of the system. Consistent improvement is the name of the game, and it can still have positive impacts, particularly as users lean on the system more heavily over time.
All browsers are deep into the optimisation journey, forcing complex tradeoffs. Improving things for one type of device or application can regress them for others. Significant gains today tend to come from (subtly) breaking contracts with developers in the hopes users won't notice. There isn't a massive gap in focus on performance engineering between engines.
Small gaps and a frequent hand-off of the lead imply differences in capability and correctness aren't the result of one team focusing on performance while others chase different goals[4].
Finally, the choice to fund feature and correctness work is not mutually exclusive to improving performance. Many delayed features on the list below would allow web apps to run faster on iOS. Internal re-architectures to improve correctness often yield performance benefits too.
The Compatibility Tax #
Web developers are a hearty bunch; we don't give up at the first whiff of bugs or incompatibility between engines. Deep wells of knowledge and practice centre on the question: "how can we deliver a good experience to everyone despite differences in what their browsers support?"
Adaptation is a way of life for skilled front enders.
The cultural value of adaptation has enormous implications. First, web developers don't view a single browser as their development target. Education, tools, and training all support the premise that supporting more browsers is better (ceteris paribus), creating a substantial incentive to grease squeaky wheels. Therefore, bridging the gap between leading and trailing-edge browsers is an intense focus of the web development community. Huge amounts of time and effort are spent developing workarounds (preferably with low runtime cost) for lagging engines[5]. Where workarounds fail, cutting features and UI fidelity is understood to be the right thing to do.
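The workaround discipline described above usually begins with feature detection: probe for a capability, then choose the richest supported code path, degrading gracefully where an engine lags. A minimal sketch of the pattern (the `pickStoragePath` helper and its `env` parameter are illustrative, not from any particular library; `env` stands in for `window` so the logic can be exercised outside a browser):

```javascript
// Feature detection: probe the environment, then pick the richest
// supported code path, cutting fidelity where an engine lags.
// `env` is a stand-in for `window`/`globalThis`.
function pickStoragePath(env) {
  if (typeof env.indexedDB !== 'undefined') return 'indexeddb';       // preferred
  if (typeof env.localStorage !== 'undefined') return 'localstorage'; // workaround
  return 'memory';                                                    // feature cut
}
```

In a real page this would be called as `pickStoragePath(window)`, with the `'memory'` branch meaning data is lost on reload — exactly the kind of fidelity cut described above as "the right thing to do" when workarounds fail.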
Compatibility across engines is key to developer productivity. To the extent that an engine has more than 10% share (or thereabouts), developers tend to view features it lacks as "not ready". It's therefore possible to deny web developers access to features globally by failing to deliver them at the margin.
A single important, lagging engine can make the whole web less competitive this way.
To judge the impact of iOS along this dimension, we can try to answer a few questions:
- How far behind both competing engines is Safari regarding correctness?
- When Safari has implemented essential features, how often is it far ahead? Behind?
Thanks to the Web Platform Tests project and wpt.fyi, we have the makings of an answer for the first:
The yellow Safari line is a rough measure of how often other browsers are compatible, but Safari's implementation is wrong. Conversely, the much lower Chrome and Firefox lines indicate Blink and Gecko are considerably more likely to agree and be correct regarding core web standards[6].
wpt.fyi's new Compat 2021 dashboard narrows this full range of tests to a subset chosen to represent the most painful compatibility bugs:
In almost every area, Apple's low-quality implementation of features WebKit already supports requires workarounds. Developers would not need to find and fix these issues in Firefox (Gecko) or Chrome/Edge/Brave/Samsung Internet (Blink). This adds to the expense of developing for iOS.
Converging Views #
The Web Confluence Metrics project provides another window into this question.
This dataset is derived by walking the tree of web platform features exposed to JavaScript, an important subset of features. The available data goes back further, providing a fuller picture of the trend lines of engine completeness.
Engines add features at different rates, and the Confluence graphs illuminate both the absolute scale of differences and the pace at which releases add new features. The data is challenging to compare across those graphs, so I extracted it to produce a single chart:
Higher is better.
In line with Web Platform Tests data, Chromium and Firefox implement more features and deliver them to market more steadily. From this data, we see that iOS is the least complete and competitive implementation of the web platform, and the gap is growing. At the time of the last Confluence run, the gap had stretched to nearly 1000 APIs, doubling since 2016.
Perhaps counting APIs gives a distorted view?
Some minor additions (e.g. CSS's new Typed Object Model) may feature large expansions in API surface. Likewise, some transformative APIs (e.g. webcam access via getUserMedia() or Media Sessions) may only add a few methods and properties.
To understand if intuitions formed by the Web Confluence data are directionally correct, we need to look more deeply at the history of feature development and connect APIs to the types of applications they enable.
Material Impacts #
Browser release notes and caniuse tables since Blink forked from WebKit in 2013[7] capture the arrival of features in each engine over an even longer period than either WPT or the Confluence dataset. This record can inform a richer understanding of how individual features and sets of capabilities unlock new types of apps.
Browsers sometimes launch new features simultaneously (e.g., CSS Grid and ES6). More often, there is a lag between the first and the rest. To provide a sizeable "grace period", and account for short-run differences in engine priorities, we look primarily at features with a gap of three years or more[8].
What follows is an attempt at a full accounting of features launched in this era. A summary of each API and the impact of its absence accompanies every item.
Where Chrome Has Lagged #
It's healthy for engines to have different priorities, leading every browser to avoid certain features. Still, mistakes have been made, and Chrome has missed several APIs for 3+ years:
- Storage Access API
-
Introduced in Safari three years ago, this anti-tracking API was under-specified, leading to significant divergence in API behaviour across implementations. The low quality of Apple's initial versions of "Intelligent Tracking Prevention" created a worse tracking vector (pdf), which was subsequently repaired[9].
On the positive side, this has spurred a broader conversation around privacy on the web, leading to many new, better-specified proposals and proposed models.
- CSS Snap Points
-
Image carousels and other touch-based UIs are smoother and easier to build using this feature. Differences within the Blink team about the correct order to deliver this vs. Animation Worklets led to regrettable delays.
- Initial Letter
-
An advanced typography feature, planned in Blink once the LayoutNG project finishes.
- position: sticky
-
Makes "fixed" elements in scroll-based UIs easier to build. The initial implementation was removed from Blink post-fork and re-implemented on new infrastructure several years later.
- CSS color()
-
Wide gamut colour is important in creative applications. Chrome does not yet support this for CSS, but support is under development for <canvas> and WebGL.
- JPEG 2000
- HEVC/H.265
-
Next-generation video codecs, supported in many modern chips, but also a licensing minefield. The open, royalty-free codec AV1 has been delivered instead.
Where iOS Has Lagged #
Some features in this list were launched in Safari but were not enabled for other browsers forced to use WebKit on iOS (e.g. Service Workers, getUserMedia). In these cases, only the delay to shipping in Safari is considered.
- getUserMedia()
-
Provides access to webcams. Necessary for building competitive video experiences, including messaging and videoconferencing.
These categories of apps were delayed on the web for iOS by five years.
- WebRTC
-
Real-time network protocols for enabling videoconferencing, desktop sharing, and game streaming applications.
Delayed five years.
- Gamepad API
-
Fundamental for enabling the game streaming PWAs (Stadia, GeForce NOW, Luna, xCloud) now arriving on iOS.
Delayed five years.
- Audio Worklets
-
Audio Worklets are a fundamental enabler for rich media and games on the web. Combined with WebGL2/WebGPU and WASM threading (see below), Audio Worklets unlock more of a device's available computing power, resulting in consistently good sound without fear of glitching.
After years of standards discussion, and first delivery on other platforms in 2018, iOS 14.5 finally shipped Audio Worklets this week.
- IndexedDB
-
A veritable poster-child for the lateness and low quality of Safari amongst web developers, IndexedDB is a modern replacement for the legacy WebSQL API. It provides developers with a way to store complex data locally.
Initially delayed by two years, first versions of the feature were so badly broken on iOS that independent developers began to maintain lists of show-stopping bugs.
Had Apple shipped a usable version in either of the first two attempts, IndexedDB would not have made the three-year cut. The release of iOS 10 finally delivered a workable version, bringing the lag with Chrome and Firefox to four and five years, respectively.
- Pointer Lock
-
Critical for gaming with a mouse. Still not available for iOS or iPadOS.
Update: some commenters seem to sneer at the idea of using a mouse for gaming on iOS, but it has been reported to browser teams as a key feature by the teams building game streaming PWAs. It appears a sizeable set of users use external mice and keyboards with their iPads, and entire categories of games are functionally unusable on these platforms without Pointer Lock.
- Media Recorder
-
Fundamentally enabling for video creation apps. Without it, video recordings must fit in memory, leading to crashes.
This was Chrome's most anticipated developer feature ever (measured by stars). It was delayed by iOS for five years.
- Pointer Events
-
A uniform API for handling user input like mouse movements and screen taps that is important in adapting content to mobile, particularly regarding multi-touch gestures.
First proposed by Microsoft, delayed three years by Apple[10].
- Service Workers
-
Key API enabling modern, reliable offline web experiences and PWAs.
Delayed more than three years (Chrome 40, November 2014 vs. Safari 11.1, April 2018, but not usable until several releases later).
- WebM and VP8/VP9
-
Royalty-free codecs and containers; free alternatives to H.264/H.265 with competitive compression and features. Lack of support forces developers to spend time and money transcoding and serving to multiple formats (in addition to multiple bitrates).
Supported only for use in WebRTC but not the usual mechanisms for media playback (<audio> and <video>). Either delayed 9 years or still not available, depending on use.
- CSS Typed Object Model
-
A high-performance interface to styling elements. A fundamental building block that enables other "Houdini" features like CSS Custom Paint.
Not available for iOS.
- CSS Containment
-
Features that enable consistently high performance in rendering UI, and a building block for new features that can dramatically improve performance on large pages and apps.
Not available for iOS.
- CSS Motion Paths
- Media Source API (a.k.a. "MSE")
-
MSE enables the MPEG-DASH video streaming protocol. Apple provides an implementation of HLS, but prevents use of alternatives. Only available on iPadOS.
- element.animate()
-
Browser support for the full Web Animations API has been rocky, with Chromium, Firefox, and Safari all completing support for the full spec in the past year.
element.animate(), a subset of the full API, has enabled developers to more easily create high-performance visual effects with a lower risk of visual stuttering in Chrome and Firefox since 2014.
- EventTarget Constructor
-
Seemingly trivial but deceptively foundational. Lets developers integrate with the browser's internal mechanism for message passing.
Delayed by nearly three years on iOS.
- Web Performance APIs
-
iOS consistently trails in providing modern APIs for measuring web performance, often by three or more years. Delayed or missing features include, but are not limited to:
- navigator.sendBeacon()
- Paint Timing (delayed two-to-four years)
- User Timing
- Resource Timing
- Performance Observers
The impact of missing Web Performance APIs is largely a question of scale: the larger the site or service one attempts to provide on the web, the more important measurement becomes.
- fetch() and Streams
-
Modern, asynchronous network APIs that dramatically improve performance in some situations.
Delayed two to four years, depending on how one counts.
Not every feature blocked or delayed on iOS is transformative, and this list omits cases that were on the bubble (e.g., the 2.5 year lag for BigInt). Taken together, the delays Apple generates, even for low-controversy APIs, make it challenging for businesses to treat the web as a serious development platform.
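Several entries above (Service Workers, getUserMedia, Pointer Lock) share a practical consequence for developers: for years, code had to treat them as optional on iOS. A sketch of the guarded-registration pattern this forces — the `registerOffline` helper and the `'/sw.js'` path are illustrative, not from the original post:

```javascript
// Progressive enhancement around an API that shipped years apart across
// engines: register a Service Worker only where the API exists, so the
// page still works (without offline support) on engines that lack it.
// `nav` is a stand-in for `navigator`; '/sw.js' is an illustrative path.
function registerOffline(nav) {
  if (!nav.serviceWorker) {
    // Engine lacks the API: fall back to online-only behaviour.
    return Promise.resolve(null);
  }
  return nav.serviceWorker.register('/sw.js');
}
```

In a page this would be `registerOffline(navigator)`; the null branch is what every site served to iOS Safari between 2014 and 2018 effectively had to take.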
The Price #
Suppose Apple had implemented WebRTC and the Gamepad API in a timely way. Who can say if the game streaming revolution now taking place might have happened sooner? It's possible that Amazon Luna, NVIDIA GeForce NOW, Google Stadia, and Microsoft xCloud could have been built years earlier.
It's also possible that APIs delivered on every other platform, but not yet available on any iOS browser (because Apple), may unlock whole categories of experiences on the web.
While dozens of features are either currently, or predicted to be, delayed multiple years by Apple, a few high-impact capabilities deserve particular mention:
- WebGL2
-
The first of two modern 3D graphics APIs currently held up by Apple, WebGL2 dramatically improves the visual fidelity of 3D applications on the web, including games. The underlying graphics capabilities from OpenGL ES 3.0 have been available in iOS since 2013 with iOS 7.0. WebGL 2 launched for other platforms on Chrome and Firefox in 2017. While WebGL2 is in development in WebKit, the anticipated end-to-end lag for these features is approaching half a decade.
- WebGPU
-
WebGPU is a successor to WebGL and WebGL2 that improves graphics performance by better aligning with next-gen low-level graphics APIs (Vulkan, Direct3D 12, and Metal).
WebGPU will also unlock richer GPU compute for the web, accelerating machine learning and media applications. WebGPU is likely to ship in Chrome in late 2021. Despite years of delay in standards bodies at the behest of Apple engineers, the timeline for WebGPU on iOS is unclear. Keen observers anticipate a minimum of several years of additional delay.
- WASM Threads and Shared Array Buffers
-
Web Assembly ("WASM") is supported by all browsers today, but extensions for "threading" (the ability to use multiple processor cores together) are missing from iOS.
Threading support enables richer and smoother 3D experiences, games, AR/VR apps, creative tools, simulations, and scientific computing. The history of this feature is complicated, but TL;DR, they are now available to sites that opt in on every platform save iOS. Worse, there's no timeline and little hope of them becoming available soon.
Combined with delays for Audio Worklets, modern graphics APIs, and Offscreen Canvas, many compelling reasons to own a device have been impossible to deliver on the web.[11]
- WebXR
-
Now in development in WebKit after years of radio silence, WebXR APIs provide Augmented Reality and Virtual Reality input and scene information to web applications. Combined with (delayed) advanced graphics APIs and threading support, WebXR enables immersive, low-friction commerce and entertainment on the web.
Support for a growing list of these features has been available in leading browsers across other platforms for several years. There is no timeline from Apple for when web developers can deliver equivalent experiences to their iOS users (in any browser).
These omissions mean web developers cannot compete with their native app counterparts on iOS in categories like gaming, shopping, and creative tools.
Developers expect some lag between the introduction of native features and corresponding browser APIs. Apple's policy against browser engine choice adds years of delays beyond the (expected) delay of design iteration, specification authoring, and browser feature development.
These delays prevent developers from reaching wealthy users with great experiences on the web. This gap, created exclusively and uniquely by Apple policy, all but forces businesses off the web and into the App Store where Apple prevents developers from reaching users with web experiences.
Just Out Of Reach #
One might imagine that five-year delays for 3D, media, and games are the worst impact of Apple's policies preventing browser engine progress. That would be mistaken.
The next tier of missing features contains relatively uncontroversial proposals from standards groups that Apple participates in or which have enough support from web developers to be "no-brainers". Each enables better quality web apps. None are expected on iOS any time soon:
- Scroll Timeline for CSS & Web Animations
-
Likely to ship in Chromium later this year, this feature enables smooth animation based on scrolling and swiping, a key interaction pattern on modern mobile devices.
No word from Apple on if or when this will be available to web developers on iOS.
content-visibility
-
CSS extensions that dramatically improve rendering performance for large pages and complex apps.
- WASM SIMD
-
Coming to Chrome next month, WASM SIMD enables high performance vector math for dramatically improved performance for many media, ML, and 3D applications.
- Form-associated Web Components
-
Reduces data loss in web forms and enables components to be easily reused across projects and sites.
- CSS Custom Paint
-
Efficiently enables new styles of drawing content on the web, removing many hard tradeoffs between visual richness, accessibility, and performance.
- Trusted Types
-
A standard version of an approach demonstrated in Google's web applications to dramatically improve security.
- CSS Container Queries
-
A top request from web developers and expected in Chrome later this year, CSS Container Queries enable content to better adapt to varying device form-factors.
- <dialog>
-
A built-in mechanism for a common UI pattern, improving performance and consistency.
- inert attribute
-
Improves focus management and accessibility.
- Browser assisted lazy-loading
-
Reduces data use and improves page load performance.
Fewer of these features are foundational (e.g. SIMD). However, even those that can be emulated in other ways still impose costs on developers and iOS users to paper over the gaps in Apple's implementation of the web platform. This tax can, without great care, slow experiences for users on other platforms as well[12].
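The "emulated in other ways" tax mentioned above is concrete: browser-assisted lazy-loading, for instance, can be papered over by shipping extra JavaScript, at a cost to every user. A sketch of the decision a library must make per image (the `lazyLoadStrategy` helper and its parameters are illustrative; `imgEl` and `env` stand in for an `<img>` element and `window`):

```javascript
// Emulating a missing feature: prefer the built-in `loading="lazy"`
// attribute, fall back to an IntersectionObserver-driven loader
// (extra code shipped and executed), and finally give up and load
// eagerly — the fidelity/performance cut described above.
function lazyLoadStrategy(imgEl, env) {
  if ('loading' in imgEl) return 'native';            // engine support: free
  if (typeof env.IntersectionObserver === 'function') {
    return 'observer';                                // emulated: costs JS
  }
  return 'eager';                                     // feature cut entirely
}
```

A real page would call `lazyLoadStrategy(document.createElement('img'), window)`; the two fallback branches are exactly the workaround code that "papers over the gaps" at users' expense.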
What Could Be #
Beyond these relatively uncontroversial (MIA) features lies an ocean of foreclosed possibility. Were Apple willing to allow the sort of honest browser competition for iOS that macOS users enjoy, features like these would enable entirely new classes of web applications. Perhaps that's the problem.
Some crucial features (shipped on every other OS) that Apple is preventing any browser from delivering to iOS today, in no particular order:
- Push Notifications
-
In an egregious display of anti-web gate-keeping, Apple has implemented for iOS neither the long-standard Web Push API nor its own, entirely proprietary push notification system for macOS Safari.
It's difficult to overstate the challenges posed by a lack of push notifications on a modern mobile platform. Developers across categories report a lack of push notifications as a deal-killer, including:
- Chat, messaging, and social apps (for obvious reasons)
- e-commerce (abandoned cart reminders, shipping updates, etc.)
- News publishers (breaking news alerts)
- Travel (itinerary updates & at-a-glance info)
- Ride sharing & delivery (status updates)
This omission has put sand in the web's tank — to the benefit of Apple's native platform, which has enjoyed push notification support for 12 years.
- PWA Install Prompts
-
Apple led the way with support for installing certain web apps to a device's homescreen as early as iOS 1.0. Since 2007, support for these features has barely improved.
Subsequently, Apple added the ability to promote the installation of native apps, but has not provided equivalent "install prompt" tools for web apps.
Meanwhile, browsers on other platforms have developed both ambient (browser provided) promotion and programmatic mechanisms to guide users in saving frequently-used web content to their devices.
Apple maintains this feature gap between native and web despite clear underlying support for the mechanism, and is unwilling to allow other iOS browsers to improve the situation[13]. Combined with policies that prevent the placement of web content in the App Store, this puts a heavy thumb on the scale for discovering content built with Apple's proprietary APIs.
- PWA App Icon Badging
-
Provides support for "unread counts", e.g. for email and chat programs. Not available for web apps added to the home screen on iOS.
- Media Session API
-
Enables web apps to play media while in the background. It also allows developers to plug into (and configure) system controls for back/forward/play/pause/etc. and provide track metadata (title, album, cover art).
Lack of this feature prevents entire classes of media applications (podcasting and music apps like Spotify) from being plausible.
In development now, but if it ships this fall (the earliest window), web media apps will have been delayed more than five years.
- Navigation Preloads
-
Dramatically improve page loading performance on sites that provide an offline experience using Service Workers.
Multiple top-10 web properties have reported to Apple that lack of this feature prevents them from deploying more resilient versions of their experiences (including building PWAs) for users on iOS.
- Offscreen Canvas
-
Improves the smoothness of 3D and media applications by moving rendering work to a separate thread. For latency-sensitive use-cases like XR and games, this feature is necessary to consistently deliver a competitive experience.
- TextEncoderStream & TextDecoderStream
-
These TransformStream types help applications efficiently deal with large amounts of binary data. They may have shipped in iOS 14.5, but the release notes are ambiguous.
- requestVideoFrameCallback()
-
Helps media apps on the web save battery when doing video processing.
- Compression Streams
-
Enable developers to compress data efficiently without downloading large amounts of code to the browser.
- Keyboard Lock API
-
An essential part of remote desktop apps and some game streaming scenarios with keyboards attached (not uncommon for iPadOS users).
- Declarative Shadow DOM
-
An addition to the Web Components system that powers applications like YouTube and Apple Music. Declarative Shadow DOM can improve loading performance and help developers provide UI for users when scripts are disabled or fail to load.
- Reporting API
-
Indispensable for improving the quality of sites and avoiding breakage due to browser deprecations. Modern versions also let developers know when applications crash, helping them diagnose and repair broken sites.
- Permissions API
-
Helps developers present better, more contextual options and prompts, reducing user annoyance and "prompt spam".
- Screen Wakelock
-
Keeps the screen from going dark or a screen saver taking over. Important for apps that present boarding passes and QR codes for scanning, as well as presentation apps (e.g. PowerPoint or Google Slides).
- Intersection Observer V2
-
Reduces ad fraud and enables one-tap-sign-up flows, improving commerce conversion rates.
- Content Indexing
-
An extension to Service Workers that enables browsers to present users with cached content when offline.
- AV1/AVIF
-
A modern, royalty-free video codec with near-universal support outside Safari.
- PWA App Shortcuts
-
Allows developers to configure "long press" or "right-click" options for web apps installed to the home screen or dock.
- Shared Workers and Broadcast Channels
-
Coordination APIs that allow applications to save memory and processing power (albeit most often in desktop and tablet form-factors).
- getInstalledRelatedApps()
-
Helps developers avoid prompting users for permissions that might be duplicative with apps already on the system. Particularly important for avoiding duplicated push notifications.
- Background Sync
-
A tool for reliably sending data — for example, chat and email messages — in the face of intermittent network connections.
- Background Fetch API
-
Allows applications to upload and download bulk media efficiently with progress indicators and controls. Important for reliably syncing playlists of music or videos for offline use, or for synchronising photos and media for sharing.
- Periodic Background Sync
-
Helps applications ensure they have fresh content to display offline in a battery and bandwidth-sensitive way.
- Web Share Target
-
Allows installed web apps to receive sharing intents via system UI, enabling chat and social media apps to help users post content more easily.
The list of missing, foundational APIs for media, social, e-commerce, 3D apps, and games is astonishing. Essential apps in the most popular categories in the App Store are impossible to attempt on the web on iOS because of feature gaps Apple has created and perpetuates.
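Because a capability like Web Push is absent in every iOS browser, apps must detect support and fall back rather than assume it. A sketch of the capability check that precedes offering a notification opt-in (the `pushSupport` helper is illustrative; `env` stands in for `window`, and the property names mirror the standard Push API):

```javascript
// Capability check before offering push-notification opt-in.
// Web Push requires both a Service Worker registration surface and
// the PushManager interface; on iOS (in any browser) the check fails,
// forcing apps back to polling or in-app-only notification.
function pushSupport(env) {
  const hasServiceWorker = !!(env.navigator && env.navigator.serviceWorker);
  const hasPushManager = typeof env.PushManager !== 'undefined';
  return hasServiceWorker && hasPushManager ? 'web-push' : 'unsupported';
}
```

A real app would call `pushSupport(window)` and, in the `'unsupported'` case, simply never show the opt-in — the silent feature cut that pushes the categories listed above toward native.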
Device APIs: The Final Frontier #
An area where browser makers disagree fervently, but where Chromium-based browsers (Chrome, Edge, Samsung Internet, Opera, UC, etc.) have forged ahead, is access to hardware devices. While not essential to most "traditional" web apps, these features are foundational for vibrant categories like education and creative music applications. iOS Safari supports none of them today, while Chromium browsers on other OSes enable these apps on the web:
- Web Bluetooth
-
Allows Bluetooth Low Energy devices to safely communicate with web apps, eliminating the need to download heavyweight applications to configure individual IoT devices.
- Web MIDI
-
Enables creative music applications on the web, including synthesisers, mixing suites, drum machines, and music recording.
- Web USB
-
Provides safe access to USB devices from the web, enabling new classes of applications in the browser from education to software development and debugging.
- Web Serial
-
Supports connections to legacy devices. Particularly important in industrial, IoT, health care, and education scenarios.
Web Serial, Web Bluetooth, and Web USB enable educational programming tools to help students learn to program physical devices, including LEGO.
Independent developer Henrik Joreteg has written at length about the frustration stemming from an inability to access these features on iOS, and has testified to the way they enable lower-cost development. The lack of web APIs on iOS isn't just a frustration for developers. It drives up the prices of goods and services, shrinking the number of organisations that can deliver them.
- Web HID
-
Enables safe connection to input devices not traditionally supported as keyboards, mice, or gamepads.
This API provides safe access to specialised features of niche hardware over a standard protocol such devices already support, without proprietary software or unsafe native binary downloads.
- Web NFC
-
Lets web apps safely read and write NFC tags, e.g. for tap-to-pay applications.
- Shape Detection
-
Unlocks platform- and OS-provided capabilities for high-performance recognition of barcodes, faces, and text in images and video.
Important in videoconferencing, commerce, and IoT setup scenarios.
- Generic Sensors API
-
A uniform API for accessing sensors standard in phones, including Gyroscopes, Proximity sensors, Device Orientation, Acceleration sensors, Gravity sensors, and Ambient Light detectors.
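Because support varies so starkly between engines, apps built on these capabilities must feature-detect before use. A minimal sketch (the `navigator` property names are the real Chromium entry points; the helper function is illustrative):

```javascript
// Feature-detect Fugu device APIs before relying on them. On iOS Safari,
// none of these are present; in Chromium-based browsers, most are.
const nav = globalThis.navigator ?? {}; // guard for non-browser runtimes

const deviceApiSupport = {
  bluetooth: "bluetooth" in nav,        // Web Bluetooth
  usb: "usb" in nav,                    // Web USB
  serial: "serial" in nav,              // Web Serial
  hid: "hid" in nav,                    // Web HID
  midi: "requestMIDIAccess" in nav,     // Web MIDI
};

// Which capabilities can this app progressively enhance with?
function availableApis(support) {
  return Object.keys(support).filter((name) => support[name]);
}
```

An app might gate a "Connect device" button on `deviceApiSupport.bluetooth`, for instance, and fall back to instructions for a native companion app where it is absent.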
Each entry in this inexhaustive list can block entire classes of applications from being credibly built on the web. The real-world impact is challenging to measure; weighing the deadweight losses seems a good angle for economists to investigate. Start-ups not attempted, services not built, and higher prices for businesses forced to develop native apps multiple times could, perhaps, be estimated.
Incongruous #
The data agree: Apple's web engine consistently trails others in both compatibility and features, resulting in a large and persistent gap with Apple's native platform.
Apple wishes us to accept that:
- It is reasonable to force iOS browsers to use its web engine, leaving iOS on the trailing edge.
- The web is a viable alternative on iOS for developers unhappy with App Store policies.
One or the other might be reasonable. Together? Hmm.
Parties interested in the health of the digital ecosystem should look past Apple's claims and focus on the differential pace of progress.
Full disclosure: for the past twelve years I have worked on Chromium at Google, spanning both the pre-fork era where potential features for Chrome and Safari were discussed within the WebKit project, as well as the post-fork epoch. Over this time I have led multiple projects to add features to the web, some of which have been opposed by Safari engineers.
Today, I lead Project Fugu, a collaboration within Chromium that is directly responsible for the majority of the device APIs mentioned above. Microsoft, Intel, Google, Samsung, and others are contributing to this work, and it is being done in the open with the hope of standardisation, but my interest in its success is large. My front-row seat allows me to state unequivocally that independent software developers are clamouring for these APIs and are ignored when they request support for them from Apple. It is personally frustrating to be unable to deliver these improvements for developers who wish to reach iOS users — which is all developers. My interests and biases are plain.
Previously, I helped lead the effort to develop Service Workers, Push Notifications, and PWAs over the frequent and pointed objections of Apple's engineers and managers. Service Worker design was started as a collaboration between Google, Mozilla, Samsung, Facebook, Microsoft, and independent developers looking to make better, more reliable web applications. Apple only joined the group after other web engines had delivered working implementations. The delay in availability of Service Workers (as well as highly-requested follow-on features like Navigation Preload) for iOS users and developers interested in serving them well, likewise, carries an undeniable personal burden of memory.
iOS is unique in disallowing the web from participating in its only app store. macOS's built-in App Store has similar anti-web terms, but macOS allows multiple app stores (e.g. Steam and the Epic Store), along with real browser choice.
Android and Windows directly include support for web apps in their default stores, allow multiple stores, and facilitate true browser choice. ↩︎
Failing adequate staffing for the Safari and WebKit teams, we must insist that Apple change iOS policy to allow competitors to safely fill the gaps that Apple's own skinflint choices have created. ↩︎
Claims that I (or other Chromium contributors) would happily see engine homogeneity could not be more wrong. ↩︎
Some commenters appear to confuse unlike hardware with differences in software. For example, one area where Apple is absolutely killing it is CPU design. The resulting differences in Speedometer scores between flagship Android and iOS devices are demonstrations of Apple's commanding lead in mobile CPUs.
A-series chips have run circles around other ARM parts for more than half a decade, largely through gobsmacking amounts of L2/L3 cache per core. Apple's restrictions on iOS browser engine choice have made it difficult to demonstrate software parity. Safari doesn't run on Android, and Apple won't allow Chromium on iOS.
Thankfully, the advent of M1 Macs makes it possible to remove hardware differences from comparisons. For more than a decade, Apple has been making tradeoffs and unique decisions in cache hierarchy, branch prediction, instruction set, and GPU design. Competing browser makers are just now starting to explore these differences and adapt their engines to take full advantage of them.
As that work progresses, the results are coming back into line with the situation on Intel: Chromium is roughly as fast as, and in many cases much faster than, WebKit.
The lesson for performance analysis is, as always, that one must double- and triple-check to ensure one is actually measuring what one hopes to. ↩︎
Ten years ago, trailing-edge browsers were largely the detritus of installations that could not (or would not) upgrade. The relentless march of auto-updates has largely removed this hurdle. The residual set of salient browser differences in 2021 is the result of some combination of:
- Market-specific differences in browser update rates; e.g., emerging markets show several months of additional lag between browser release dates and full replacement
- Increasingly rare enterprise scenarios in which legacy browsers persist (e.g., IE11)
- Differences in feature support between engines
As other effects fade away, the last one comes to the fore. Auto-updates don't do as much good as they could when the replacement for a previous version lacks features developers need. Despite outstanding OS update rates, iOS undermines the web at large by projecting the deficiencies of WebKit's leading-edge into every browser on every iOS device. ↩︎
Perhaps it goes without saying, but the propensity for Firefox/Gecko to implement features with higher quality than Safari/WebKit is a major black eye for Apple.
A scrappy Open Source project without ~$200 billion in the bank is doing what the world's most valuable computing company will not: investing in browser quality and delivering a more compatible engine across more OSes and platforms than Apple does.
This should be reason enough for Apple to allow Mozilla to ship Gecko on iOS. That they do not is all the more indefensible for the tax it places on web developers worldwide. ↩︎
The data captured by the MDN Browser Compatibility Data Repository and the caniuse database is often partial and sometimes incorrect.
Where I was aware they were not accurate — often related to releases in which features first appeared — or where they disagreed, original sources (browser release notes, contemporaneous blogs) have been consulted to build the most accurate picture of delays.
The presence of features in "developer previews", beta branches, or behind flags that users must manually flip has not been taken into account. This is reasonable for several reasons beyond the obvious one (that developers cannot count on a feature that is not fully launched, mooting any potential impact on the market):
- Some features linger for many years behind these flags (e.g. WebGL2 in Safari).
- Features not yet available on release branches may still change in their API shape, meaning that developers would be subject to expensive code churn and re-testing to support them in this state.
- Browser vendors universally discourage users from enabling experimental flags manually.
Competing engines led WebKit on dozens of features not included in this list because of the 3+ year lag cut-off.
The data shows that, as a proportion of features landed in a leading vs. trailing way, it doesn't much matter which timeframe one focuses on. The proportion of leading/lagging features in WebKit remains relatively steady. One reason to omit shorter time periods is to reduce the impact of Apple's lethargic feature release schedule.
Even when Apple's Tech Preview builds gain features at roughly the same time as Edge, Chrome, or Firefox Beta builds, those features may be delayed in reaching users (and therefore becoming available to developers) by the uniquely slow way Apple ships. Unlike leading engines, which deliver improvements every six weeks, the pace of new features arriving in Safari is tied to Apple's twice-a-year iOS point-release cadence. Prior to 2015, this lag was often as bad as a full year. Citing only features with a longer lag helps to remove the impact of such release-cadence mismatches, to the benefit of WebKit.
It is scrupulously generous to Cupertino's case that features with a gap shorter than three years were omitted. ↩︎
One effect of Apple's forced web engine monoculture is that, unlike other platforms, issues that affect WebKit impact every other browser on iOS too.
Not only do developers suffer an unwelcome uniformity of quality issues, users are impacted negatively when security issues in WebKit create OS-wide exposure to problems that can only be repaired at the rate OS updates are applied. ↩︎
The three-year delay in Apple implementing Pointer Events for iOS is in addition to delays due to Apple-generated licensing drama within the W3C regarding standardisation of various event models for touch screen input. ↩︎
During the drafting of this post, iOS 14.5 was released and with it, Safari 14.1.
In a bit of good-natured japery, Apple initially declined to provide release notes for web platform features in the update.
In the days that followed, belated documentation included a shocking revelation: against all expectations, iOS 14.5 had brought WASM Threads! The wait was over! WASM Threads for iOS were entirely unexpected due to the distance WebKit would need to close to add either true Site Isolation or new developer opt-in mechanisms to protect sensitive content from side-channel attacks on modern CPUs. Neither seemed within reach of WebKit this year.
The Web Assembly community was understandably excited and began to test the claim, but could not seem to make the feature work as hoped.
Soon after, Apple updated its docs and provided details on what was, in fact, added. Infrastructure that will eventually be critical to a WASM Threading solution in WebKit was made available, but it's a bit like an engine on a test mount: without the rest of the car, it's beautiful engineering without the ability to take folks where they want to go.
WASM Threads for iOS had seen their shadow and six more months of waiting (minimum) are predicted. At least we'll have one over-taxed CPU core to keep us warm. ↩︎
It's perverse that users and developers everywhere pay a tax for Apple's under-funding of Safari/WebKit development, in effect subsidising the world's wealthiest firm. ↩︎
Safari uses a private API not available to other iOS browsers for installing web apps to the home screen.
Users who switch their browser on iOS today are, perversely, less able to make the web a central part of their computing life. The inability of other browsers to offer web app installation also creates challenges for developers, who must account for the gap and recommend that users switch to Safari in order to install their web experience. ↩︎
Reactive Data With Modern JavaScript
A silly little PWA has been brewing over the past couple of weekends to make a desktop-compatible version of a mobile-only native app using Web Bluetooth.
I'm incredibly biased of course, but the Project Fugu 🐡 APIs are a lot of fun. There's so much neat stuff we can build in the browser now that HID, Serial, NFC, Bluetooth and all the rest are available. It has been a relaxing pandemic distraction to make time to put them through their paces, even if developing them is technically the day job too.
Needless to say, browsers that support Web Bluetooth are ultra-modern[1]. There's no point in supporting legacy browsers that don't have this capability, which means getting to use all the shiny new stuff; no polyfills or long toolchains. Fun!
In building UI with `lit-html`, a question arises about how to trigger rendering without littering code with calls to `render(...)`. Lots of folks use data store libraries that provide a callback life-cycle, but they seem verbose. I'm also not keen on the FP marketing jargon that serves to remove directness of action and make simple things more complicated than they are complex.
Being reactive to data changes without a passel of callbacks or tedious conventions is, therefore, appealing. What we need to do this is:
- An object that can be an event source for listeners
- Some way to be notified of data changes
That's really it! Before modern runtimes, we needed verbose, explicit API surface. But it's 2021, and we can do more with less now thanks to `Proxy` and a subclassable `EventTarget`.
Hewing to the "data down, events up" convention of Web Components, here's a little function my small app is using instead of a "state management" library:
// proxyFor.js
let objToProxyMap = new WeakMap();

let shouldNotProxy = (thing) => {
  return (
    (thing === null) ||
    (typeof thing !== "object") ||
    (objToProxyMap.has(thing))
  );
};

export let proxyFor = (thing,       // to wrap
                       eventTarget, // defaults to `thing`
                       currentOnly, // only watch existing properties?
                       path = []) => { // data path, advanced use only
  // If not an object, or already proxied, bail
  if (shouldNotProxy(thing)) { return thing; }
  let dataProperties = currentOnly ?
    new Set(Object.keys(thing)) : null;
  let p = new Proxy(thing, {
    get: function(obj, prop, receiver) {
      let value = Reflect.get(...arguments);
      if (objToProxyMap.has(value)) {
        return objToProxyMap.get(value);
      }
      return (
        (typeof value === "function") &&
        (prop in EventTarget.prototype)
      ) ?
        value.bind(obj) : // avoid `this` confusion
        proxyFor(value,   // handle object trees
                 eventTarget,
                 currentOnly,
                 path.concat(prop));
    },
    set: function(obj, prop, value) {
      if (!dataProperties ||
          dataProperties.has(prop)) {
        let evt = new CustomEvent("datachange",
          { bubbles: true, cancelable: true, }
        );
        // Could send through `details`, but meh
        evt.oldValue = thing[prop];
        evt.value = value;
        evt.dataPath = path.concat(prop);
        evt.property = prop;
        eventTarget.dispatchEvent(evt);
      }
      // TODO: bookkeeping to avoid potential leaks
      obj[prop] = value;
      return true;
    }
  });
  eventTarget = eventTarget || p;
  objToProxyMap.set(thing, p);
  return p;
};
One way to use this is to mix it in with a root object that is itself an `EventTarget`:
// AppObject.js
import { proxyFor } from "./proxyFor.js";
// In modern runtimes, EventTarget is subclassable
class DataObject extends EventTarget {
  aNumber = 0.0;
  aString = "";
  anArray = [];
  // ...
  constructor() {
    super();
    // Cheeky and slow, but works
    return proxyFor(this);
  }
}

export class AppObject extends DataObject {
  // We can handle inherited properties mixed in
  counter = 0;

  doStuff() {
    this.counter++;
    this.aNumber += 1.1;
    this.aString = (this.aNumber).toFixed(1) + "";
    this.anArray.length += 1; // Handled
  }
}
The app creates instances of `AppObject` and subscribes to `datachange` events to drive UI updates once per frame:
<script type="module">
  import { html, render } from "lit-html";
  import { AppObject } from "./AppObject.js";

  let app = new AppObject();

  let mainTemplate = (obj) => {
    return html`
<pre>
A Number: ${obj.aNumber}
A String: "${obj.aString}"
Array.length: ${obj.anArray.length}
Counter: ${obj.counter}
</pre>
<button @click=${() => { obj.counter++; }}>+</button>
<button @click=${() => { obj.counter--; }}>-</button>
`;
  };

  // Debounce to once per rAF
  let updateUI = (() => {
    let uiUpdateId;
    return function(obj, tmpl, node, evt) {
      if (!node) { return; }
      if (uiUpdateId) {
        cancelAnimationFrame(uiUpdateId);
        uiUpdateId = null;
      }
      uiUpdateId = requestAnimationFrame(() => {
        // Logs/renders once per frame
        console.log(evt.type, Date.now());
        render(tmpl(obj), node);
      });
    };
  })();

  // Wire the template to be re-rendered from data
  let main = document.querySelector("main");
  app.addEventListener("datachange", (evt) => {
    // Not debounced; called in quick succession by
    // setters in `doStuff`
    updateUI(app, mainTemplate, main, evt);
  });

  setInterval(app.doStuff.bind(app), 1000);
</script>
<!-- ... -->
<!-- ... -->
This implementation debounces rendering to once per `requestAnimationFrame()`, which can be extended or modified however one likes.
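The same coalescing can also be done without `requestAnimationFrame`. Here is a sketch (not from the post) of one possible modification: a dirty-flag batcher that defers the render to an explicit flush, which the host can call from a rAF loop, a timer, or anywhere else:

```javascript
// Batch many invalidations into a single render, flushed on whatever
// schedule the host chooses (rAF, setInterval, visibility changes, etc.).
function makeBatcher(renderFn) {
  let dirty = false;
  return {
    invalidate() { dirty = true; }, // cheap; call on every datachange
    flush(...args) {                // call once per frame/tick
      if (!dirty) { return; }
      dirty = false;
      renderFn(...args);
    },
  };
}
```

Wiring `invalidate` to `datachange` and calling `flush` from a rAF loop reproduces the behaviour above while keeping scheduling policy out of the event handler.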
There are other caveats to this approach, some of which veritably leap off the page:
- There are lurking memory leaks which a production app would want to fix
- Private class properties and methods don't work, either on `DataObject` or on subclasses. Caveat emptor!
- If you want something like "middleware" from other (less idiomatic) systems, you'll need to extend `proxyFor` ever so slightly
- The performance of getters on objects wrapped this way is likely not great compared to "regular" objects, but the wrapping is lazy
- The `dataPath` and own-property options are a bit of a flourish; a real system could easily do without them to save memory
- There are many edge cases the proxies don't catch
In general, we win idiomatic object syntax for most operations at the expense of breaking private properties, but for a toy, it's a fun start.
Here's another way to use this, routing updates on a data object through a DOM node:
<!-- A DOM-driven reactive incrementer -->
<script type="module">
  import { proxyFor } from "./proxyFor.js";

  let byId = window.byId =
    document.getElementById.bind(document);

  let count = byId("count");
  count.data = proxyFor({ value: 0 }, count);

  // Other parts of the document can listen for this
  count.addEventListener("datachange", (evt) => {
    count.innerText = evt.value;
  });
</script>
<h3>
  Current count:
  <span id="count">0</span>
</h3>
<button id="increment"
        onclick="byId('count').data.value++">
  increment
</button>
<button id="decrement"
        onclick="byId('count').data.value--">
  decrement
</button>
You can try a silly little test page here, a DOM incrementer, or check out the Svelte Store API implemented with `proxyFor` underneath.
Sorry iOS users, there are no modern browsers available for your platform because Apple is against meaningful browser choice, no matter what colour lipstick vendors are allowed to put on WebKit. ↩︎
The Mobile Performance Inequality Gap, 2021
TL;DR: A lot has changed since 2017 when we last estimated a global baseline resource budget of 130-170KiB per-page. Thanks to progress in networks and browsers (but not devices), a more generous global budget cap has emerged for sites constructed the "modern" way. We can now afford ~100KiB of HTML/CSS/fonts and ~300-350KiB of JS (gzipped). This rule-of-thumb limit should hold for at least a year or two. As always, the devil's in the footnotes, but the top-line is unchanged: when we construct the digital world to the limits of the best devices, we build a less usable one for 80+% of the world's users.
Way back in 2016, I tried to raise the alarm about the causes and effects of terrible performance for most users of sites built with popular frontend tools. The negative effects were particularly pronounced in the fastest-growing device segment: low-end to mid-range Android phones.
Bad individual experiences can colour expectations of the entire ecosystem. Your company's poor site performance can manifest as lower engagement, higher bounce rates, or a reduction in conversions. While this local story is important, it isn't the whole picture. If a large enough proportion of sites behave poorly, performance hysteresis may colour user views of all web experiences.
Unless a site is launched from the home screen as a PWA, sites are co-mingled. Pages are experienced as a series of taps, flowing effortlessly across sites; a river of links. A bad experience in the flow is a bad experience of the flow, with constituent parts blending together.[1]
If tapping links tends to feel bad...why keep tapping? It's not as though slow websites are the only way to access information. Plenty of native apps are happy to aggregate content and serve it up in a reliably fast package, given half a chance. The consistency of those walled gardens is a large part of what the mobile web is up against — and losing to.
Poor performance of sites that link to and from yours negatively impacts engagement on your site, even if it is consistently snappy. Live by the link, die by the link.
The harmful business impact of poor performance is constantly re-validated. Big decreases in performance predictably lead to (somewhat lower) decreases in user engagement and conversion. The scale of the effect can be deeply situational or hard to suss out without solid metrics, but it's there.
Variance contributes another layer of concern; high variability in responsiveness may create effects that perceptually dominate averages, or even medians. If 9 taps in 10 respond in 100ms, but every 10th takes a full second, what happens to user confidence and engagement? These deep-wetware effects and their cross-origin implications mean that your site's success is, partially, a function of the health of the commons.
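A toy calculation makes the point (the numbers match the hypothetical tap distribution above):

```javascript
// 9 taps respond in 100ms; every 10th takes a full second.
const taps = [100, 100, 100, 100, 100, 100, 100, 100, 100, 1000];

const mean = taps.reduce((sum, t) => sum + t, 0) / taps.length; // 190ms
const sorted = [...taps].sort((a, b) => a - b);
const median = (sorted[4] + sorted[5]) / 2; // 100ms: "everything is fine"
const worst = sorted[sorted.length - 1];    // 1000ms: what users remember
```

The median says all is well and the mean barely blinks, yet one tap in ten is 10x slower than the rest, and that outlier is plausibly what shapes user confidence.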
From this perspective, it's helpful to consider what it might take to set baselines that can help ensure minimum quality across link taps, so in 2017 I followed up with a post sketching a rubric for thinking about a global baseline.
The punchline:
The default global baseline is a ~$200 Android device on a 400Kbps link with a 400ms round-trip-time ("RTT"). This translates into a budget of ~130-170KB of critical-path resources, depending on composition — the more JS you include, the smaller the bundle must be.
A $200USD device at the time featured 4-8 (slow, in-order, low-cache) cores, ~2GiB of RAM, and pokey MLC NAND flash storage. The Moto G4, for example.
The 2017 baseline represented a conservative, but evidence-driven, interpretation of the best information I could get regarding network performance, along with trend lines regarding device shipment volumes and price points.[2] Getting accurate global information that isn't artificially reduced to averages remains an ongoing challenge. Performance work often focuses on high percentile users (the slowest), after all.
Since then, the metrics conversation has moved forward significantly, culminating in Core Web Vitals, reported via the Chrome User Experience Report to reflect the real-world experiences of users.
Devices and networks have evolved too:
An update on mobile CPUs and the Performance Inequality Gap:
Mid-tier Android devices (~$300) now get the single-core performance of a 2014 iPhone and the multi-core perf of a 2015 iPhone.
The cheapest (high volume) Androids perform like 2012/2013 iPhones, respectively. twitter.com/slightlylate/status/1139684093602349056
Meanwhile, developer behaviour offers little hope:
A silver lining to this dark cloud is that mobile JavaScript payload growth paused in 2020. How soon will low-end and middle-tier phones be able to handle such chonky payloads? If the median site continued to send 3x the recommended amount of script, when would the web start to feel usable on most of the world's devices?
Here begins our 2021 adventure.
Hard Reset #
To update our global baseline from 2017, we want to update our priors on a few dimensions:
- The evolved device landscape
- Modern network performance and availability
- Advances in browser content processing
Content Is Dead, Long Live Content #
Good news has consistently come from the steady pace of browser progress. The single largest improvements visible in traces come from improved parsing and off-thread compilation of JavaScript. This step-change, along with improvements to streaming compilation, has helped to ensure that users are less likely to notice the unreasonably-sized JS payloads that "modern" toolchains generate more often than not. Better use of more cores (moving compilation off-thread) has given sites that provide HTML and CSS content a fighting chance of remaining responsive, even when saddled with staggering JS burdens.
A steady drumbeat of improvements have contributed to reduced runtime costs, too, though I fear they most often create an induced demand effect.
The residual main-thread compilation, allocation, and script-driven DOM/Layout tasks pose a challenge for delivering a good user experience.[3] As we'll see below, CPUs are not improving fast enough to cope with frontend engineers' rosy resource assumptions. If there is unambiguously good news on the tooling front, it's that multiple popular tools now include options to avoid sending first-party JS in the first place (Next.js, Gatsby), though the JS community remains in stubborn denial about the costs of client-side script. Hopefully, toolchain progress of this sort can provide a more accessible bridge to a reduced-script-emissions world.
4G Is A Miracle, 5G Is A Mirage #
The 2017 baseline post included a small model for thinking about how various factors of a page's construction influence the likelihood of hitting a 5-second first-load goal.
The hard floor of that model (~1.6s) came from the contributions of DNS, TCP/IP, and TLS connection setup over a then-reasonable 3G network baseline, leaving only 3400ms to work with, fighting Nagle and weak CPUs the whole way. Adding just one extra connection to a CDN for a critical path resource could sink the entire enterprise. Talk about a hard target.
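The arithmetic behind that floor is simple enough to sketch. All constants below are the post's stated 2017 baseline assumptions (and TLS 1.2's two round trips are assumed), not measurements:

```javascript
// The 2017 baseline loading model, as arithmetic.
const RTT_MS = 400;    // baseline round-trip time
const LINK_KBPS = 400; // baseline downlink, kilobits/second
const GOAL_MS = 5000;  // first-load target

// DNS (1 RTT) + TCP handshake (1 RTT) + TLS negotiation (2 RTTs)
const setupMs = 4 * RTT_MS;           // 1600ms: the "hard floor"
const transferMs = GOAL_MS - setupMs; // 3400ms left for everything else
const bytesPerMs = (LINK_KBPS * 1000) / 8 / 1000;  // 50 bytes per millisecond
const budgetKB = (transferMs * bytesPerMs) / 1024; // ~166KiB ceiling,
                                                   // before CPU/parse costs
```

That ~166KiB ceiling is where the 130-170KB critical-path budget comes from: the more of it that is JavaScript (which must also be parsed and executed on a weak CPU), the further below the ceiling a page must stay.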
Four years later, has anything changed? I'm happy to report that it has. Not as much as we'd like, of course, but the worldwide baseline has changed enormously. How? Why?
India has been the epicentre of smartphone growth in recent years, owing to the sheer size of its market and an accelerating shift away from feature phones which made up the majority of Indian mobile devices until as late as 2019. Continued projections of double-digit market growth for smartphones, on top of a doubling of shipments in the past 5 years, paint a vivid picture.
Key to this growth is the effect of Reliance Jio's entry into the carrier market. Before Jio's disruptive pricing and aggressive rollout, data services in India were among the most expensive in the world relative to income and heavily reliant on 3G outside wealthier "tier 1" cities. Worse, 3G service often performed like 2G in other markets, thanks to under-provisioning and throttling by incumbent carriers.
In 2016, Jio swept over the subcontinent like a monsoon dropping a torrent of 4G infrastructure and free data rather than rain.
Competing carriers responded instantly, dropping prices aggressively, leading to reductions in per-packet prices approaching 95%.
Jio's blanket 4G rollout blitzkrieg shocked incumbents with a superior product at an unheard-of price, forcing the entire market to improve data rates and coverage. India became a 4G-centric market sometime in 2018.
If there's a bright spot in our construction of a 2021 baseline for performance, this is it. We can finally upgrade our assumptions about the network to assume slow-ish 4G almost everywhere (pdf).
5G looks set to continue a bumpy rollout for the next half-decade. Carriers make different frequency band choices in different geographies, and 5G performance is heavily sensitive to mast density, which will add confusion for years to come. Suffice to say, 5G isn't here yet, even if wealthy users in a few geographies come to think of it as "normal" far ahead of worldwide deployment[4].
Hardware Past As Performance Prologue #
Whatever progress runtimes and networks have made in the past half-decade, browsers are stubbornly situated in the devices carried by real-world users, and the single most important thing to understand about the landscape of devices your sites will run on is that they are not new phones.
This makes some intuitive sense: smartphones are not in their first year (and haven't been for more than a dozen years), and most users do not replace their devices every year. Most smartphone sales today are replacements (that is, to users who have previously owned a smartphone), and the longevity of devices continues to rise.
The worldwide device replacement average is now 33 months. In markets near smartphone saturation, that means we can expect the median device to be nearly 18 months old. Newer devices continue to be faster for the same dollar spent. Assuming average selling prices (ASPs) remain in a narrow year-on-year band in most geographies[5], a good way to think of the "average phone" is as the average device sold 18 months ago. ASPs, however, have started to rise, making my prognostications from 2017 only technically correct:
The true median device from 2016 sold at about ~$200 unlocked. This year's median device is even cheaper, but their performance is roughly equivalent. Expect continued performance stasis at the median for the next few years. This is part of the reason I suggested the Moto G4 last year and recommend it or the Moto G5 Plus this year.
Median devices continue to be different from averages, but we can squint a little as we're abstracting over multi-year cohorts. The worldwide ASP 18 months ago was ~$300USD, so the average performance in the deployed fleet can be represented by a $300 device from mid-2019. The Moto G7 very much looks the part.
Compared to devices wealthy developers carry, the performance is night and (blinding) day. However shocking a 6x difference in single-thread CPU performance might be, it's nothing compared to where we should be setting the global baseline: the P75+ device. Using our little mental model for device age and replacement, we can naively estimate what the 75th percentile (or higher) device could be in terms of device price + age, either by tracking devices at half the ASP at half the replacement age, or by looking at ASP-priced devices 3/4 of the way through the replacement cycle. Today, either method returns a similar answer.
This is inexact for dozens of reasons, not least of all markets with lower ASPs not yet achieving smartphone saturation. Using a global ASP as a benchmark can further mislead thanks to the distorting effect of ultra-high-end prices rising while shipment volumes stagnate. It's hard to know which way these effects cut when combined, so we're going to make a further guess: we'll take half the average price and note how wrong this likely is[6].
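Put as arithmetic, the guess works out as follows. Every input is an estimate from the text, so treat the output as order-of-magnitude at best:

```javascript
// Sketch of the P75 device estimate; all inputs are estimates, not data.
const aspUSD = 300;           // worldwide average selling price, ~18 months ago
const replacementMonths = 33; // average device replacement cycle

const averageAgeMonths = Math.round(replacementMonths / 2); // ~17 months
const p75PriceUSD = aspUSD / 2; // "half the average price" heuristic: $150
```

That $150-in-2019 figure is what motivates the device pick below.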
So what did $150USD fetch in 2019?
Say hello to the Moto E6! This 2GiB RAM, Android 9 stalwart features the all-too classic lines of a Quad-core A53 (1.4GHz, small mercies) CPU, tastefully presented in a charming 5.5" package.
If those specs sound eerily familiar, it's perhaps because they're identical to 2016's $200USD Moto G4, all the way down to the 2011-vintage 28nm SoC process node used to fab the chip's anemic, 2012-vintage A53 cores. There are differences, of course, but not where it counts.
You might recall the Moto G4 as a baseline recommendation from 2016 for forward-looking performance work. It's also the model we sent several dozen of to Pat Meenan — devices that power webpagetest.org/easy to this day. There was no way to know that the already-ageing duo of in-order, near-zero-cache A53 cores and a too-hot-to-frequency-scale 28nm process would continue to haunt us five years on. Today's P75 devices tell the story of the yawning Performance Inequality Gap: performance for those at the top end continues to accelerate away from the have-nots, who are perpetually stuck with 2014's hardware.
The good news is that chip progress has begun to move, if glacially, in the past couple of years. 28nm chips are being supplanted at the sub-$200USD price-point by 14nm and even 11nm parts[7]. Specs are finally improving quickly at the bottom of the market now, with new ~$135USD devices sporting CPUs that should give last year's mid-range a run for its money.
Mind The Gap #
Regardless, the overall story for hardware progress remains grim, particularly when we recall how long device replacement cycles are:
Recall that single-core performance most directly translates into speed on the web. Android-ecosystem SoC performance is, in a word, disastrous.
How bad is it?
We can think about each category in terms of years behind contemporary iPhone releases:
- 2020's high-end Androids sport the single-core performance of an iPhone 8, a phone released in Q3'17
- mid-priced Androids were slightly faster than 2014's iPhone 6
- low-end Androids have finally caught up to the iPhone 5 from 2012
You're reading that right: single core Android performance at the low end is both shockingly bad and dispiritingly stagnant.
The multi-core performance shows the same basic story: iOS's high-end and the most expensive Androids are pulling away from the volume-shipment pack. The fastest Androids predictably remain 18-24 months behind, owing to cheapskate choices about cache sizing by Qualcomm, Samsung Semiconductor, MediaTek, and the rest. Don't pay a lot for an Android-shaped muffler.
Chip design choices and silicon economics are the defining feature of the still-growing Performance Inequality Gap.
Things continue to get better and better for the wealthy, leaving the rest behind. The more we construct a digital world to the limits of the best devices, the worse the experience we build, on average, for those who cannot afford iPhones or $800 Samsung flagships.
It is perhaps predictable that, instead of presenting a bulwark against stratification, technology outcomes have tracked society's growing inequality. A yawning chasm of disparities is playing out in our phones at the same time it has come to shape our economic and political lives. It's hard to escape thinking they're connected.
Developers, particularly in Silicon Valley firms, are definitionally wealthy and enfranchised by world-historical standards. Like upper classes of yore, comfort ("DX") comes with courtiers happy to declare how important comfort must surely be. It's bunk, or at least most of it is.
As frontenders, our task is to make services that work well for all, not just the wealthy. If improvements in our tools or our comfort actually deliver improvements in that direction, so much the better. But we must never forget that measurable improvement for users is the yardstick.
Instead of measurement, we seem to suffer a proliferation of postulates about how each new increment of comfort must surely result in better user experiences. But the postulates are not tied to robust evidence. Instead, experience teaches that it's the process of taking care to attend to those least-well-off that changes outcomes. Trickle-down user experience from developer-experience is, in 2021, as fully falsified as the Laffer Curve. There's no durable substitute for compassion.
A 2021 Global Baseline #
It's in that spirit that I find it important to build to a strawperson baseline device and network target. Such a benchmark serves as a stand-in until one gets situated data about a site or service's users, from whom we can derive informed conclusions about user needs and appropriate resource budgets.
A global baseline is doubly important for generic frameworks and libraries, as they will be included in projects targeted at global-baseline users, whether or not developers have that in mind. Tools that cannot fit comfortably within, say, 10% of our baseline budget should be labelled as desktop-only or replaced in our toolchains. Many popular tools of the past 5 years have blithely punctured these limits relative to the 2017 baseline, and the results have been predictably poor.
Given all the data we've slogged through, we can assemble a sketch of the browsers, devices, and networks today's P50 and P75 users will access sites from. Keep in mind that this baseline might be too optimistic, depending on your target market. Trust but verify.
OpenSignal's global report on connection speeds (pdf) suggests that WebPageTest's default 4G configuration (9Mbps w/ 170ms RTT) is a reasonable stand-in for the P75 network link. WPT's LTE configuration (12Mbps w/ 70ms RTT) may actually be too conservative for a P50 estimate today; something closer to 25Mbps seems to approach the worldwide median. Network progress has been astonishing, particularly channel capacity (bandwidth). Sadly, data on latency is harder to get, even from Google's perch, so progress there is somewhat more difficult to judge.
As for devices, the P75 situation remains dire and basically unchanged from the baseline I suggested more than five years ago. Slow A53 cores — a design first introduced nearly a decade ago — continue to dominate the landscape, differing only slightly in frequency scaling (through improved process nodes) and GPU pairing. The good news is that this will change rapidly in the next few years. The median new device is already benefiting; modern designs and fab processes are powering a ramp-up in performance for a $300USD device, particularly in multi-core workloads. But the hardware future is not evenly distributed, and web workloads aren't heavily parallel.
Astonishingly, I believe this means that for at least the next year we should consider the venerable Moto G4 to still be our baseline. If you can't buy one new, the 2020-vintage Moto E6 does a stirring rendition of the classic; it's basically the same phone, after all. The Moto E7 Plus is a preview of better days to come, and they can't arrive soon enough.
"All models are wrong, but some are useful." #
Putting it all together, where does that leave us? Back-of-the-napkin, and accepting our previous targets of five seconds for first load and two seconds for second load, what can we afford?
Plugging the new P75 numbers in, the impact of network improvements are dramatic. Initial connection setup times are cut in half, dropping from 1600ms to 700ms, which frees up significant headroom to transmit more data in the same time window. The channel capacity increase to 9Mbps is enough to transmit more than 4MiB (megabytes) of content over a single connection in our 5 second window (1 Byte == 8 bits, so 9Mbps is just over ~1MBps). Of course, if most of this content is JavaScript, it must be compiled and run, shrinking our content window back down significantly. Using a single connection (thanks to the magic of HTTP/2), a site composed primarily of JavaScript loaded perfectly can, in 2021, weigh nearly 600KiB and still hit the first-load bar.
That's a very fine point to balance on, though. A single additional TCP/TLS connection setup in the critical path reduces the amount by 100KiB (as do subsequent critical-path network handshakes). Serialized requests for data, high TTFBs, and ever-present font serving issues make these upper limits far higher than what a site shooting for consistently good load times should be aiming for. In practice, you can't actually afford 600KiB of content if your application is built in the increasingly popular "single page app" style.
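To make the napkin math above concrete, here's a minimal sketch of the transfer-budget arithmetic. The constants (a 5000ms window, ~700ms per connection setup, a 9Mbps link) come from the discussion above; the function name and shape are mine, and the model deliberately ignores CPU time, TTFB, and TCP slow-start, all of which shrink the real budget further:

```javascript
// Rough first-load transfer budget: time left after connection
// setup, multiplied by channel capacity. A deliberate simplification:
// parse/compile time, TTFB, and slow-start all shrink this in practice.
const transferBudgetBytes = (windowMs, connections, setupMsPerConn, mbps) => {
  const transferMs = windowMs - connections * setupMsPerConn;
  const bytesPerMs = (mbps * 1_000_000) / 8 / 1000; // Mbps -> bytes/ms
  return Math.max(0, transferMs * bytesPerMs);
};

const MiB = 1024 * 1024;

// One connection, 5s window, 9Mbps: more than 4MiB on the wire.
const oneConn = transferBudgetBytes(5000, 1, 700, 9);

// Each extra TCP/TLS setup in the critical path costs real budget.
const twoConn = transferBudgetBytes(5000, 2, 700, 9);

console.log((oneConn / MiB).toFixed(2), (twoConn / MiB).toFixed(2));
```

Plugging in different RTT-driven setup costs or bandwidths makes it easy to see why connection count and critical-path serialization dominate the outcome.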
As networks have improved, client-side CPU time now dominates script download, adding further variability. Not all script parses and runs in the same amount of time for the same size. The contribution of connection setup also looms large.
If you want to play with the factors and see how they contribute, you can try this slightly updated version of 2017's calculator (launch a larger version in a new tab here):
This model of browser behaviour, network impacts, and device processing time is, of course, wrong for your site. It could be wrong in ways big and small. Perhaps your content is purely HTML, CSS, and images which would allow for a much higher budget. Or perhaps your JS isn't visually critical path, but still creates long delays because of delayed fetching. And nobody, realistically, can predict how much main-thread work a given amount of JS will cause. So it's an educated guess with some padding built in to account for worst-case content-construction (which is more common than you or I would like to believe).
Conservatively then, assuming at least 2 connections need to be set up (burning ~1400 of our 5000ms), and that script resources are in the critical path, the new global baseline leaves space for ~100KiB (gzipped) of HTML/CSS/fonts and 300-350KiB of JavaScript on the wire (compressed). Critical path images for LCP, if any, need to subtract their transfer size from one of these buckets, with trades against JS bundle size potentially increasing total download budget, but the details matter a great deal. For "modern" pages, half a megabyte is a decent hard budget.
Is that a lot? Well, it's 2-4x our previous budget, and as low-end CPUs finally begin to get faster over the next few years, we can expect it to loosen further for well-served content.
Sadly, most sites aren't perfectly served or structured, and most mobile web pages send more than this amount of JS. But there's light at the end of the tunnel: if we can just hold back the growth of JS payloads for another year or two — or reverse the trend slightly — we might achieve a usable web for the majority of the world's users by the middle of the decade.
Getting there involves no small amount of class traitorship; the frontend community will need to value the commons over personal comfort for a little while longer to ease our ecosystem back toward health. The past 6 years of consulting with partner teams has felt like a dark remake of Groundhog Day, with a constant parade of sites failing from the get-go thanks to Framework + Bundler + SPA architectures that are mismatched to the tasks and markets at hand. Time (and chips) can heal these wounds if we only let it. We only need to hold the line on script bloat for a few years for devices and networks to overtake the extreme, unconscionable excesses of the 2010s.
Baselines And Budgets Remain Vital #
I mentioned near the start of this too-long piece that metrics better than Time-to-Interactive have been developed since 2017. The professionalism with which the new Core Web Vitals (CWV) have been developed, tested, and iterated on inspires real confidence. CWV's constituent metrics and limits work to capture important aspects of page experiences as users perceive them on real devices and networks.
As RUM (field) metrics rather than bench tests, they represent ground truth for a site, and are reported in aggregate by the Chrome User Experience Report (CrUX) pipeline. Getting user-population level data about the performance of sites without custom tooling is a 2016-era dream come true. Best of all, the metrics and limits can continue to evolve in the future as we gain confidence that we can accurately measure other meaningful components of a user's journey.
These field metrics, however valuable, don't yet give us enough information to guide global first-best-guess making when developing new sites. Continuing to set performance budgets is necessary for teams in development, not least of all because conformance can be automated in CI/CD flows. CrUX data collection and first-party RUM analytics of these metrics require live traffic, meaning results can be predicted but only verified once deployed.
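One way to automate budget conformance in CI, as suggested above, is Lighthouse's budget file format; a sketch encoding roughly the limits derived earlier (resource sizes are in KB, and the path pattern and exact numbers are illustrative, not prescriptive):

```json
[
  {
    "path": "/*",
    "timings": [
      { "metric": "interactive", "budget": 5000 }
    ],
    "resourceSizes": [
      { "resourceType": "script", "budget": 350 },
      { "resourceType": "total", "budget": 500 }
    ]
  }
]
```

A file like this can fail a build before a regression ships, complementing (rather than replacing) post-deploy RUM verification.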
Both budgets and RUM analytics will continue to be powerful tools in piercing the high-performance privilege bubble that surrounds today's frontend discourse. Grounding our choices against a baseline and measuring how it's going for real users are the only proven approaches I know that reliably help teams take the first step toward frontend's first and highest calling: a web that's truly for everyone.
One subtle effect of browser attempts to avoid showing users blank white screens is that optimisations meant to hide latency may lead users to misattribute which sites are slow.
Consider a user tapping a link away from your site to a slow, cross-origin site. If the server takes a long time to respond, the referring page will be displayed for as long as it takes for the destination server to return with minimally parseable HTML content. Similarly, if a user navigates to your site, but the previous site's `onunload` handlers take a long time to execute (an all too common issue), no matter how fast your site responds, the poor performance of the referring page will make your site appear to load slowly. Even without hysteresis effects, the performance of referrers and destinations can hurt your brand. ↩︎
A constant challenge is the tech world's blindness to devices carried by most users. Tech press coverage is over-fitted to the high-end phones sent for review (to say nothing of the Apple marketing juggernaut), which doesn't track with shipment volumes.
Smartphone makers (who are often advertisers) want attention for the high-end segment because it's where the profits are, and the tech press rarely pushes back. Column inches covering the high-end remain heavily out of proportion to sales by price-point. In a world with better balance, most articles would be dedicated to announcements of mid-range devices from LG and Samsung.
International tech coverage would reflect that, while Apple finally pierced 50% US share for the first time in a decade at the end of 2020, iOS smartphone shipments consistently max out at ~20% worldwide, toiling near 15% most quarters.
There's infinitely more to say about these biases and how they manifest. It's constantly grating that Qualcomm, Samsung Semiconductor, MediaTek, and other Android SoC vendors get a pass (even at the high end) on their terrible price/performance.[7:1]
Were we to suffer an outbreak of tenacious journalism in our tech press, coverage of the devastatingly bad CPUs or the galling OS update rates amongst Android OEMs might be among the early signs. ↩︎
In the worst cases, off-thread compilation has no positive impact on client-side rendered SPAs. If an experience is blocked on a script to begin requesting data or generating markup from it, additional cores can't help. Some pages can be repaired with a sprinkling of `<link rel=preload>` directives.
More frequently, "app"-centric architectures promulgated by popular framework starter kits leave teams with immense piles of JS to whittle down. This is expensive, gruelling work in the Webpack mines; not at all what the "developer experience" folks signed up for.
The React community, in particular, has been poorly served by a lack of guidance, tooling, and support from Facebook. Given the scale of the disaster that has unfolded there, it's shocking that in 2021 (and after recent re-development of the documentation site) there remains no guidance about bundle sizes or performance budgets in React's performance documentation.
Facebook, meanwhile, continues to staff a performance team, sets metrics and benchmarks, and generally exhibits the sort of discipline about bundle sizes and site performance that allows teams to succeed with any stack. Enforced latency and performance budgets tend to have that effect.
FB, in short, knows better. ↩︎
As ever, take the money and introductions VCs offer, but interpret their musings about what "everyone knows" and "how things are" like you might Henry Kissinger's geopolitical advice: toxic, troubling — and above all — unfailingly self-serving. ↩︎
Device Average Selling Prices (ASPs) are both directionally helpful and deeply misleading.
Global ASPs suffer all the usual issues of averages. A slight increase or decrease in the global ASP can mask extreme variance between geographies, major disparities in volumes, and shifts in prices at different deciles.
We see this misfeature in the wild through press treatment of large headline increases in ASP and iOS share when Android device sales fall even slightly. Instead of digging into the shape of the distribution and relative time series inputs, the press regularly writes this up as the sudden success of some new Apple feature. That's always possible, but a slightly stylised version of these facts more easily passes Occam's test: wealthy users are less sensitive in their technology refresh timing decisions.
This makes intuitive sense: in a recession, replacing consumer electronics is a relative luxury, leading the most price-sensitive buyers — overwhelmingly Android users — to push off replacing otherwise functional devices.
Voila! A slight dip in the overwhelming volume of Android purchases appears to bolster both ASP and iOS share since iOS devices are almost universally more expensive. ASP might be a fine diagnostic metric in limited cases, but it always pays to develop a sense for the texture of your data! ↩︎
While the model of median device representation presented here is hand-wavey, I've cross-referenced the results with Google's internal analysis and the model holds up OK. Native app and SDK developers with global reach can use RAM, CPU speeds, and screen DPI to bucket devices into high/medium/low tiers, and RAM has historically been a key signifier of device performance overall. Indeed, some large sites I've worked with have built device model databases to bucket traffic based on system RAM to decide which versions/features to serve.
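A hypothetical sketch of that sort of RAM-based bucketing follows. The thresholds, function names, and bundle names here are illustrative guesses of mine, not values from any real device database; in the browser, Chrome's `navigator.deviceMemory` can supply the RAM signal, while server-side systems would consult a device-model lookup:

```javascript
// Bucket devices into serving tiers by reported system RAM (GiB).
// Thresholds are illustrative: tune them against your own RUM data.
const tierForRAM = (ramGiB) => {
  if (ramGiB <= 2) return "low";  // e.g. 2GiB Moto E6-class devices
  if (ramGiB <= 4) return "mid";
  return "high";
};

// Decide which bundle to serve per tier (names are hypothetical).
const bundleFor = (tier) =>
  ({ low: "core.js", mid: "core+enhancements.js", high: "full.js" })[tier];

console.log(tierForRAM(2), bundleFor(tierForRAM(2)));
```

The point isn't these exact cut-offs; it's that a cheap, coarse signal can keep heavyweight experiences away from devices that can't run them well.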
Our ASP + age model could be confounded by many factors: ASPs deviating more than expected year-on-year, non-linearities in price/performance (e.g., due to a global chip shortage), or sudden changes in device replacement rates. Assuming these factors hold roughly steady, so will the model's utility. Always gut-check the results of a model! Caveat emptor.
In our relatively steady-state, 90+% of active smartphones were sold in the last 4 years. This makes some intuitive sense: batteries degrade, screens crack, and disks fill up. Replacement happens.
If our napkin-back modelling has any bias, it's slightly too generous about device prices (too high) and age (too young) given the outsized historical influence of wealthier markets achieving smartphone saturation ahead of emerging ones. Given that a new baseline is being set in an environment where we know the bottom-end will start to rise in terms of performance over the next few years (instead of staying stagnant), this bias, while hard to quantify, doesn't seem deeply problematic. ↩︎
While process nodes are starting to move forward in a hopeful way, the overall design tradeoffs of major Android-ecosystem SoC vendors are not. Recently updated designs for budget parts finally get out-of-order dispatch and better parallelism.
Sadly, the feeble caches they're mated to ensure whatever gains are unlocked by better microarchitectures, plus higher frequencies, go largely to waste. It's not much good to clock a core faster or give it more ability to dispatch ops if it's only spinning up to a high power state to stall on main memory, 100s of cycles away.
Apple consistently wins benchmarks on both perf and power by throwing die space at the ARBs and caches to feed their hungry, hungry cores (among other improvements to core design). Any vendor with an ARM Architectural License can play this game, but either Qualcomm et al. are too stingy to fork out $200K (or $800K?) for a full license to customise the core of the thing they market, or they don't want to spend more to fund the extra design time or increase mm^2 of die space (both of which might cut into margins).
Whatever the reason, these designs are (literally) hot garbage, dollar-for-dollar, compared to the price/performance of equivalent-generation Apple designs. It's a trickle-down digital divide, on a loop. The folks who suffer are end-users who can't access services and information they need quickly because developers in the privilege bubble remain insulated by wealth from all of these effects, oblivious to the ways that a market failure several layers down the stack keeps them ignorant of their malign impacts on users. ↩︎ ↩︎
Election Season 2020, W3C TAG Edition
Update: Jeffrey's own post discussing his run is now up and I recommend reading it if you're in a position to cast a vote in the election.
Two years ago, as I announced that I wouldn't be standing for a fourth term on the W3C's Technical Architecture Group, I wrote that:
Having achieved much of what I hoped when running for the TAG six years ago, it's time for fresh perspectives.
There was no better replacement I could hope for than Alice Boxhall. The W3C membership agreed and elected her to a two year term. Her independence, technical rigour, and dedication to putting users and their needs first have served the TAG and the broader web community exceedingly well since. Sadly, she is not seeking re-election. The TAG will be poorer without her dedication, no-nonsense approach, and depth of knowledge.
TAG service is difficult, time consuming, and requires incredible focus to engage constructively in the controversies the TAG is requested to provide guidance about. The technical breadth required creates a sort of whiplash effect as TAG members consider low-level semantics and high-level design issues across nearly the entire web platform. That anyone serves more than one term is a minor miracle!
While I'm sad Alice isn't running again, my colleague Jeffrey Yasskin has thrown his hat into the ring in one of the most contested TAG elections of recent memory. There aren't many folks who have the temperament, experience, and track record that would make them natural TAG candidates, but Jeffrey is in that rare group and I hope that I can convince you he deserves your organisation's first-choice vote.
Jeffrey and I have worked closely over the past 5 years on challenging projects that have pushed the boundaries of the web's potential forward, but which also posed large risks if done poorly. It was Jeff's involvement that led to the "chooser model" for several device access APIs, starting with Web Bluetooth.
This model has since been pressed into service multiple times, notably for Chrome's mediation of the Web USB, Web HID, and Web Serial APIs. The introduction of a novel style of chooser solved thorny problems.
Previously, security and feature teams were unable to find a compromise that provided sufficient transparency and control while mitigating abuses inherent in blanket prompts. Jeffrey's innovation created space for compromise and, when combined with other abuse mitigations he helped develop for Bluetooth, was enough to surmount the objections levelled against proposals for (over)broad grants.
Choosers have unlocked educational programming without heavyweight tools, sparked a renaissance in utilities that work everywhere, and are contributing to a reduction in downloads of potentially insecure native programs for controlling IoT devices.
There's so much more to talk about in Jeffrey's background and approach to facilitating structural solutions to thorny problems — from his work to (finally) develop a grounded privacy threat model as the TAG repeatedly requested, to his stewardship of the Web Packaging work the TAG started in 2013, to his deep experience in standards at nearly every level of the web stack — but this post is already too long.
There are a lot of great candidates running for this year's four open seats on the TAG, and while I can imagine many of them serving admirably, not least of all Tess O'Connor and Sangwhan Moon who have been doing great work, there's nobody I'd rather see earn your first-choice vote than Jeffrey Yasskin. He's not one to seek the spotlight for himself, but having worked closely with him, I can assure you that, if elected, he'll earn your trust the way he has earned mine.
Resize-Resilient `content-visibility` Fixes
Update: After hitting a bug related to initial rendering on Android, I've updated the code here and in the snippet to be resilient to browsers deciding (wrongly) to skip rendering the first `<article>` in the document.
Update, The Second: After holiday explorations, it turns out that one can, indeed, use `contain-intrinsic-size` on elements that aren't laid out yet. Contra previous advice, you absolutely should do that, even if you don't know natural widths yet. Assuming the browser will calculate a width for an `<article>` once finally laid out, there's no harm in reserving a narrow but tall place-holder. Code below has been updated to reflect this.
The last post on avoiding rendering work for content out of the viewport with `content-visibility` included a partial solution for how to prevent jumpy scrollbars. This approach had a few drawbacks:
- Because `content-visibility: visible` was never removed, layouts inevitably grew slower as more and more content was unfurled by scrolling.
- Layouts caused by resize can be particularly slow on large documents as they can cause all text flows to need recalculation. This would not be improved by the previous solution if many articles had been marked `visible`.
- Jumping "far" into a document via link or quick scroll could still be jittery.
Much of this was pointed out (if obliquely) in a PR to the ECMAScript spec. One alternative is a "place-keeper" element that grows with rendered content to keep elements that eventually disappear from perturbing scroll position. This was conceptually hacky and I couldn't get it working well.
What should happen if an element in the flow changes its width or height in reaction to the browser lazy-loading content? And what if the scroll direction is up, rather than down?
For a mercifully brief while I also played with absolutely positioning elements on reveal and moving them later. This also smelled, but it got me playing with `ResizeObserver`s. After sleeping on the problem, a better answer presented itself: leave the flow alone and use `IntersectionObserver`s and `ResizeObserver`s to reserve vertical space using CSS's new `contain-intrinsic-size` property once elements have been laid out.
I'd dismissed `contain-intrinsic-size` for use in the base stylesheet because it's impossible to know widths and heights to reserve. `IntersectionObserver`s and `ResizeObserver`s, however, guarantee that we can know the bounding boxes of the elements cheaply (post layout) and use the sizing info they provide to reserve space should the system decide to stop laying out elements (setting their height to 0) when they leave the viewport. We can still use `contain-intrinsic-size`, however, to reserve a conservative "place-holder" for a few purposes:
- If our element would be content sized, we can give it a narrow `intrinsic-size` width; the browser will still give us the natural width upon first layout.
- Reserving some height for each element creates space for `content-visibility`'s internal Intersection Observer to interact with. If we don't provide a height (in this case, `500px` as a guess), fast scrolls will encounter "bunched" elements upon hitting the bottom, and `content-visibility: auto` will respond by laying out all of the elements at once, which isn't what we want. Creating space means we are more likely to lay them out more granularly.
The snippet below works around a Chrome bug with applying `content-visibility: auto;` to all `<article>`s, forcing initial paint of the first, then allowing it to later be elided. Observers update sizes in reaction to layouts or browser resizing, updating place-holder sizes while allowing the natural flow to be used. Best of all, it acts as a progressive enhancement:
<!-- Basic document structure -->
<html>
<head>
<style>
/* Workaround for Chrome bug, part 1
*
* Chunk rendering for all but the first article.
*
* Eventually, this selector will just be:
*
* body > main > article
*
*/
body > main > article + article {
content-visibility: auto;
/* Add vertical space for each "phantom" */
contain-intrinsic-size: 10px 500px;
}
</style>
</head>
<body>
<!-- header elements -->
<main>
<article><!-- ... --></article>
<article><!-- ... --></article>
<article><!-- ... --></article>
<!-- ... -->
<!-- Inline, at the bottom of the document -->
<script type="module">
let eqIsh = (a, b, fuzz=2) => {
return (Math.abs(a - b) <= fuzz);
};
let rectNotEQ = (a, b) => {
return (!eqIsh(a.width, b.width) ||
!eqIsh(a.height, b.height));
};
// Keep a map of elements and the dimensions of
// their place-holders, re-setting the element's
// intrinsic size when we get updated measurements
// from observers.
let spaced = new WeakMap();
// Only call this when known cheap, post layout
let reserveSpace = (el, rect=el.getBoundingClientRect()) => {
let old = spaced.get(el);
// Set intrinsic size to prevent jumping on un-painting:
// https://drafts.csswg.org/css-sizing-4/#intrinsic-size-override
if (!old || rectNotEQ(old, rect)) {
spaced.set(el, rect);
el.attributeStyleMap.set(
"contain-intrinsic-size",
`${rect.width}px ${rect.height}px`
);
}
};
let iObs = new IntersectionObserver(
(entries, o) => {
entries.forEach((entry) => {
// We don't care if the element is intersecting or
// has been laid out as our page structure ensures
// they'll get the right width.
reserveSpace(entry.target,
entry.boundingClientRect);
});
},
{ rootMargin: "500px 0px 500px 0px" }
);
let rObs = new ResizeObserver(
(entries, o) => {
entries.forEach((entry) => {
reserveSpace(entry.target, entry.contentRect);
});
}
);
let articles =
document.querySelectorAll("body > main > article");
articles.forEach((el) => {
iObs.observe(el);
rObs.observe(el);
});
// Workaround for Chrome bug, part 2.
//
// Re-enable browser management of rendering for the
// first article after the first paint. Double-rAF
// to ensure we get called after a layout.
requestAnimationFrame(() => {
requestAnimationFrame(() => {
articles[0].attributeStyleMap.set(
"content-visibility",
"auto"
);
});
});
</script>
This solves most of our previous problems:
- All articles can now take `content-visibility: auto`, removing the need to continually lay out and render the first article when off-screen.
- `content-visibility` functions as a hint, enhancing the page's performance (where supported) without taking responsibility for layout or positioning.
- Elements grow and shrink in response to content changes.
- Presuming browsers do a good job of scroll anchoring, this solution is robust to both upward and downward scrolls, plus resizing.
Debugging this wouldn't have been possible without help from Vladimir Levin and Una Kravets whose article has been an indispensable reference, laying out the pieces for me to eventually (too-slowly) cobble into a full solution.