Reading List

The most recent articles from a list of feeds I subscribe to.

A Management Maturity Model for Performance

Since 2015 I have been lucky to collaborate with more than a hundred teams building PWAs and consult on some of the world's largest sites. Engineers and managers on these teams universally want to deliver great experiences and have many questions about how to approach common challenges. Thankfully, much of what once needed hand-debugging by browser engineers has become automated and self-serve thanks to those collaborations.

Despite advances in browser tooling, automated evaluation, lab tools, guidance, and runtimes, teams I've worked with consistently struggle to deliver minimally acceptable performance with today's popular frameworks. This is not a technical problem per se — it's a management issue, and one that teams can conquer with the right frame of mind and support.

What is Performance?

It may seem a silly question, but what is performance, exactly?

This is a complex topic, but to borrow from a recent post, web performance expands access to information and services by reducing latency and variance across interactions in a session, with a particular focus on the tail of the distribution (P75+). Performance isn't a binary and there are no silver bullets.
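To make the tail focus concrete, here is a minimal sketch (using an invented latency sample) of pulling P50/P75/P95 values out of field data; a nearest-rank percentile is enough for illustration:

```python
def percentile(samples, p):
    """Nearest-rank percentile: the value at or below which p% of samples fall."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[rank]

# Hypothetical interaction latencies (ms) from one day of field data.
latencies = [120, 95, 110, 480, 130, 105, 2200, 140, 98, 125, 900, 115]

print("P50:", percentile(latencies, 50))
print("P75:", percentile(latencies, 75))
print("P95:", percentile(latencies, 95))
```

Note how the tail values dwarf the median; managing only the middle of the distribution would miss exactly the sessions that hurt the most.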

Only teams that master their systems can make intentional trade-offs. Organisations that serve their tools will tread water no matter how advanced their technology, while groups that understand and intentionally manage their systems can succeed on any stack.[1]

Value Propositions

The value of performance is deeply understood within a specific community and in teams that have achieved high maturity. But outside those contexts it can be challenging to communicate. One helpful lens is to view the difference between good and bad performance as a gap between expectations and reality.

Performance is rarely the single determinant of product success, but it can be the margin of victory. Improving latency and reducing variance allows teams to test other product hypotheses with less noise. A senior product leader recently framed a big performance win as "creating space that allows us to be fallible in other areas."

Protecting the Commons

Like accessibility, security, UI coherence, privacy, and testability, performance is an aggregate result. Any single component of a system can regress latency or create variance, which means that like other cross-cutting product properties, performance must be managed as a commons. The approaches that work over time are horizontal, culturally-based, and require continual investment to sustain.

Teams I've consulted with are too often wrenched between celebration over launching "the big rewrite" and the morning-after realisation that the new stack is tanking business metrics.

Now saddled with the excesses of npm, webpack, React, and a thousand promises of "great performance" that were never critically evaluated, it's easy for managers to lose hope. These organisations sometimes spiral into recrimination and mistrust. Where hopes once flourished, the horrors of a Bundle Buddy readout now loom. Who owns this code? Why is it there? How did it get away from the team so quickly?

Many "big rewrite" projects begin with the promise of better performance. Prototypes "seem fast", but nobody's actually benchmarking them on low-end hardware. Things go fine for a while, but when sibling teams are brought in to integrate late in the process, attention to the cumulative experience may suffer. Before anyone knows it, the whole thing is as slow as molasses, but "there's no going back"... and so the lemon launches with predictably sour results.

In the midst of these crises, thoughtful organisations begin to develop a performance management discipline. This, in turn, helps to create a culture grounded in high expectations. Healthy performance cultures bake the scientific method into their processes and approaches; they understand that modern systems are incredibly complex and that nobody knows everything — and so we learn together and investigate the unknown to develop an actionable understanding.

Products that maintain a healthy performance culture elevate management of latency, variance, and other performance attributes to OKRs because they understand how those factors affect the business.

Levels of Performance Management Maturity

Performance management isn't widely understood to be part of what it means to operate a high-functioning team. This is a communication challenge with upper management, but also a potential differentiator, or even a strategic advantage. Teams that develop these advantages progress through a hierarchy of management practice phases. In drafting this post, I was pointed to similar work developed independently by others[3]; that experienced consultants have observed similar trends gives me confidence in this assessment:

Level 0: Bliss

Hear no evil, see no evil, speak no evil.
Photo by von Vix

Level 0 teams do not know they have a problem. They may be passively collecting some data (e.g., through one of the dozens of analytics tools they've inevitably integrated over the years), but nobody looks at it. It isn't anyone's job description to do so.

Folks at this level of awareness might also simply assume that "it's the web, of course it's slow" and reach for native apps as a panacea (they aren't). The site "works" on their laptops and phones. What's the problem?

Management Attributes

Managers in Level 0 teams are unaware that performance can be a serious product problem; they instead assume the technology they acquired on the back of big promises will be fine. This blindspot usually extends up to the C-suite. They do not have latency priorities and they uncritically accept assertions that a tool or architecture is "performant" or "blazing fast". They lack the technical depth to validate assertions, and move from one framework to another without enunciating which outcomes are good and which are unacceptable. Faith-based product management, if you will.

Level 0 PMs fail to build processes or cultivate trusted advisors to assess the performance impacts of decisions. These organisations often greenlight rewrites because "we can hire easily for X" and "we aren't on it yet". These are vapid narratives, but Level 0 managers don't have the situational awareness, experience, or confidence to push back appropriately.

These organisations may perform incidental data collection (e.g., from business analytics tools), but they review performance metrics inconsistently and rarely consider them when formulating KPIs and OKRs.

Level 1: Fire Fighting

Shit's on fire, yo.
Photo by Jay Heike

At Level 1, managers will have been made aware that the performance of the service is unacceptable.[4]

Service quality has degraded so much that even fellow travelers in the tech privilege bubble[4] have noticed. Folks with powerful laptops, new iPhones, and low-latency networks are noticing, which is a very bad sign. When an executive enquires about why something is slow, a response is required.

This is the start of a painful remediation journey that can lead to heightened performance management maturity. But first, the fire must be extinguished.

Level 1 managers will not have a strong theory about what's amiss, and an investigation will commence. This inevitably uncovers a wealth of potential metrics and data points to worry about; a few of those will be selected and tracked throughout the remediation process. But were those the right ones? Will tracking them from now on keep things from going bad? The first firefight instills gnawing uncertainty about what it even means to "be fast". On teams without good leadership or a bias towards scientific inquiry, it can be easy for Level 1 investigations to get preoccupied with one factor while ignoring others. This sort of anchoring effect can be overcome by pulling in external talent, but this is often counter-intuitive and sometimes even threatening to green teams.

Competent managers will begin to look for more general "industry standard" baseline metrics to report against their data. The industry's default metrics are moving to a better place, but Level 1 managers are unequipped to understand them deeply. Teams at Level 1 (and 2) may blindly chase metrics because they have neither a strong, shared model of their users, nor an understanding of their own systems that would allow them to focus more tightly on what matters to the eventual user experience. They aren't thinking about the marginal user yet, so even when they do make progress on directionally aligned metrics, nasty surprises can recur.

Low levels of performance management maturity are synonymous with low mastery of systems and an undeveloped understanding of user needs. This leaves teams unable to quickly track down culprits when good scores on select metrics fail to consistently deliver great experiences.

Management Attributes

Level 1 teams are in transition, and managers of those teams are in the most fraught part of their journey. Some begin an unproductive blame game, accusing tech leads of incompetence, or worse. Wise PMs will perceive performance remediation work as akin to a service outage and apply the principles of observability culture, including "blameless postmortems".

It's never just one thing that's amiss on a site that prompts Level 1 awareness. Effective managers can use the collective learning process of remediation to improve a team's understanding of its systems. Discoveries will be made about the patterns and practices that lead to slowness. Sharing and celebrating these discoveries is a crucial positive attribute.

Strong Level 1 managers will begin to create dashboards and request reports about factors that have previously caused problems in the product. Level 1 teams tend not to staff or plan for continual attention to these details, and the systems often become untrustworthy.

Teams can get stuck at Level 1, treating each turn through a Development ➡️ Remediation ➡️ Celebration loop as "the last time". This is pernicious for several reasons. Upper management will celebrate the first doused fire but will begin to ask questions about the fourth and fifth blazes. Are their services just remarkably flammable? Is there something wrong with their team? Losing an organisation's confidence is a poor recipe for maximising personal or group potential.

Next, firefighting harms teams, and doubly so when management is unwilling to adopt incident response framing. Besides potential acrimony, each incident drains the team's ability to deliver solutions. Noticeably bad performance is an expression of an existing feature working below spec, and remediation is inherently in conflict with new feature development. Level 1 incidents are de facto roadmap delays.

Lastly, teams stuck in a Level 1 loop risk losing top talent. Many managers imagine this is fine because they're optimising for something else, e.g. the legibility of their stack to boot camp grads. A lack of respect for the ways that institutional knowledge accelerates development is all too common.

It's difficult for managers who do not perceive the opportunities that lie beyond firefighting to comprehend how much stress they're placing on teams through constant remediation. Fluctuating between Levels 1 and 0 ensures a team never achieves consistent velocity, and top performers hate failing to deliver.

The extent to which managers care about this — and other aspects of the commons, such as a11y and security — is a reasonable proxy for their leadership skills. Line managers can prevent regression back to Level 0 by bolstering learning and inquiry within their key personnel, including junior developers who show a flair for performance investigation.

Level 2: Global Baselines & Metrics

Think globally, then reset.
The global baseline isn't what folks in the privilege bubble assume.

Thoughtful managers become uncomfortable as repeated Level 1 incidents cut into schedules, hurt morale, and create questions about system architecture. They sense their previous beliefs about what's "reasonable" need to be re-calibrated... but against what baseline?

It's challenging for teams climbing the maturity ladder to sift through the many available browser and tool-vendor data points to understand which ones to measure and manage. Selected metrics are what influence future investments, and identifying the right ones allows teams to avoid firefighting and prevent blindspots.

A diagram of the W3C Navigation Timing timeline events
Browsers provide a lot of data about site performance. Harnessing it requires a deep understanding of the product and its users.

Teams looking to grow past Level 1 develop Real User Monitoring ("RUM") infrastructure, or discover they already built it in previous cycles. They begin to report to management against these aggregates.

Against the need for quicker feedback and a fog of metrics, managers who achieve Level 2 maturity look for objective, industry-standard reference points that correlate with business success. Thankfully, the web performance community has been busy developing increasingly representative and trustworthy measurements. Still, Level 2 teams will not yet have learned to live with the dissatisfaction that lab measurements cannot always predict a system's field behavior. Part of mastery is accepting that the system is complex and must be investigated, rather than fully modeled. Teams at Level 2 are just beginning to learn this lesson.

Strong Level 2 managers acknowledge that they don't know what they don't know. They calibrate their progress against studies published by peers and respected firms doing work in this area. These data points reflect a global baseline that may (or may not) be appropriate for the product in question, but they're significantly better than nothing.

Management Attributes

Managers who bring teams to Level 2 spread lessons from remediation incidents, create a sense of shared ownership over performance, and try to describe performance work in terms of business value. They work with their tech leads and business partners to adopt industry-standard metrics and set expectations based on them.

Level 2 teams buy or build services that help them turn incidental data collection into continual reporting against those standard metrics. These reports tend to focus on averages and may not be sliced to focus on specific segments (e.g., mobile vs. desktop) and geographic attributes. Level 2 (and 3) teams may begin drowning in data, with too many data points being collected and sliced. Without careful shepherding to uncover the most meaningful metrics to the business, this can engender boredom and frustration, leading to reduced focus on important RUM data sources.
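Moving beyond averages usually starts with slicing. A toy sketch (the records and segment names below are invented) of how a single overall mean hides per-segment tails:

```python
from collections import defaultdict

# Hypothetical RUM records: (device_class, country, LCP in ms).
records = [
    ("mobile", "IN", 4200), ("mobile", "IN", 3900), ("desktop", "US", 1100),
    ("mobile", "US", 2500), ("desktop", "US", 900), ("mobile", "IN", 5100),
    ("desktop", "DE", 1300), ("mobile", "US", 2100), ("desktop", "DE", 1000),
]

def p75(samples):
    """Nearest-rank 75th percentile."""
    ordered = sorted(samples)
    return ordered[max(0, round(0.75 * len(ordered)) - 1)]

by_segment = defaultdict(list)
for device, country, lcp in records:
    by_segment[(device, country)].append(lcp)

# The overall mean looks plausible, but mobile users in some geographies
# are having a dramatically worse experience.
overall_mean = sum(r[2] for r in records) / len(records)
print("overall mean:", round(overall_mean))
for segment, samples in sorted(by_segment.items()):
    print(segment, "P75:", p75(samples))
```

Even this crude slicing shows why reports that only track a global average invite nasty surprises.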

Strong Level 2 managers will become unsatisfied with how global rules of thumb and metrics fail to map directly into their product's experience and may begin to look for better, more situated data that describe more of the user journeys they care about. The canniest Level 2 managers worry that their teams lack confidence that their work won't regress these metrics.

Teams that achieve Level 2 competence can regress to Level 1 under product pressure (removing space to watch and manage metrics), team turnover, or assertions that "the new architecture" is somehow "too different" to measure.

Level 3: P75+, Site-specific Baselines & Metrics

Level 3 teams are starting to fly the plane instead of being passengers on an uncomfortable journey
Photo by Launde Morel

The unease of strong Level 2 management regarding metric appropriateness can lead to Level 3 awareness and exploration. At this stage, managers and TLs become convinced that the global numbers they're watching "aren't the full picture" — and they're right!

At Level 3, teams begin to document important user journeys within their products and track the influence of performance across the full conversion funnel. This leads to introducing metrics that aren't industry-standard, but are more sensitive and better represent business outcomes. The considerable cost to develop and validate this understanding seems like a drop in the bucket compared to flying blind, so Level 3 teams do it, in part, to eliminate the discomfort of being unable to confidently answer management questions.

Substantially enlightened managers who reach Level 3 will have become accustomed to percentile thinking. This often comes from their journey to understand the metrics they've adopted at Levels 1 and 2. The idea that the median isn't the most important number to track will cause a shift in the internal team dialogue. Questions like "Was that the P50 number?" and "What does it look like at P75 and P90?" will become part of most metrics review meetings (which are now A Thing™).

Percentiles and histograms become the only way to talk about RUM data in teams that reach Level 3. Most charts have three lines — P75, P90, and P95 — with the median, P50, thrown in as a vanity metric to help make things legible to other parts of the organisation that have yet to begin thinking in distributions.

Treating data as a distribution fundamentally enables comparison and experimentation because it creates a language for describing non-binary shifts. Moving traffic from one histogram bucket to another becomes a measure of success, and teams at Level 3 begin to understand their distributions are nonparametric, and they adopt more appropriate comparisons in response.
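As a sketch of what "more appropriate comparisons" can mean in practice: rather than comparing means with a t-test, a team might bootstrap the shift in P75 between a control and a variant arm. The data below is invented and deliberately tiny; real systems would use much larger samples and established nonparametric tests such as Mann-Whitney U:

```python
import random

def p75(samples):
    """Nearest-rank 75th percentile."""
    ordered = sorted(samples)
    return ordered[max(0, round(0.75 * len(ordered)) - 1)]

def bootstrap_p75_delta(control, variant, iterations=2000, seed=42):
    """Resample both arms with replacement and collect the distribution of
    P75 differences, returning a crude 95% interval on the shift."""
    rng = random.Random(seed)
    deltas = []
    for _ in range(iterations):
        c = [rng.choice(control) for _ in control]
        v = [rng.choice(variant) for _ in variant]
        deltas.append(p75(v) - p75(c))
    deltas.sort()
    # If the interval excludes 0, the shift is likely real rather than noise.
    return deltas[int(0.025 * iterations)], deltas[int(0.975 * iterations)]

# Hypothetical interaction latencies (ms) from two experiment arms.
control = [320, 350, 410, 390, 500, 305, 460, 370, 330, 480]
variant = [150, 210, 180, 240, 120, 200, 170, 230, 160, 190]

print("95% interval on P75 shift (ms):", bootstrap_p75_delta(control, variant))
```

The point isn't this particular method; it's that distribution-aware teams can quantify "how much traffic moved between buckets" instead of arguing about a single average.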

Management Attributes

Level 3 managers and their teams are becoming scientists. For the first time, they will be able to communicate with confidence about the impact of performance work. They stop referring to "averages", understand that medians (P50) can tell a different story than the mean, and become hungry to explore the differences in system behavior at P50 and outlying parts of the distribution.

Significant effort is applied to the development and maintenance of custom metrics and tools. Products that do not report RUM data in more sliceable ways (e.g., by percentile, geography, device type, etc.) are discarded for those that better support an investigation.

Teams achieving this level of discipline about performance begin to eliminate variance from their lab data by running tests in "less noisy" environments than a developer's laptop, a shared server, or a VM with underlying system variance. Low noise is important because these teams understand that as long as there's contamination in the environment, it is impossible to trust the results. Disaster is just around the corner when teams can't trust tests designed to keep the system from veering into a bad state.
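One way teams quantify environmental noise is run-to-run variation. A rough sketch, with invented timings and an illustrative (not standard) threshold:

```python
import statistics

def coefficient_of_variation(runs):
    """Stddev relative to the mean; a rough gauge of benchmark-environment noise."""
    return statistics.stdev(runs) / statistics.mean(runs)

# Hypothetical LCP timings (ms) from repeated lab runs of the same build.
quiet_bench = [1180, 1195, 1205, 1190, 1210]  # dedicated device, stable conditions
noisy_laptop = [950, 1800, 1100, 2400, 1300]  # background tasks, thermal throttling

for name, runs in [("quiet bench", quiet_bench), ("noisy laptop", noisy_laptop)]:
    cv = coefficient_of_variation(runs)
    # Assumed rule of thumb for this sketch: distrust environments above ~5% CV.
    print(name, f"CV={cv:.1%}", "usable" if cv < 0.05 else "too noisy to trust")
```

A regression gate built on the noisy environment would either miss real regressions or cry wolf; either failure mode erodes the team's trust in its own tests.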

Level 3 teams also begin to introduce a critical asset to their work: integration of RUM metrics reporting with their experimentation frameworks. This creates attribution for changes and allows teams to experiment with more confidence. Modern systems are incredibly complex, and integrating this experimentation into the team's workflow only intensifies as groups get ever-more sophisticated moving forward.

Teams can regress from Level 3 because the management structures that support consistent performance are nascent. Lingering questions about the quality of custom metrics can derail or stall progress, and some teams can get myopic regarding the value of RUM vs. lab data (advanced teams always collect both and try to cross-correlate, but this isn't yet clear to many folks who are new to Level 3). Viewing metrics with tunnel vision and an unwillingness to mark metrics to market are classic failure modes.

Level 4: Variance Control & Regression Prevention

Level 4 teams are beginning to understand and manage the tolerances of their service.
Photo by Mastars

Strong Level 3 managers will realise that many performance events (both better and worse than average) occur along a user journey. This can be disorienting! Everything one thought they knew about how "it's going" is invalidated all over again. The P75 latency for interactions (in an evenly distributed population) isn't the continuous experience of a single user; it's every fourth tap!

Suddenly, the idea of managing averages looks naive. Medians have no explanatory power and don't even describe the average session! Driving down the median might help folks who experience slow interactions, but how can the team have any confidence about that without constant management of the tail latency?
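The "every fourth tap" arithmetic is worth making explicit. By definition, 25% of interactions land above P75, so under a simplifying (assumed) independence model, the chance that a session contains at least one slow interaction climbs quickly with session length:

```python
def chance_of_slow_interaction(session_length, slow_fraction=0.25):
    """Probability a session sees at least one interaction above the threshold,
    assuming (unrealistically) that interactions are independent."""
    return 1 - (1 - slow_fraction) ** session_length

# Even short sessions are very likely to hit the tail at least once.
for taps in (1, 4, 10, 20):
    print(f"{taps:2d} taps -> {chance_of_slow_interaction(taps):.0%} chance of a P75+ event")
```

Real sessions aren't independent draws, of course, but the direction of the result holds: tail latency is not a rare-user problem, it's a nearly-every-session problem.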

This new understanding of the impact that variance has on user experiences is both revelatory and terrifying. The good news is that the tools that have been developed to this point can serve to improve even further.

Level 4 teams also begin to focus on how small, individually innocuous changes add up to a slow bleed that can degrade the experience over time. Teams that have achieved this sort of understanding are mature enough to forecast a treadmill of remediation in their future and recognise it as a failure mode. And failure modes are avoidable with management processes and tools, rather than heroism or blinding moments of insight.

Management Attributes

Teams that achieve Level 4 maturity almost universally build performance ship gates. These are automated tests that watch the performance of PRs through a commit queue, and block changes that tank the performance of important user flows. This depends on the team having developed metrics that are known to correlate well with user and business success.

This implies all of the maturity of the previous levels because it requires a situated understanding of which user flows and scenarios are worth automating. These tests are expensive to run, so they must be chosen well. This also requires an investment in infrastructure and continuous monitoring. Making performance more observable, and creating a management infrastructure that avoids reactive remediation is the hallmark of a manager who has matured to Level 4.
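The core of such a gate can be sketched in a few lines. The metric names and budget values below are hypothetical stand-ins for whatever flows and thresholds a team has actually validated:

```python
# Hypothetical per-flow budgets (ms) for metrics the team has validated
# against user and business outcomes.
BUDGETS = {"search_results_inp": 200, "checkout_lcp": 2500}

def gate(measured):
    """Return human-readable violations; an empty list means the change may land."""
    violations = []
    for metric, budget in sorted(BUDGETS.items()):
        value = measured.get(metric)
        if value is None:
            violations.append(f"{metric}: no measurement collected")
        elif value > budget:
            violations.append(f"{metric}: {value}ms exceeds budget of {budget}ms")
    return violations

# In CI these numbers would come from lab runs against the PR build;
# a non-empty result would fail the check and block the merge.
print(gate({"search_results_inp": 310, "checkout_lcp": 2100}))
```

Note that a missing measurement blocks the change too: a gate that silently passes un-instrumented flows is a gate in name only.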

Many teams on the journey from Level 3 to 4 will have built simpler versions of these sorts of gates (e.g., bundle size checks). These systems can still let costs creep upward in small increments. Over time, though, these unsophisticated gates become a bad proxy for performance. Managers at Level 4 learn from these experiences and build or buy systems to watch trends over time. This monitoring ought to include data from both the lab and the field to guard against "metric drift". These more sophisticated monitoring systems also need to be taught to alert on cumulative, month-over-month and quarter-over-quarter changes.
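A sketch of cumulative-drift alerting, with invented monthly readings: each step looks small enough to wave through, but the sum is a serious regression against the baseline. (The 10% tolerance is illustrative, not a standard.)

```python
def drift_alerts(monthly_p75, tolerance=0.10):
    """Flag months whose P75 has drifted more than `tolerance` from the oldest
    reading, even if each month-over-month step looked small."""
    baseline = monthly_p75[0][1]
    alerts = []
    for month, value in monthly_p75[1:]:
        drift = (value - baseline) / baseline
        if drift > tolerance:
            alerts.append((month, round(drift, 3)))
    return alerts

# Hypothetical monthly P75 LCP readings (ms): each step is ~2-3%, but the
# cumulative slide from January is 13%.
readings = [("Jan", 2000), ("Feb", 2050), ("Mar", 2110), ("Apr", 2180), ("May", 2260)]
print(drift_alerts(readings))
```

This is the slow-bleed case that simple per-change gates miss: no single month crosses a per-change threshold, yet the baseline comparison fires.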

Level 4 maturity teams also deputise tech leads and release managers to flag regressions along these lines, and reward them for raising slow-bleed regressions before they become crises. This responsibility shift, backed up by long-run investments and tools, is one of the first stable, team-level changes that can work against cultural regression. For the first time, the team is considering performance on longer time scales. This also begins to create organisational demand for latency budgeting and slowness to be attributed to product contributions.

Teams that achieve Level 4 maturity are cautious acquirers of technology. They manage on an intentional, self-actualised level and value an ability to see through the fog of tech fads. They do bake-offs and test systems before committing to them. They ask hard questions about how any proposed "silver bullets" will solve the problems that they have. They are charting a course based on better information because they are cognizant that it is both valuable and potentially available.

Level 4 teams begin to explicitly staff a "performance team", or a group of experts whose job it is to run investigations and drive infrastructure to better inform inquiry. This often happens out of an ad-hoc virtual team that forms in earlier stages but is now formalised and has long-term staffing.

Teams can quickly regress from Level 4 maturity through turnover. Losing product leaders that build to Level 4 maturity can set groups back multiple maturity levels in short order, and losing engineering leaders who have learned to value these properties can do the same. Teams are also capable of losing this level of discipline and maturity by hiring or promoting the wrong people. Level 4 maturity is cultural and cultures need to be defended and reinforced to maintain even the status quo.

Level 5: Strategic Performance

Level 5 teams have understood the complexity of their environment and can make tradeoffs with confidence.
Photo by Colton Sturgeon

Teams that fully institutionalise performance management come to understand it as a strategic asset.

These teams build management structures and technical foundations that grow their performance lead and prevent cultural regressions. This includes internal training, external advocacy and writing[5], and the staffing of research work to explore the frontier of improved performance opportunities.

Strategic performance is a way of working that fully embeds the idea that "faster is better", but only when it serves user needs. Level 5 maturity managers and teams will gravitate to better-performing options that may require more work to operate. They have learned that fast is not free, but it has cumulative value.

These teams also internally evangelise the cause of performance. Sibling teams may not be at the same place, so they educate about the need to treat performance as a commons. Everyone benefits when the commons is healthy, and all areas of the organisation suffer when it regresses.

Level 5 teams institute "latency budgets" for fractional feature rollouts. They have structures (such as managers or engineering leadership councils) that can approve requests for non-latency-neutral changes that may have positive business value. When business leaders demand the ability to ram slow features into the product, these leaders are empowered to say no.

Lastly, Level 5 teams are focused on the complete user journey. Teams in this space can make trades intelligently, moving around code and time within a system they have mastered to ensure the best possible outcomes in essential flows.

Management Attributes

Level 3+ team behaviours are increasingly illegible to less-advanced engineers and organisations. At Level 5, serious training and guardrails are required to integrate new talent. Most hires will not yet share the cultural norms that a strategically performant organisation uses to deliver experiences with consistent quality.[6]

Strategy is what you do differently from the competition, and Level 5 teams understand their way of working is a larger advantage than any single optimisation. They routinely benchmark against their competition on important flows and can understand when a competitor has taken the initiative to catch up (it rarely happens through a single commit or launch). These teams can respond at a time of their choosing because their lead will have compounded. They are fully out of firefighting mode.

Level 5 teams do not emerge without business support. They earn space to adopt these approaches because the product has been successful (thanks in part to work at previous levels). Level 5 culture can only be defended from a position of strength. Managers in this space are operating for the long term, and performance is understood to be foundational to every new feature or improvement.

Teams at Level 5 degrade more slowly than at previous levels, but it does happen. Sometimes, Level 5 teams are poor communicators about their value and their values, and when sibling teams are rebuffed, political pressure can grow to undermine leaders. More commonly, enough key people leave a Level 5 team for reasons unrelated to performance management that the hard-won institutional understanding of what it takes to excel is lost. Sometimes, simply failing to reward continual improvement can drive folks out. Level 5 managers need to be on guard regarding their culture and their value to the organisation as much as the system's health.

Uneven Steps, Regression, & False Starts

It's possible for strong managers and tech leads to institute Level 1 discipline by fiat. Level 2 is perhaps possible on a top-down basis in a small or experienced team. Beyond that, though, maturity is a growth process. Progression beyond global baseline metrics requires local product and market understanding. TLs and PMs need to become curious about what is and isn't instrumented, begin collecting data, then start the directed investigations necessary to uncover what the system is really doing in the wild. From there, tools and processes need to be built to recreate those tough cases on the lab bench in a repeatable way, and care must be taken to continually re-validate those key user journeys against the evolving product reality.

Advanced performance managers build groups that operate on mutual trust to explore the unknown and then explain it to the rest of the organisation. This means that advancement through performance maturity isn't about tools.

Managers who get to Level 4 are rare, but the number who imagine they have could fill stadiums, simply because they adopted the technologies that high-functioning leaders encourage. Without the trust, the funding to enquire and explore, and an increasingly fleshed-out understanding of users at the margins, adopting a new monitoring tool is a hollow expenditure. Nothing is more depressing than managerial cosplay.

It's also common for teams to take several steps forward under duress and regress when heroics stop working, key talent burns out, and the managerial focus moves on. These aren't fatal moments, but managers need to be on the lookout to understand if they support continual improvement. Without a plan for an upward trajectory, product owners are putting teams on a loop of remediation and inevitable burnout... and that will lead to regression.

The Role of Senior Management

Line engineers want to do a good job. Nobody goes to work to tank the product, lose revenue, or create problems for others down the line. And engineers are trained to value performance and quality. The engineering mindset is de facto optimising. What separates Level 0 firefighting teams from those that have achieved self-actualised Level 5 execution is not engineering will; it's context, space, and support.

Senior management sending mixed signals about the value of performance is the fastest way to degrade a team's ability to execute. The second-fastest is to use blame and recrimination. Slowness has causes, but the solution isn't to remove the folks that made mistakes, but rather to build structures that support iteration so they can learn. Impatience and blame are not assets or substitutes for support to put performance consistently on par with other concerns.

Teams that reach top-level performance have management support at the highest level. Those managers assume engineers want to do a good job but have the wrong incentives and constraints, and it isn't the line engineer's job to define success — it's the job of management.

Questions for Senior Managers

Senior managers looking to help their teams climb the performance management maturity hill can begin by asking themselves a few questions:

  • Do we understand how better performance would improve our business?

    • Is there a shared understanding in the leadership team that slowness costs money/conversions/engagement/customer-success?
    • Has that relationship been documented in our vertical or service?
    • Do we know what "strategic performance" can do for the business?
  • What constraints have we given the team?

  • Have we developed a management fluency with histograms and distributions over time?

    • Do we write OKRs for performance?
    • Are they phrased in terms of marginal device and network targets, as well as distributions?
  • What support do we give teams that want to improve performance?

    • Do folks believe they can appeal directly to you if they feel the system's performance will be compromised by other decisions?
    • Can folks (including PMs, designers, and SREs — not just engineers) get promoted for making the site faster?
    • Can middle managers appeal to performance as a way to push back on feature requests?
    • Are there systems in place for attributing slowness to changes over time?
    • Can teams win kudos for consistent, incremental performance improvement?
    • Can a feature be blocked because it might regress performance?
    • Can teams easily acquire or build tools to track performance?
  • What support do we give mid-level managers who push back on shiny tech in favour of better performance?

    • Have we institutionalised critical questions for adopting new technologies?
    • Are aspects of the product commons (e.g., uptime, security, privacy, a11y, performance) managed in a coherent way?
    • Do managers get as much headcount and funding to make steady progress as they would from proposing rewrites?
  • Have we planned to staff a performance infrastructure team?

    • It's the job of every team to monitor and respond to performance challenges, but will there be a group that can help wrangle the data to enable everyone to do that?
    • Can any group in the organisation serve as a resource for other teams that are trying to get started in their latency and variance learning journeys?

The answers to these questions help organisations calibrate how much space they have created to scientifically interrogate their systems. Computers are complex, and as every enterprise becomes a "tech company", becoming intentional about these aspects is as critical as building DevOps and Observability to avoid downtime.

It's always cheaper in the long run to build understanding than it is to fight fires, and successful management can create space to unlock their team's capacity.

"o11y, But Make it Performance"

Mature technology organisations may already have and value a discipline to manage performance: "Site Reliability Engineering" (SRE), aka "DevOps", aka "Observability". These folks manage and operate complex systems and work to reduce failures, which looks a lot like the problems of early performance maturity teams.

These domains are linked: performance is just another aspect of system mastery, and the tools built to manage experimental, flagged rollouts need to account for performance as a significant aspect of the success of a production spike.

Senior managers who want to build performance capacity can push on this analogy. Performance is like every other cross-cutting concern: important, otherwise un-owned, and a chance to differentiate. Managers have a critical role to play in forging solidarity between engineers, SREs, and other product functions to get the best out of their systems and teams.

Everyone wants to do a great job; it's the manager's role to define what that means.

It takes a village to keep my writing out of the ditch, so my deepest thanks go to Annie Sullivan, Jamund Ferguson, Andy Tuba, Barry Pollard, Bruce Lawson, Tanner Hodges, Joe Liccini, Amiya Gupta, Dan Shappir, Cheney Tsai, and Tim Kadlec for their invaluable comments and corrections on drafts of this post.


  1. High-functioning teams can succeed with any stack, but they will choose not to. Good craftsmen don't blame their tools, nor do they carry deficient implementations.

    Per Kellan Elliot-McCrea's classic "Questions for new technology", this means that high-functioning teams will not be on the shiniest stack. Team choices that correlate strongly with hyped solutions are a warning sign, not an asset. And while "outdated" systems are unattractive, they say very little about the quality of the product or the team.

    Reading this wrong is a sure tell of immature engineers and managers, whatever their title. ↩︎

  2. An early confounding factor for teams trying to remediate performance issues is that user intent matters a great deal, and thus the value of performance will differ based on context. Users who have invested a lot of context with a service will be less likely to bounce based on bad performance than those who are "just browsing". For example, a user who has gotten to the end of a checkout flow or is using a government-mandated system may feel they have no choice. This isn't a brand or service success case (failing to create access is always a failure), but when teams experience different amounts of elasticity in demand vs. performance, it's always worth trying to understand the user's context and intent.

    Users that "succeed" but have a bad time aren't assets for a brand or service; they're likely to be ambassadors for any other way to accomplish their tasks. That's not great, long-term, for a team or for their users. ↩︎

  3. Some prior art was brought to my attention by people who reviewed earlier drafts of this post; notably this 2021 post by the Splunk team and the following tweet by the NCC Group from 2016 (as well as a related PowerPoint presentation):

    Where are you on the #webperf maturity model? ow.ly/miAi3020A9G #perfmatters

    Image from Tweet

    It's comforting that we have all independently formulated roughly similar framing. People in the performance community are continually learning from each other, and if you don't take my formulation, I hope you'll consider theirs. ↩︎

  4. Something particularly problematic about modern web development is the way it has reduced solidarity between developers, managers, and users. These folks now fundamentally experience the same sites differently, thanks to the shocking over-application of client-side JavaScript to every conceivable problem.

    This creates structural illegibility of budding performance crises in new, uncomfortably exciting ways.

    In the desktop era, developers and upper management would experience sites through a relatively small range of screen sizes and connection conditions. JavaScript was applied in the breach when HTML and CSS couldn't meet a need.[7] Techniques like Progressive Enhancement ensured that the contribution of CPU performance to the distribution of experiences was relatively small. When content is predominantly HTML, CSS, and images, browsers are able to accelerate processing across many cores and benefit from the ability to incrementally present the results.

    By contrast, JavaScript-delivered UI strips the browser of its ability to meaningfully reorder and slice up work so that it prioritises responsiveness and smooth animations. JavaScript is the fuck it, we'll do it live way to construct UI, and stresses the relative performance of a single core more than competing approaches. Because JavaScript is, byte for byte, the most expensive thing you can ask a browser to process, this stacks the difficulty involved in doing a good job on performance. JavaScript-driven UI is inherently working with a smaller margin for error, and that means today's de facto approach of using JavaScript for roughly everything leaves teams with much less headroom.

    Add this change in default architecture to the widening gap between the high end (where all developers and managers live) and the median user, and it's easy to understand how perfectly mistimed the JavaScript community's ascendance has been. Not since the promise of client-side Java has the hype cycle around technology adoption been more out of step with average usability.

    Why has it gone this badly?

    In part because of the privilege bubble. When content was mainly markup, performance problems were experienced more evenly. The speed of a client device isn't the limiting site-speed factor in an HTML-first world. When database speed or server capacity is the biggest variable, issues affect managers and executives at the same rate they impact end users.

    When the speed of a device dominates, wealth correlates heavily with performance. This is why server issues reliably get fixed, but JavaScript bloat has continued unabated for a decade. Rich users haven't borne the brunt of these architectural shifts, allowing bad choices to fly under the radar much longer, which, in turn, increases the likelihood of expensive remediation incidents.

    Ambush by JavaScript is a bad time, and when managers and execs only live in the privilege bubble, it's users and teams who suffer most. ↩︎ ↩︎

  5. Managers may fear that by telling everyone how strategic and important performance has become to them, their competitors will wise up and begin to out-execute on the same dimension.

    This almost never happens, and the risks are low. Why? Because, as this post exhaustively details, the problems that prevent the competition from achieving high-functioning performance are not strictly technical. They cannot — and more importantly, will not — adopt tools and techniques you evangelise because it is highly unlikely that they are at a maturity level that would allow them to benefit. In many cases, adding another tool to the list for a Level 1-3 team to consider can even slow down and confound them.

    Strategic performance is hard to beat because it is hard to construct at a social level. ↩︎

  6. Some hires or transfers into Level 5 teams will not easily take to shared performance values and training.

    Managers should anticipate pushback from these quarters and learn to re-assert the shared cultural norms that are critical to success.

    There's precious little space in a Level 5 team for résumé-oriented development because a focus on the user has evacuated the intellectual room that hot air once filled. Thankfully, this can mostly be avoided through education, support, and clear promotion criteria that align to the organisation's evolved way of working.

    Nearly everyone can be taught, and great managers will be on the lookout to find folks who need more support. ↩︎

  7. Your narrator built JavaScript frameworks in the desktop era; it was a lonely time compared to the clogged market for JavaScript tooling today. The complexity of what we were developing for was higher than nearly every app I see today; think GIS systems, full PIM (e.g., email, calendar, contacts, etc.) apps, complex rich text editing, business apps dealing with hundreds of megabytes worth of normalised data in infinite grids, and BI visualisations.

    When the current crop of JavaScript bros tells you they need increased complexity because business expectations are higher now, know that they are absolutely full of it. The mark has barely moved in most experiences. The complexity of apps is not much different, but the assumed complexity of solutions is. That experiences haven't improved for most users is a shocking indictment of the prevailing culture. ↩︎

Cache and Prizes

If you work on a browser, you will often hear remarks like, Why don't you just put [popular framework] in the browser?

This is a good question — or at least it illuminates how browser teams think about tradeoffs. Spoiler: it's gnarly.

Before we get into it, let's make the subtext of the proposal explicit:

  • Libraries provided this way will be as secure and privacy-preserving as every other browser-provided API.
  • Browsers will cache tools popular among vocal, leading-edge developers.
  • There's plenty of space for caching the most popular frameworks.
  • Developers won't need to do work to realise a benefit.

None of this holds.

The best available proxy data also suggests that shared caches would have a minimal positive effect on performance. So, it's an idea that probably won't work the way anyone wants it to, likely can't do much good, and might do a lot of harm.[1]

Understanding why requires more context, and this post is an attempt to capture the considerations I've been outlining on whiteboards for more than a decade.

Trust Falls

Every technical challenge in this design space is paired with an even more daunting governance problem. Like other successful platforms, the web operates on a finely tuned understanding of risk and trust, and deprecations are particularly explosive.[2] As we will see, pressure to remove some libraries to make space for others will be intense.

That's the first governance challenge: who will adjudicate removals? Browser vendors? And if so, under what rules? The security implications alone mean that whoever manages removals will need to be quick, fair, and accurate. Thar be dragons.

Removal also creates a performance cliff for content that assumes cache availability. Vendors have a powerful incentive to "not break the web". Since cache sizes will be limited (more on that in a second), the predictable outcome will be slower library innovation. Privileging a set of legacy frameworks will create disincentives to adopt modern platform features and reduce the potential value of modern libraries. Introducing these incentives would be an obvious error for a team building a platform.

Entropic Forces

An unstated, but ironclad, requirement of shared caches is that they are uniform, complete, and common.

Browsers now understand the classic shared HTTP cache behaviour as a privacy bug.

Cache timing is a fingerprinting vector that is fixed via cache partitioning by origin. In the modern cache-partitioned world, if a user visits alice.com, which fetches https://example.com/cat.gif, then visits bob.com, which displays the same image, it will be requested again. Partitioning by origin ensures that bob.com can't observe any details about that user's browsing history. De-duplication can prevent multiple copies on disk, but we're out of luck on reducing network costs.

A shared library cache that isn't a fingerprinting vector must work like one big zip file: all or nothing. If a resource is in that bundle — and enough users have the identical bundle — using a resource from it on both alice.com and bob.com won't leak information about the user's browsing history.

Suppose a user has only downloaded part of the cache. A browser couldn't use resources from it, lest missing resources (leaked through a timing side channel) uniquely identify the user. These privacy requirements put challenging constraints on the design and practical effectiveness of a shared cache.
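The partitioning scheme described above can be sketched as a cache keyed on the pair of top-level site and resource URL, rather than the URL alone. This is a minimal illustration; real browser cache keys include more dimensions:

```javascript
// Minimal sketch of a partitioned (double-keyed) HTTP cache.
// Keying entries on the top-level site means bob.com cannot learn
// whether alice.com has already fetched a given resource.
class PartitionedCache {
  constructor() {
    this.store = new Map();
  }
  key(topLevelSite, url) {
    return `${topLevelSite}|${url}`;
  }
  has(topLevelSite, url) {
    return this.store.has(this.key(topLevelSite, url));
  }
  put(topLevelSite, url, body) {
    this.store.set(this.key(topLevelSite, url), body);
  }
}

const cache = new PartitionedCache();
cache.put("https://alice.com", "https://example.com/cat.gif", "<bytes>");

// Same URL, different top-level site: a miss, so the network request repeats.
console.log(cache.has("https://alice.com", "https://example.com/cat.gif")); // true
console.log(cache.has("https://bob.com", "https://example.com/cat.gif"));   // false
```

A de-duplicating disk store can sit behind this keying to avoid multiple on-disk copies, but as noted above, the repeat network fetch is unavoidable.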

Variations on a Theme

But the completeness constraint is far from our only design challenge. When somebody asks, Why don't you just put [popular framework] in the browser?, the thoughtful browser engineer's first response is often which versions?

This is not a dodge.

The question of precaching JavaScript libraries has been around since the Prototype days. It's such a perennial idea that vendors have looked into the real-world distribution of library use. One available data source is Google's hosted JS libraries service.

Last we looked, jQuery (still the most popular JS tool by a large margin) showed usage almost evenly split between five or six leading versions, with a long tail that slowly tapers. This reconfirms observations from the HTTP Archive, published by Steve Souders in 2013.

TL;DR?: Many variants of the same framework will occur in the cache's limited space. Because a few "unpopular" legacy tools are large, heavily used, and exhibit flat distribution of use among their top versions, the breadth of a shared cache will be much smaller than folks anticipate.

The web is not centrally managed, and sites depend on many different versions of many different libraries. Browsers are unable to rely on semantic versioning or trust file names because returned resources must contain the exact code that developers request. If browsers provide a similar-but-slightly-different file, things break in mysterious ways, and "don't break the web" is Job #1 for browsers.

Plausible designs must avoid keying on URLs or filenames. Another sort of opt-in is required because:

  • It's impossible to capture most use of a library with a small list of URLs.
  • Sites will need to continue to serve fallback copies of their dependencies.[3]
  • File names are not trustworthy indicators of file contents.

Subresource Integrity (SRI) to the rescue! SRI lets developers add a hash of the resource they're expecting as a security precaution, but we could re-use these assertions as a safe cache key. Sadly, relatively few sites deploy SRI today, with growth driven by a single source (Shopify), meaning developers aren't generally adopting it for their first-party dependencies.

It turns out this idea has been circulating for a while. Late in the drafting of this post, it was pointed out to me by Frederik Braun that Brad Hill had considered the downsides of a site-populated SRI addressable cache back in 2016. Since that time, SRI adoption has yet to reach a point where it can meaningfully impact cache hit rates. Without pervasive SRI, it's difficult to justify the potential download size hit of a shared cache, not to mention the ongoing required updates.

The size of the proposed cache matters to vendors because browser binary size increases negatively impact adoption. The bigger the download, the less likely users are to switch browser. The graphs could not be more explicit: downloads and installs fail as binaries grow.

Browser teams aggressively track and manage browser download size, and scarce engineering resources are spent to develop increasingly exotic mechanisms to distribute only the code a user needs. They even design custom compression algorithms to make incremental patch downloads smaller. That's how much wire size matters. Shared caches must make sense within this value system.

Back of the Napkin

So what's a reasonable ballpark for a maximum shared cache? Consider:

The last point is crucial because it will cement the cache's contents for years, creating yet another governance quagmire. The space available for cache growth will be what's left for the marginal user after accounting for increases in the browser binary. Most years, that will be nothing. Browser engineers won't give JS bundles breathing room they could use to win users.

Given these constraints, it's impossible to imagine a download budget larger than 20 MiB for a cache. It would also be optional (not bundled with the browser binary) and perhaps fetched on the first run (if resources are available). Having worked on browsers for more than a decade, I think it would be shocking if a browser team agreed to more than 10 MiB for a feature like this, especially if it won't dramatically speed up the majority of existing websites. That is unlikely given the need for SRI annotations or the unbecoming practice of speeding up high-traffic sites, but not others. These considerations put tremendous downward pressure on the prospective size budget for a cache.

20 MiB over the wire expands to no more than 100 MiB of JS with aggressive Brotli compression. A more likely 5 MiB (or less) wire size budget provides something like ~25 MiB of on-disk size. Assuming no source maps, this may seem like a lot, but recall that we need to include many versions of popular libraries. A dozen versions of jQuery, jQuery UI, and Moment (all very common) burn through this space in a hurry.
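The arithmetic above is simple enough to sanity-check; note that the 5× expansion factor for Brotli-compressed script text is this post's working assumption, not a measured constant:

```javascript
// Back-of-the-napkin budget arithmetic: wire size times an assumed
// Brotli expansion factor gives rough on-disk JS capacity in MiB.
function onDiskBudget(wireMiB, expansionFactor = 5) {
  return wireMiB * expansionFactor;
}

console.log(onDiskBudget(20)); // 100 (MiB): the optimistic ceiling
console.log(onDiskBudget(5));  // 25 (MiB): the more likely budget
```

Against a 25 MiB budget, a dozen versions of a few-hundred-KiB library family consumes a meaningful fraction before any other tool is considered.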

One challenge for fair inclusion is the tension between site-count-weighting and traffic-weighting. Crawler-based tools (like the HTTP Archive's Almanac and BuiltWith) give a distorted picture. A script on a massive site can account for more traffic than many thousands of long-tail sites. Which way should the policy go? Should a cache favour code that occurs in many sites (the long tail), or the code that has the most potential to improve the median page load?

Thar be more dragons.

Today's Common Libraries Will Be Underrepresented

Anyone who has worked on a modern JavaScript front end is familiar with the terrifyingly long toolchains that dominate the landscape today. Putting aside my low opinion of the high costs and slow results, these tools all share a crucial feature: code motion.

Modern toolchains ensure libraries are transformed and don't resemble the files that arrive on disk from npm. From simple concatenation to the most sophisticated tree-shaking, the days of downloading library.min.js and SFTP-ing one's dependencies to the server are gone. A side effect of this change has been a reduction in the commonality of site artifacts. Even if many sites depend on identical versions of a library, their output will mix that code with bits from other tools in ways that defeat matching hashes.

Advocates for caches suggest this is solvable, but it is not — at least in any reasonable timeframe or in a way that is compatible with other pressures outlined here. Frameworks built in an era that presumes transpilers and npm may never qualify for inclusion in a viable shared cache.

Common Sense

Because of privacy concerns, caches will be disabled for a non-trivial number of users. Many folks won't have enough disk space to include a uniform and complete version, and the cache file will be garbage collected under disk pressure for others.

Users at the "tail" tend not to get updates to their software as often as users in the "head" and "torso" of the distribution. Multiple factors contribute, including the pain of updates on slow networks and devices, systems that are out of disk space, and harmful interactions with AV software. One upshot is that browsers will need to delete or disable shared caches for users in this state so they don't become fingerprinting vectors. A simple policy would be to remove caches from service after a particular date, creating a performance cliff that disproportionately harms those on the slowest devices.

First, Do No Harm

Code distributed to every user is a tax on end-user resources, and because caches must be uniform, complete, and common, they will impose a regressive tax. The most enfranchised users with the most high-performance devices and networks will feel their initial (and ongoing) download impacted the least. In contrast, users on the margins will pay a relatively higher price for any expansions of the cache over time and any differential updates to it.

Induced demand is real, and it widens inequality, rather than shrinking it.

If a shared cache is to do good, it must do good for those in the tail and on the margins, not make things worse for them.

For all of the JS ecosystem's assertions that modern systems are fast enough to support the additional overhead of expensive parallel data structures, slow diffing algorithms, and unbounded abstractions, nearly all computing growth has occurred over the past decade at the low end. For most of today's users, the JS community's shared assumptions about compute and bandwidth abundance have been consistently wrong.

The dream of a global, shared, user-subsidised cache springs from the same mistaken analysis about the whys and wherefores of client-side computing. Perhaps one's data centre is post-scarcity, and maybe the client side will be too, someday. But that day is not today. And it won't be this year either.

Cache Back

Speaking of governance, consider that a shared cache would suddenly create disincentives to adopt anything but the last generation of "winner" libraries. Instead of fostering innovation, the deck would be stacked against new and innovative tools that best use the underlying platform. This is a double disincentive. Developers using JS libraries would suffer an additional real-world cost whenever they pick a more modern option, and browser vendors would feel less pressure to integrate new features into the platform.

As a strategy to improve performance for users, significant questions remain unanswered. Meanwhile, such a cache poses a clear and present danger to the cause of platform progress. The only winners are the makers of soon-to-be obsolete legacy frameworks. No wonder they're asking for it.

Own Goals

At this point, it seems helpful to step back and consider that the question of resource caching may have different constituencies with different needs:

  • Framework Authors may be proposing caching to reduce the costs to their sites or users of their libraries.
  • End Users may want better caching of resources to speed up browsing.

For self-evident security and privacy reasons, browser vendors will be the ones to define the eventual contents of a shared cache and distribute it. Therefore, it will be browser imperatives that drive its payload. This will lead many to be surprised and disappointed at the contents of a fair, shared global cache.

First, to do best by users, the cache will likely be skewed away from the tools of engaged developers building new projects on the latest framework, because the latest versions of libraries will lack the deployed base to qualify for inclusion. Expect legacy versions of jQuery and Prototype to find a much more established place in the cache than anything popular in "State Of" surveys.

Next, because it will be browser vendors that manage the distribution of the libraries, they are on the hook for creating and distributing derivative works. What does this mean? In short, a copyright and patent minefield. Consider the scripts most likely to qualify based on use: code for embedding media players, analytics scripts, and ad network bootstraps. These aren't the scripts that most people think of when they propose that "[b]rowsers should ship with the top 1 GiB of npm built-in", but they are the sorts of code that will have the highest cache hit ratios.

Plus, they're also copyrighted and unlikely to be open-source, creating new headaches.

Browsers build platform features through web standards, not because it's hard to agree on technical details (although it is), but because vendors need the legal protections that liberally licensed standards provide.[4] These protections are the only things standing between hordes of lawyers and the wallets of folks who build on the web platform. Even OSS isn't enough to guarantee the sorts of patent license and pooling that Standards Development Organisations (SDOs) like the W3C and IETF provide.

A reasonable response would be to have caches constrain themselves to non-copyleft OSS, rendering them less effective.

And, so we hit bedrock. If the goal isn't to make things better for users at the margins, why bother? Serving only the enfranchised isn't what the web is about. Proposals that externalise governance and administrative costs are also unattractive to browser makers. Without a credible plan for deprecation and removal, why wade into this space? It's an obvious trap.

"That's Just Standardisation With Extra Steps!"

Another way to make libraries smaller is to upgrade the platform. New platform features usually let developers remove code, which can reduce costs. This is in the back of browser engineers' minds when asked, "Why don't you just put Library X in the browser?".

A workable cache proposal faces the same problems as standards development:

  • Licensing limitations
  • Challenging deprecations
  • Opt-in to benefit
  • Limits on what can be added
  • Difficulties agreeing about what to include

Why build this new tool when the existing ones are likely to be as effective, if not more so?

Compression is a helpful lens for thinking about caches and platform APIs. The things that platforms integrate into the de facto computing base are nouns and verbs. As terms become common, they no longer need to be explained every time.

A request to embed libraries into the web's shared computing base is a desire to increase their compression ratio.

Shared precaching is an inefficient way to accomplish the goal. If we can identify common patterns being downloaded frequently, we can modify the platform to include standardised versions. Either way, developers need to account for situations when the native implementations aren't available (polyfilling).

Given that a shared cache system will look nothing like the dreams of those who propose them, it's helpful to instead ask why browser vendors are moving so slowly to improve the DOM, CSS, data idioms, and many other core areas of the platform in response to the needs expressed by libraries.

Thoughtful browser engineers are right to push back on shared caches, but the slow pace of progress in the DOM (at the hands of Apple's under-funding of the Safari team) has been a collective disaster. If we can't have unicorns, we should at least be getting faster horses.

Is There a Version That Could Work?

Perhaps, but it's unlikely to resemble anything that web developers want. First, let's re-stipulate the constraints previously outlined:

  • Sites will need to opt-in.
  • Caches can't be both large and fair.
  • Caches will not rev or grow quickly.
  • Caches will mainly comprise different versions of "unpopular", legacy libraries.

To be maximally effective, we might want a cache to trigger even for sites that haven't opted in via SRI. A bloom filter could stand in for explicit SRI annotations on high-traffic files, but this presents additional governance and operational challenges.

Only resources served as public and immutable (as observed by a trusted crawler) can possibly be auto-substituted. A browser that is cavalier enough to attempt to auto-substitute under other conditions deserves all of the predictable security vulnerabilities it will create.

An auto-substitution URL list will take space, and must also be uniform, complete, and common for privacy reasons. This means that the list itself is competing for space with libraries. This creates real favouritism challenges.

A cache designed to do the most good will need mechanisms to work against induced demand. Many policies could achieve this, but the simplest might be for vendors to disable caches for developers and users on high-end machines. We might also imagine adding scaled randomness to cache hits: the faster the box, the more often a cache hit will silently fail.

Such policies won't help users stuck at the tail of the distribution, but might add a pinch of balance to a system that could further slow progress on the web platform.

A workable cache will also need a new governance body within an existing OSS project or SDO. The composition of such a working group will be fraught. Rules that ensure representation by web developers (not framework authors) and browser vendors can be postulated, but governance will remain challenging. How the interests of security researchers and users on the margins are represented remains an open problem.

So, could we add a cache? If all the challenges and constraints outlined above are satisfied, maybe. But it's not where I'd recommend anyone who wants to drive the web forward invest their time — particularly if they don't relish chairing a new, highly contentious working group.

Thanks to Eric Lawrence, Laurie Voss, Fred K. Schott, Frederik Braun, and Addy Osmani for their insightful comments on drafts of this post.


  1. My assessment of the potential upside of this sort of cache is generally negative, but in the interest of fairness, I should outline some ways in which pre-caching could speed scripts up:

    • Scripts downloaded this way can be bytecode cached on the device (at the cost of some CPU burn), but this will put downward pressure on both the size of the cache (as bytecode takes 4-8× the disk space of script text) and on cache growth (time spent optimizing potentially unused scripts is a waste).
    • The download-time benefits scale with the age of the script-loading technique. For example, using a script from a third-party CDN requires DNS, TCP, TLS, and HTTP handshaking to a new server, all of which can be shortcut. The oldest sites are the most likely to use this pattern, but are also the most likely to be unmaintained.
    ↩︎
  2. Case in point: it was news last year when the Blink API owners[5] approved a plan to deprecate and remove the long-standing alert(), confirm(), and prompt() methods from within cross-origin <iframe>s. Not all <iframe>s would be affected, and top-level documents would continue to function normally. The proposal was narrowly tailored to address user abuse while reducing collateral damage.

    The change was also shepherded with care and caution. The idea was floated in 2017, aired in concrete form for more than a year, and our friends at Mozilla spoke warmly of it. WebKit even implemented the change. This deprecation built broad consensus and was cautiously introduced.

    It blew up anyway.[6]

    Overnight, influential web developers — including voices that regularly dismiss the prudential concerns of platform engineers — became experts in platform evolution, security UX, nested event loops in multi-process applications, Reverse Origin Trials, histograms, and Chromium's metrics. More helpfully, collaboration with affected enterprise sites is improving the level of collective understanding about the risks. Changes are now on hold until the project regains confidence through this additional data collection.

    This episode and others like it reveal that developers expect platforms to be conservative. Their trust in browsers comes from the expectation that the surface they program to will not change, particularly regarding existing and deployed code.

    And these folks weren't wrong. It is the responsibility of platform stewards to maintain stability. There's even a helpful market incentive attached: browsers that don't render all sites don't have many users. The fast way to lose users is to break sites, and in a competitive market, that means losing share. The compounding effect is for platform maintainers to develop a (sometimes unhelpful) belief that moving glacially is good per se.

    A more durable lesson is that, like a diamond, anything added to the developer-accessible surface of a successful platform may not be valuable — but it is forever.[7] ↩︎

  3. With the advent of H/2 connection re-use and partitioned caches, hosting third-party libraries has become an anti-pattern. It's always faster to host files from your server, which means a shared cache shouldn't encourage users to centralise on standard URLs for hosted libraries, lest they make the performance of the web even worse when the cache is unavailable. ↩︎

  4. For an accessible introduction to the necessity of SDOs and recent history of modern technical standard development, I highly recommend Open Standards and the Digital Age by Andrew L. Russell (no relation). ↩︎

  5. Your humble narrator serves as a Blink API OWNER and deserves his share of the blame for the too-hasty deprecation of alert(), confirm(), and prompt().

    In Blink, the buck stops with us, not the folks proposing changes, and this was a case where we should have known that our lingering "enterprise blindness" [6:1] in the numbers merited even more caution, despite the extraordinary care taken by the team. ↩︎

  6. Responsible browser projects used to shoot from the hip when removing features, which often led to them never doing it due to the unpredictability of widespread site breakage.

    Thankfully, this is no longer the case, thanks to the introduction of anonymised feature instrumentation and metrics. These data sets are critical to modern browser teams, powering everything from global views of feature use to site-level performance reporting and efforts like Core Web Vitals.

    One persistent problem has been what I've come to think of as "enterprise blindness".

    In the typical consumer scenario, users are prompted to opt-in to metrics and crash reporting on the first run. Even if only a relatively small set of users participate, the Law of Large Numbers ensures our understanding of these histograms is representative across the billions of pages out there.

    Enterprises, meanwhile, tend to roll out software for their users and simultaneously push down policy configurations to machines that disable metrics reporting. The result is that enterprise users and the web applications they frequent are dramatically under-reported in the public stats.

    Given the data at hand, the team deprecating cross-origin <iframe> prompts was being responsible. It's just that the data had a blind spot, one whose size has been maddeningly difficult to quantify. ↩︎ ↩︎

  7. Forever, give or take half a decade. ↩︎

Towards a Unified Theory of Web Performance

Note: This post first ran as part of Sergey Chernyshev and Stoyan Stefanov's indispensable annual series. It's being reposted here for completeness, but if you care about web performance, make sure to check out the whole series and get subscribed to their RSS feed to avoid missing any of next year's posts.

In a recent Perf Planet Advent Calendar post, Tanner Hodges asked for what many folks who work in the space would like for the holidays: a unified theory of web performance.

I propose four key ingredients:

  • Definition: What is "performance" beyond page speed? What, in particular, is "web performance"?
  • Purpose: What is web performance trying to accomplish as a discipline? What are its goals?
  • Principles: What fundamental truths are guiding the discipline and moving it forward?
  • Practice: What does it look like to work on web performance? How do we do it?

This is a tall order!

A baseline theory, doctrine, and practicum represent months of work. While I don't have that sort of space and time at the moment, the web performance community continues to produce incredible training materials, and I trust we'll be able to connect theory to practice once we roughly agree on what web performance is and what it's for.

This Is for Everyone

Tim Berners-Lee tweets that 'This is for everyone' at the 2012 Olympic Games opening ceremony using the NeXT computer he used to build the first browser and web server.

Embedded in the term "web performance" is the web, and the web is for humans.

That assertion might start an argument in the wrong crowd, but 30+ years into our journey, attempts to promote a different first-order constituency are considered failures, as the Core Platform Loop predicts. The web ecosystem grows or contracts with its ability to reach people and meet their needs with high safety and low friction.

Taking "this is for everyone" seriously, aspirational goals for web performance emerge. To the marginal user, performance is the difference between access and exclusion.

The mission of web performance is to expand access to information and services.

Page Load Isn't Special

It may seem that web performance comprises two disciplines:

  1. Optimising page load
  2. Optimising post-load interactions

The tools of performance investigators in each discipline overlap to some degree but generally feel like separate concerns. The metrics that we report against implicitly cleave these into different "camps", leaving us thinking about pre- and post-load as distinct universes.

But what if they aren't?

Consider the humble webmail client.

Here are two renderings of the same Gmail inbox in different architectural styles: one based on Ajax, and the other on "basic" HTML:

The Ajax version of Gmail with two messages
The Ajax version of Gmail loads 4.8MiB of resources, including 3.8MiB of JavaScript to load an inbox containing two messages.

The same inbox in Gmail's 'simple HTML' mode
The 'basic' HTML version of Gmail loads in 23KiB, including 1.3KiB of JavaScript.

The difference in weight between the two architectures is interesting, but what we should focus on is the per-interaction loop. Typing gmail.com in the address bar, hitting Enter, and becoming ready to handle the next input is effectively the same interaction in both versions. One of these is better, and it isn't the experience of the "modern" style.

These steps inform a general description of the interaction loop:

  1. The system is ready to receive input.
  2. Input is received and processed.
  3. Progress indicators are displayed.
  4. Work starts; progress indicators update.
  5. Work completes; output is displayed.
  6. The system is ready to receive input.
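The steps above can be sketched as code. Nothing here is a real browser API; the callables merely stand in for UI and network machinery, so the sequence of observable events maps onto the numbered steps:

```python
# A sketch of one pass through the interaction loop above; the callables
# are illustrative stand-ins, not real browser APIs.
def interaction_loop(work, show_progress, render):
    # Steps 1 & 2: the system was ready and has received/processed input.
    show_progress("working")   # Step 3: display a progress indicator.
    result = work()            # Step 4: work runs; indicators could update.
    render(result)             # Step 5: work completes; output is displayed.
    # Step 6: returning leaves the system ready for the next input.

# Usage: record the observable sequence for a fake "fetch inbox" action.
events = []
interaction_loop(lambda: "inbox", events.append,
                 lambda r: events.append("rendered " + r))
# events == ["working", "rendered inbox"]
```

Both Gmail architectures execute this same loop; they differ only in where the time inside it is spent.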

Tradeoffs In Depth

Consider the next step of our journey, opening the first message. The Ajax version leaves most of the UI in place, whereas the HTML version performs a full page reload. Regardless of architecture, Gmail needs to send an HTTP request to the server and update some HTML when the server replies. The chief effect of the architectural difference is to shift the distribution of latency within the loop.

Some folks frame performance as a competition between Team Local (steps 2 & 3) and Team Server (steps 1 & 4). Today's web architecture debates (e.g. SPA vs. MPA) embody this tension.

Team Local values heap state because updating a few kilobytes of state in memory can, in theory, involve less work to return to interactivity (step 5) while improving the experience of steps 2 and 3.

Intuitively, modifying a DOM subtree should generate less CPU load and need less network traffic than tearing down the entire contents of a document, asking the server to compose a new one, and then parsing/rendering it along with all of its subresources. Successive HTML documents tend to be highly repetitive, after all, with headers, footers, and shared elements continually re-created from source when navigating between pages.

But is this intuitive understanding correct? And what about the other benefits of avoiding full page refreshes, like the ability to animate smoothly between states?

Herein lies our collective anxiety about front-end architectures: traversing networks is always fraught, and so we want to avoid it being jarring. However, the costs to deliver client-side logic that can cushion the experience from the network latency remain stubbornly high. Improving latency for one scenario often degrades it for another. Despite partisan protests, there are no silver bullets; only complex tradeoffs that must be grounded in real-world contexts — in other words, engineering.

As a community, we aren't very good at naming or talking about the distributional effects of these impacts. Performance engineers have a fluency in histograms and percentiles that the broader engineering community could benefit from as a lens for thinking about the impacts of design choices.

Given the last decade of growth in JavaScript payloads, it's worth resetting our foundational understanding of these relative costs. Here, for instance, are the network costs of transitioning from the inbox view of Gmail to a message:

Displaying the first message requires 82KiB of network traffic in the Ajax version of Gmail, half of which are images embedded in the message.

Displaying a message in the 'basic' HTML version requires a full page refresh.

Despite fully reloading the page, the HTML version of Gmail consumes fewer network resources (~70KiB) and takes less overall time to return to interaction.

Objections to the comparison are legion.

First, not all interactions within an email client modify such a large portion of the document. Some UI actions may be lighter in the Ajax version, especially if they operate exclusively in the client-side state. Second, while avoiding a full-page refresh, steps 2, 3, and 4 in our interaction loop can be communicated with greater confidence and in a less jarring way. Lastly, by avoiding an entire back-and-forth with the server for all UI states, it's possible to add complex features — like chat and keyboard accelerators — in a way that doesn't incur loss of context or focus.

The deeper an app's sessions and the larger the number of "fiddly" interactions a user may perform, the more attractive a large up-front bundle can be for hiding future latency.

This insight gives rise to a second foundational goal for web performance:

We expand access by reducing latency and variance across all interactions in a user's session to more reliably return the system to an interactive state.

For sites with low interaction depths and short sessions, this implies that web performance engineering might remove as much JavaScript and client-side logic as possible. For other, richer apps, performance engineers might add precisely this sort of payload to reduce session-depth-amortised latency and variance. The tradeoff is contextual and informed by data and business goals.
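The amortisation argument can be made concrete with a toy model (all numbers below are hypothetical, chosen only to illustrate the crossover): a heavy up-front bundle pays off only once its per-interaction savings, multiplied by session depth, exceed the extra load cost.

```python
# Toy model of session-depth-amortised latency (all numbers hypothetical).
def amortised_latency_ms(upfront_ms, per_interaction_ms, interactions):
    """Average latency per interaction, spreading the one-time
    up-front cost across every interaction in the session."""
    return upfront_ms / interactions + per_interaction_ms

# Hypothetical: a 3000 ms bundle that makes each interaction cost 50 ms,
# vs. a 300 ms initial load where each interaction costs 250 ms.
shallow_heavy = amortised_latency_ms(3000, 50, 3)     # 1050.0 ms/interaction
shallow_light = amortised_latency_ms(300, 250, 3)     # 350.0 ms/interaction
deep_heavy = amortised_latency_ms(3000, 50, 100)      # 80.0 ms/interaction
deep_light = amortised_latency_ms(300, 250, 100)      # 253.0 ms/interaction
```

At a session depth of three, the light architecture wins handily; at a hundred interactions, the heavy bundle comes out ahead. The crossover point is an empirical question for each product.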

No silver bullets, only engineering.

Medians Don't Matter

Not all improvements are equal. To understand impacts, we must learn to think in terms of distributions.

Our goal is to minimise latency and variance in the interactivity loop... but for whom? Going back to our first principle, we understand that performance is the predicate to access. This points us in the right direction. Performance engineers across the computing industry have learned the hard way that the sneaky, customer-impactful latency is waaaaay out in the tail of our distributions. Many teams have reported making performance better at the tail only to see their numbers get worse upon shipping improvements. Why? Fewer bouncing users. That is, more users who get far enough into the experience for the system to boot up in order to report that things are slow (previously, those users wouldn't even get that far).

Tail latency is paramount. Doing better for users at the median might not have a big impact on users one or two sigmas out, whereas improving latency and variance for users at the 75th percentile ("P75") and higher tends to make things better for everyone.

As web performance engineers, we work to improve the tail of the distribution (P75+) because that is how we make systems accessible, reliable, and equitable.
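A toy sample (numbers hypothetical) shows why distributions, not averages, are the right lens: a healthy-looking median can coexist with a painful P75 and a catastrophic tail.

```python
# Nearest-rank percentile over a latency sample (illustrative only).
def percentile(samples, p):
    """Smallest value such that at least p% of samples are at or below it."""
    ranked = sorted(samples)
    k = -(-(len(ranked) * p) // 100) - 1   # ceil(n * p / 100) - 1
    return ranked[k]

# Hypothetical time-to-interactive samples, in milliseconds.
tti_ms = [100, 110, 120, 130, 400, 900, 1500, 3000]
p50 = percentile(tti_ms, 50)   # 130: the median looks fine
p75 = percentile(tti_ms, 75)   # 900: a quarter of users wait ~1s or more
p95 = percentile(tti_ms, 95)   # 3000: the tail is where access is lost
```

An optimisation that shaves 20 ms off the fast half of this distribution moves the median and helps no one in the tail; collapsing the 3000 ms outliers changes who can use the product at all.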

A Unified Theory

And so we have the three parts of a uniform mission, or theory, of web performance:

  • The mission of web performance is to expand access to information and services.
  • We expand access by reducing latency and variance across all interactions in a user's session to more reliably return the system to an interactive state.
  • We work to improve the tail of the distribution (P75+) because that is how we make systems accessible, reliable, and equitable.

Perhaps a better writer can find a pithier way to encapsulate these values.

However they're formulated, these principles are the north star of my performance consulting. They explain tensions in architecture and performance tradeoffs. They also focus teams more productively on marginal users, which helps to direct investigations and remediation work. When we focus on getting back to interactive for the least enfranchised, the rest tends to work itself out.

Minimum Standards for iOS Browser Competition

There has been a recent flurry of regulatory, legislative, and courtroom activity regarding mobile OSes and their app stores. One emergent theme is Apple's shocking disregard for the spirit of legal findings it views as adverse to its interests.

Take, for instance, Apple's insistence that it would take "months" to support the addition of links to external payment providers from within apps, never mind that it had months of notice. There is a case to be made that formulating policy and constructing a commissions system takes time, but this is ridiculous.

Just sit with Apple's claim a moment. Cupertino is saying that it will take additional months to allow other people to add links within their own apps.

Or consider South Korea's law, passed last August, that elicited months of stonewalling by Google and Apple until, at last, an exasperated KCC started speaking a language Cupertino understands: fines. Having run the clock for half a year, Apple has started making noises that indicate a willingness to potentially allow alternative payment systems at some point in the future.

Never fear, though; there's no chance that grudging "compliance" will abide the plain-language meaning of the law. Apple is fully committed to predatory delay and sneering, legalistic, inch-measuring conformance. Notably, it has signaled it feels entitled to skim revenues equivalent to its monopoly rents from payments made through third-party processors in both Korea and the Netherlands. If regulators are going to bring this to heel, they will need to start assuming bad faith.

Dutch regulators have been engaged in a narrowly-focused enquiry into the billing practices of dating apps, and they came to remarkably similar conclusions: Apple's practices are flagrantly anti-competitive.

In a malign sort of consistency, Apple responded with the most obtuse, geographically constrained, and difficult-to-use solution possible. Developers would need to submit a separate, country-specific version of their app, only available from the Dutch version of the App Store, and present users with pejorative messaging. Heaven help users attempting to navigate such a mess.

This "solution" was poorly received. Perhaps learning from the KCC's experience, Dutch regulators moved to impose fines more quickly, but perhaps misjudged how little 50 million EUR is to a firm that makes 600× as much in profit per quarter. It certainly hasn't modulated Apple's duplicitous audacity.

Cupertino's latest proposed alternative to its 30% revenue cut will charge developers that use external payment processors a 27% fee which, after the usual 3% credit card processing fee... well, you can do the math.

Regulators in the low countries are rightly incensed, but Apple's disrespect for the rule of law isn't going to be reformed by slowly deployed half-measures, one vertical at a time. Structural change is necessary, and the web could bring that change if it is unshackled from Apple's slipshod browser engine.

A Floor for Functional Browser Choice

With all of this as context, we should seriously consider how companies this shameless will behave if required to facilitate genuine browser choice. What are the policy and technical requirements regulators should set to ensure fairness? How can the lines be drawn so delay and obfuscation aren't used to scuttle capable competitors? How can regulators anticipate and get ahead of brazenly bad-faith actions, not only by Apple, but Google and Facebook as well?

Geographic Games

One oligopolist response has been to change anti-competitive behaviour only within small markets, e.g. making changes only to the "dating" category of apps and only within the Netherlands. Apple and Google calculate that they can avoid the Brussels Effect by making a patchwork set of market-specific changes. A goal of this strategy is to confuse users and make life hard for rebellious developers by tightly drawing "fixes" around the letter of the law in each jurisdiction.

While technically meeting legal requirements, these systems will be so hard to use that residents and businesses blame regulators, rather than store proprietors, for the additional day-to-day inconvenience. Because they're implemented in software, this market well-poisoning is cost-free for Apple. It also buys bad-faith actors months of delay on substantive change while they negotiate with regulators.

Could regulators include language that stipulates how market fairness requirements cannot be met with country-specific versions of apps or capabilities? This quickly hits jurisdictional boundaries, likely triggering years of court appeals. This is undesirable, as delay is the ne'er-do-well's friend.

Regulators generally have scope over commerce within their territorial borders. Multilateral treaty organisations like the WTO have supranational jurisdiction but neither the appetite nor the treaty scope to tackle firm-level competition issues. They focus instead on tariffs and "dumping" practices that privilege one nation's industries over another, as those are the sorts of disputes national laws cannot address.

A More Durable Approach

Effective regulation needs market-opening technologies that function without constant oversight. The lines drawn around undermining this technology should be so clear, and the consequences for stepping over them so painful, that even Apple, Google, and Facebook dare not go near them.

When regulators adopt similar (if not identical) regulations they increase the costs to bad actors of country-specific gamesmanship. Regulators that "harmonise" their interventions multiply the chances of compliance, creating a "Brussels of the Willing".

A competitive landscape for web browsers should be part of any compliance framework because:

  • A world with safe, capable web apps provides businesses alternatives to app store gatekeepers.
  • Browser competition (enabled by meaningful choice) has consistently delivered safety and control to users far ahead of operating systems. Compare, for instance, the late addition of mobile OS controls for location tracking, ad blocking, and bluetooth compared with the web's more consent-oriented track record.
  • Web apps put pricing pressure on platform owners, forcing them to innovate rather than extract rents. Many classes of apps trapped in app stores only use standardised APIs that happen to be spelled differently on proprietary OSes. They could be delivered just as well through the web's open, interoperable, and liberally licensed standards, but for gatekeepers denying those features to browsers and scuppering app discoverability on the web.
  • Web applications create portability for developers and businesses, lowering costs and improving access to services. This enhances the bargaining power of small firms relative to platform owners.

For the web to perform these essential market functions on mobile, regulation must disrupt the status quo and facilitate competition from within. This also provides a solution to user safety and security concerns that pervasive sideloading may raise, as browsers are aggressively sandboxed. Meaningful choice, coupled with powerful browsers, can deliver better outcomes.

Table Stakes

Discussion of sideloading and alternative app stores often elides requirements that regulators should put in place to create competition amongst capable browsers. I have previously proposed a set of minimal interventions to ensure meaningful user choice. To restate the broadest points:

  • Platform vendors' own products must respect users' browser choices within applications which are not themselves browsers.
  • Mobile OSs should provide a simple, global way to opt-out of "in-app browsers" across all applications.
  • Developers must be provided with a simple way for their content to opt-out of being loaded via "in-app browsers".
  • System-level "in-app browser" protocols and APIs should allow a user's default browser to render sites by default.

iOS Specifics

Because Apple's iOS is egregiously closed to genuine browser competition, regulators should pay specific attention to the capabilities that vendors porting engines will need. They should also ensure other capabilities are made available by default; the presumption for browser developers must be open access. Apple has shown itself to be a serial bad-faith actor regarding competition and browser choice, so while an enumeration of these features may seem pedantic, it sadly also seems necessary.

Today, Apple's Safari browser enjoys privileged access to certain APIs necessary for any competing browser vendor that wants to match Safari's features. Only Safari can:

  • Construct new sub-processes for sandboxing web content. Competing browsers will need to do the same, and be allowed to define a tighter sandbox policy than Apple's default (as they already do on Windows, macOS, and Android).
  • JIT JavaScript code. For reasons covered extensively last year, there's no legitimate reason to disallow competing browsers from running at full speed.
  • Install Web Apps to the iOS homescreen. Competing browsers must be allowed to match Safari's capability that allows it to install PWAs to the device's homescreen and serve as the runtime for those apps.
  • Integrate with SFSafariViewController. Competing browsers set as the user's default must be allowed to also handle "in-app browsing" via the SFSafariViewController protocol without requiring the user to opt-in.
  • Provide its own networking layer and integrate with Private Relay's closed APIs. Competing browsers must be allowed access to OS-integrated capabilities without being forced to use Apple's slower, less reliable networking stack.

As a general rule, competing browsers must also be allowed access to all private and undocumented APIs that are used by Safari, as well as iOS entitlements granted to other applications.

Regulators must also ensure capabilities are not prohibited or removed from browsers by secret agreements that Cupertino forces developers to sign. Further (and it's a shame this has to be said), Apple must not be allowed to comply with these terms by further neutering Safari's already industry-trailing feature set.

Apple must also be required to allow browsers with alternative engines to be procured directly through its App Store. It is easy to predict a world of country-specific sideloading regulations, with Apple attempting to blunt the impact of competitive browsers by continuing to banish them from their "legit" discovery surface.

Web browsers must also be allowed to implement the Web Payments API without being forced to use Apple Pay as the only back end. Apple must further be enjoined from requiring specific UI treatments that subvert these flows and prejudice users away from open payment systems.

Lastly, Apple must not be allowed to publish new versions of browsers through an arbitrary and capricious "review" process. Regulators must demand that Apple be forced to publish new browser versions and, if it objects to features within them, file a request for regulatory arbitration of the dispute post publication. Apple has long demonstrated it cannot be trusted with the benefit of the doubt in this area, and allowing updates to flow quickly is critical to ensuring users of the web remain safe.

Only within the contours of this sort of regime can ongoing enforcement of negotiated policy proceed in good faith.

A Week to Define the Web for Decades

If you live or do business in the UK or the US, what you do in the next seven days could define the web for decades to come. By filing public comments with UK regulators and US legislators this week, you can change the course of mobile computing more than at any other time in the past decade. Read on for why this moment matters and how to seize the day.

By way of background, regulators in much of the anglophone world (and beyond) spent much of 2021 investigating the state of the mobile ecosystem.

This is important because Apple has succeeded in neutering the web's potential through brazenly anti-competitive practices and obfuscation. Facebook and Google, meanwhile, have undermined user agency in browser choice for fun and profit.

I kid.

It was all for profit:

Public statements from leading authorities who have looked into this behaviour leave a distinct impression of being unimpressed. Here's the unflappably measured UK Competition and Markets Authority (CMA) weighing in last month:

Apple and Google have developed a vice-like grip over how we use mobile phones and we're concerned that it's causing millions of people across the UK to lose out.

The CMA's 400+ page interim report (plus an additional ~200 pages of detailed appendices) didn't make the waves it deserved when it was released near the winter holidays.[1] That's a shame as the report is by turns scathing and detailed, particularly in its proposed remedies, all of which would have a profoundly positive impact on you, me, and anyone else who uses the mobile web:

The report sets out a range of actions that could be taken to address these issues, including:

  • Making it easier for users to switch between iOS and Android phones when they want to replace their device without losing functionality or data.
  • Making it easier to install apps through methods other than the App Store or Play Store, including so-called "web apps".
  • Enabling all apps to give users a choice of how they pay in-app for things like game credits or subscriptions, rather than being tied to Apple's and Google's payment systems.
  • Making it easier for users to choose alternatives to Apple and Google for services like browsers, in particular by making sure they can easily set which browser they have as default.

This is shockingly blunt language from a regulatory body:

Our market study has provisionally found that:

❌ People aren’t seeing the full benefit of innovative new products and services such as cloud #gaming and web #apps.

[2/5]

Our provisional findings also suggest:

💷 customers could be facing higher prices than they would in a more competitive market.

[3/5]

The report demonstrates that the CMA understands the anti-competitive browser and browser-engine landscape too. Its findings are no less direct than the summary:

Impact of the WebKit restriction

As a result of the WebKit restriction, there is no competition in browser engines on iOS and Apple effectively dictates the features that browsers on iOS can offer[.]

The CMA has outlined its next steps and is requesting comment until February 7th, 2022.

Apple, in particular, has demonstrated that it is a bad actor with regards to competition law. This post could easily contain nothing but a rundown of fruity skulduggery; that's how brazen Cupertino's anti-competitive practices have become. Suffice to say, Apple sees being fined 5M EUR per week over entirely reasonable requests as a "cost of doing business." Big respect-for-the-rule-of-law vibes.

But this sort of thing isn't going to last. Regulators don't like being taken for a ride.

...Meanwhile in Washington

On this side of the pond, things are getting serious. In just the past two weeks:

We're even getting gauzy coverage of pro-regulatory senators. It's quite the moment, and indicates dawning awareness of these blatantly anti-competitive practices.

This Is About More Than Browsers

It's tempting to think of browser choice and app store regulation as wholly separate concerns, but neither the web nor apps exist in a vacuum. As the desktop web becomes ever-more powerful on every OS, even the most sophisticated app developers gain more choice in how they reach users.

Unleashing true choice and competition in mobile browsers won't only help web developers and users, it will level the playing field more broadly. Native app developers that feel trapped in abusive policy regimes will suddenly have real alternatives. This, in turn, will put pricing pressure on app store owners that extract egregious rents today.

Web apps and PWAs compete with app stores for distribution, lowering the price to develop and deliver competitive experiences. This allows a larger pool of developers and businesses to "play".

App store "review" and its capricious policy interpretations have always been tragicomic, but true competition is needed to drive the point home. Businesses are forced into the app store, requiring they spend huge amounts to re-build features multiple times. Users risk unsafe native app platforms when the much-safer web could easily handle many day-to-day tasks. We're only stuck in this morass because it helps Google and Apple build proprietary moats that raise switching costs and allow them to extort rents from indie developers and hapless users.

A better future for mobile computing is possible when the web is unshackled, and that will only happen when competition has teeth.

What You Can Do

This is the last week to lodge comment by email with the UK's CMA regarding the findings of its interim report. Anyone who does business in the UK and cares about mobile browser choice should send comments, both as an individual and through corporate counsel.

For US residents, the speed at which legislation on this front is moving through Congress suggests that this is the moment for a well-timed email or, better yet, a call to your senators.

If you live or do business in the US or the UK, this week matters.

Whichever geography you submit comment to, please note that regulators and legislators have specific remits and will care more or less depending on the salience of your input to their particular goals. To maximize your impact, consider including the following points in your comments:

  • Your residence and business location within the district they serve (if appropriate)
  • How a lack of choice, including missing features and an endless parade of showstopping bugs, has hurt your business or forced you to accept unfair app store terms
  • Support of specific provisions in their proposals, particularly regarding true browser choice
  • The number of employees in your firm or the amount of business done annually in their geography
  • The extent to which you service export markets with your technology
  • The specific ways in which unfair App Store, Play Store, or browser choice policies have negatively impacted your business (think lost revenue, increased costs, bugs, etc.)
  • Your particular preferences regarding competition both on the web (e.g., the availability of alternative browser engines on iOS) and between the web and native (e.g., the inability to offer a lower-cost, higher service web experience vs. being forced into app stores)
  • Specific missing features and issues that cause you ongoing business harm
  • If you are contacting a politician or their office, your willingness to vote on the issue

Leaving your contact information for follow-up and verification never hurts either.

It's been 761 weeks since Apple began the destruction of mobile browser choice, knowingly cloaking its preference for a "native" device experience in web-friendly garb at the expense of the mobile web. This is the week you can do something about it.

Carpe diem.


  1. While the tech press may have been asleep at the wheel, Bruce Lawson covered the report's release. Read his post for a better sense of the content without needing to wade through 600+ pages. ↩︎