Reading List

The most recent articles from a list of feeds I subscribe to.

How Do Committees Fail To Invent?

Mel Conway's seminal paper "How Do Committees Invent?" (PDF) is commonly paraphrased as Conway's Law:

Organizations which design systems are (broadly) constrained to produce designs which are copies of the communication structures of these organizations.

This is deep organisational insight that engineering leaders ignore at their peril, and everyone who delivers code for a living benefits from a (re)read of "The Mythical Man-Month", available at fine retailers everywhere.

In most contexts, organisations invoking it are nominally engaged in solving a defined problem, and everyone participating is working towards a solution. But what if there are defectors? And what if they can prevent all forward progress without paying any price? This problem is rarely analysed for the simple reason that such an organisation would be deranged.

But what if that happened regularly?

I was reminded of the possibility while chatting with a colleague joining a new (to them) Working Group at the W3C. The most cursed expressions of Conway's Law regularly occur in Standards Development Organisations (SDOs); specifically, when delegates refuse to communicate their true intentions, often through silence.

This special case is the fifth column problem.

Expressed in Conwayist terms, the fifth column problem describes the way organisations mirror the miscommunication patterns of their participants when they fail to deliver designs of any sort. This happens most often when the goal of the majority is antithetical to a small minority who have a veto.

Reticence of certain SDO participants to consider important problems is endemic because of open membership. Unlike corporate environments where alignment is at least partially top-down, the ability of any firm to join an SDO practically invites subterfuge. This plays out through representatives of pivotal firms obfuscating their willingness to implement, or endlessly delaying consideration of important designs.

Open membership alone would not lead to crisis, except that certain working groups adopt rules or norms that create an explosion of veto players.

These veto-happy environments combine with bad information to transmute opaque communication into arterial blockage. SDO torpidity, in turn, colours the perception of web developers against standards. And who's to say they're wrong? Bemoaning phlegmatic working groups is only so cathartic. Eventually, code has to ship, and if important features are missing from standards-based platforms, proprietary alternatives become inevitable.

Instances of the fifth column problem are frequently raised to me by engineers working in these groups. "How," they ask, "can large gatherings of engineers meet regularly, accomplish next-to-nothing, yet continue to pat themselves on the back?"

The sad answer is "quite easily."

This dynamic has recurred with surprising regularity over the web's history,1 and preventing it from clogging the works is critical to the good order of the entire system.

The primary mechanism that produces consequential fifth column episodes is internal competition within tech giants.

Because the web is a platform, and because platforms are competitions, it's natural that companies which produce both browsers and proprietary platforms favour their owned-and-operated offerings.

The rent extraction opportunities from controlling OS APIs are much more lucrative than an open ecosystem's direct competition. Decisions about closed platforms also do not have to consider other players as potential collaborators. Strategically speaking, managing a proprietary platform is playing on easy mode. When that isn't possible, the web can provide a powerful bridge to the temporarily embarrassed monopolist, but that is never their preference.

This competition is often hard to see because the jousting between proprietary systems and open platforms takes place within the most heavily guarded processes of tech giants: budget planning.

Because budgets are secret, news of the web losing status is also a secret. It's a firing offence to leak budget details, or even to share them internally with those who lack a need to know, so line engineers may never be explicitly told that their mission has fundamentally changed. They can experience the web's demotion from "Plan A" to "Plan Z" without their day-to-day changing much. The team just stops growing, and different types of features get priority, but code is still being written at roughly the same rate.

OS vendors that walk away from the web after it has served their businesses can simply freeze browser teams, trimming ambitions around the edges without even telling the team that their status has been lowered. Capping and diverting funding away from new capabilities is easily explained; iteration on existing, non-threatening features is always important, after all, and there are always benchmarks to optimise for instead.

Until recently, the impact of fifth column tactics was obscured in a haze of legacy engines. Web developers take universality seriously, so improvements in capabilities have historically been experienced as the rate at which legacy UAs fall below a nominal relevance threshold.2 Thanks to pervasive auto-updates, the historical problem of dead-code-walking has largely been resolved. Major versions of important browsers may have entire cradle-to-grave lifecycles on the order of two or three months, rather than IE 6's half-decade half-life.

This has clarified the impact of (dis)investments by certain vendors and makes fifth columnists easier to spot. Web developers now benefit from, or are held back by, recent decisions by the companies (under)funding browser development. If sites remain unable to use features launched in leading-edge engines, it's only because of deficiencies in recent versions of competing engines. This is a much easier gap to close — in theory, anyway.

Web standards are voluntary, and the legal structures that establish SDO safe-harbours (PDF) create the space in, and rules under, which SDOs must operate. SDOs may find their designs written into legislation after the fact, or used as implementation guides, but within the web community, particularly among implementers, there is a strong aversion to being told what to design by governments. Some new participants in standards arrive with the expectation that the de jure nature of a formal standard creates a requirement for implementation, but nothing could be further from fact. This sometimes leads to great frustration; enshrining a design in a ratified standard does not obligate anyone to do anything, and many volumes of pure specifiction have been issued over the decades to little effect.

The voluntary nature of web standards is based on the autonomy of browsers, servers, and web developers to implement whatever they please under their own brand.

Until Apple's flagrantly anticompetitive iOS policies came along, this right was viewed as inviolable because, when it is compromised, erosions of feature sovereignty undermine the entire premise of SDOs.

When products lose the ability to differentiate on features, quality, performance, safety, and standards conformance, the market logic underpinning voluntary standards becomes a dead letter. There's no reason to propose an improvement to the collective body of the web when another party can prevent you from winning share by supporting that feature.

The harms of implementation compellence are proportional to market influence. iOS's monopoly on the pockets of the wealthy (read: influential) has decisively undermined the logic of the open internet and the browser market. Not coincidentally, this has also desolated the prospect of a thriving mobile web.

The Mobile Web: MIA

It's no exaggeration to say that it is anti-web to constrain which standards vendors can implement within their browsers, and implementation coercion is antithetical to the good functioning of SDOs and the broader web ecosystem.3

In a perverse way, Apple's policy predations, strategic under-investment, and abusive incompetence clarify the basic terms of the web's power structure: SDOs are downstream of browser competition, and browser competition depends on open operating systems.

Why do vendors spend time and money to work in standards, only to give away their IP in the form of patent licences covering ratified documents?

When vendors enjoy product autonomy, they develop standards to increase interoperability at the behest of customers who dislike lock-in. Standards also lower vendors' legal risk through joint licensing, and increase the marketability of their products. Behind this nondescript summary often lies a furious battle for market share between bitter competitors, and standards development is famous for playing a subtle role. Shapiro and Varian's classic paper "The Art of Standards Wars" (PDF) is a quarter-century old now, but its description of these sorts of battles is no less relevant today than it was in '99.

Like this classic, many discussions of standards battles highlight two parties with differing visions — should box-sizing default to content-box or border-box? — rather than situations where one vendor has a proprietary agenda. These cases are under-discussed, in part, because they're hard to perceive in short time windows or by looking at only one standard or working group.

Parties that maintain a footprint in standards, but are unhappy to see standards-based platforms compete with their proprietary offerings, only need a few pages from the Simple Sabotage manual (PDF).

Often, they will send delegates to important groups and take visible leadership positions. Combined with juuuuuust enough constructive technical engagement, obstreperous parties scarcely have to say anything about their intent. Others will get their hopes up, and cite (tepid) participation as evidence of good faith. The fifth columnist doesn't need to lift a finger, which is handy, as doing nothing is the goal.4

Working group composition also favours delay at the hands of fifth columnists. Encouraged by defectors, groups regularly divert focus from important problems, instead spending huge amounts of time on trivialities, because few customers (web developers) have the time, money, and energy to represent their own needs in committee. Without those voices, it's hard to keep things on track.

Worse, web developers generally lack an understanding of browser implementation details and don't intone the linguistic airs and shorthands of committeespeak, which vary from venue to venue. This hampers their ability to be taken seriously if they do attend. At the limit, dismissing pressing problems on technicalities can become something of a committee pastime.5

There's also a great deal of opportunity for fifth columnists to misrepresent the clearly stated needs of web developers, launching projects to solve adjacent (but unimportant) sub-issues, while failing to address the issues of the day. This is particularly problematic in big, old rooms.

A competent fifth columnist only needs to signal in-group membership to amplify the impact of their own disinterest in topics they would prefer to avoid. Ambiguous "concerns" and scary sounding caveats are raised, and oven-ready designs which do arrive are reframed as naive proposals by outsiders. Process-level critique, in lieu of discussing substance, is the final line of defence.

Deflecting important work is shockingly easy to pull off because organisations that wish to defeat progress can send delegates who can speak to rooms full of C++/Rust engineers as equals. The tripwires in web API design are not obvious to the uninitiated, so it's easy to move entire areas of design off the agenda through critique of small, incongruent details.

The most depressing thing about this pattern is that these tactics work because other vendors allow it.

One problem facing new areas in standards is that chartered Working Groups are just that: chartered. They have to define what they will deliver years in advance, and anything not on the agenda is, by definition, not in scope. The window in many SDO processes for putting something new into the hopper is both short and biased towards small revisions of existing features. Spinning up new Working Groups is a huge effort that requires political clout.

Technically, this is a feature of SDOs; they jointly licence the IP of members to reduce risks to implementers and customers of adopting standards-based products. Patent trolls have to consider the mutual defences of the whole group's membership and cannot easily pick off smaller prey. Given that most customers (web developers) never give a second thought to patent portfolios, and have never been personally sued for infringement — a sign web standards are a smashing success — they are unlikely to understand that the process is designed with bounded IP commitments in mind.

This creates a tension: over the long term, SDOs and the ecosystems they serve can only succeed if they take on new problems adjacent to the current set, but the Working Groups they create are primed by their composition and history to avoid taking on substantial expansions of their scope. After all, a good v2 of a spec is one that fixes all the problems of v1 and introduces relatively few new ones.

To work around this restriction, functional SDOs create incubation venues. These take different guises, but the core features are the same. Unlike chartered Working Groups, incubation groups are simple to create; no charter votes or large, up-front IP commitments. They also feature low bars to participation, can be easily shut down, and do not produce documents for formal standardization, although they can produce "Notes" or other specification documents that Working Groups can take up.

Instead, they tend to have substantial contributor-only grants of IP, ad-hoc meeting and internal debate mechanisms, and attract only those interested in working on solutions in a new problem space. In functioning SDOs, such "fail-fast" groups naturally become feeders for chartered Working Groups, iterating on problems and solutions at a rate which is not possible under the plodding bureaucracy of a chartered Working Group's minuted and agenda-driven meeting cadence.

And that's why these sorts of groups are a first priority for sabotage by fifth columnists. The usual tactics deployed to subvert incubation include:

  • Casting aspersions of bad faith on those working in incubation venues, either on the grounds that the groups are "amateur", "not standards-track",6 or "do not have the right people."
  • Avoiding engagement in incubation groups, robbing them of timely feedback while creating the self-fulfilling "lack of expertise" critique.
  • Citing the high fraction of designs that fail in these groups as an indicator that they are not useful, rather than acknowledging that the entire point of incubation is to fail fast and iterate furiously.
  • Accusing those who implement incubated proposals of "not following the process", "ignoring standards", or "shipping whatever they want"; twisting the goals of those doing design in the open, in good faith, under the SDO's IP umbrella.
  • Demanding formalities akin to chartered Working Groups to slow the pace of design progress in incubation venues that are too successful to ignore.

The fifth columnist also works behind the scenes to reduce the standing and reputation of incubation groups among the SDO's grandees, claiming that they represent a threat to the stability of the overall organisation. Because that constituency is largely divorced from the sausage-making, this sort of treachery works more often than it should, causing those who want to solve problems to burn time defending the existence of venues where real progress is being made.

The picture presented thus far is of Working Groups meeting in The Upside Down. After all, it's only web developers who can provide a real test of a design, or even the legitimacy of a problem.

This problem becomes endemic in many groups, and entire SDOs can become captured by the internal dramas and preferences of implementers, effectively locking customers out. Without more dynamic, fail-fast forums that enable coalitions of the willing to design and ship around obstructionists, working groups can lay exclusive claim to important technologies and retreat into irrelevance without paying a reputational cost.

The alternative — hard forking specifications — is a nuclear option. The fallout can easily blow back into the camp of those launching a fork, and the effort involved is stupendous. Given the limited near-term upside and unclear results, few are brave or foolish enough to consider forking to route around a single intransigent party.

This feeds the eclipse of an SDO's relevance because legitimacy deficits become toxic to the host only slowly. Risk of obsolescence can creep unnoticed until it's too late. As long as the ancient forms and traditions are followed, a decade or more can pass before the fecklessness of an important group rises to the notice of anyone with the power to provoke change. All the while, external observers will wonder why they must resort to increasingly tall piles of workarounds and transpilers. Some may even come to view deadening stasis, incompatibility, and waste as the natural state of affairs, declining to invest any further hope for change in the work of SDOs. At this point, the fifth columnist has won.

One of the self-renewing arrows in the fifth column's arsenal is the tendency of large and old working groups to indulge in survivorship bias.

Logically, there's no reason why folks whose designs won a lottery in the last round of market jousting7 should be gatekeepers regarding the next tranche of features. Having proposed winning designs in the past is not, in itself, a reliable credential. And yet, many of these folks become embedded within working groups, sometimes for decades, holding sway by dint of years of service and interpersonal capital. Experience can be helpful, but only when it is directed to constructive engagement, and too many group chairs allow bad behavior, verging on incivility, at the hands of la vieille garde. This, of course, actively discourages new and important work, and instead clears the ground for yet more navel-gazing.

This sort of in-group/out-group formation is natural in some sense, and even folks who have loathed each other from across a succession of identically drab conference rooms for years can find a sort of camaraderie in it. But the social lives of habitual TPAC and TC39 attendees are no reason to accept unproductive monopolies on progress; particularly when the folks in those rooms become unwitting dupes of fifth columnists, defending the honour of the group against those assailing it for not doing enough.

The dysfunctions of this dynamic mirror those of lightly moderated email lists: small rooms of people all trying to solve the same problem can be incredibly productive, no matter how open. Large rooms with high alignment of aims can make progress if leadership is evident (e.g., strong moderation). What is reliably toxic are large, open rooms with a mix of "old timers" who are never moderated and "newbies" who have no social standing. Without either a clear destination or any effective means of making decisions, these sorts of venues become vitriolic over even the slightest things. As applied to working groups, without incredibly strong chairing, the interpersonal dynamics of long-standing groups can make a mockery of the responsibilities resting on the shoulders of their charters. But it's unusual for anyone on the outside to be any the wiser. Who has time to decipher meeting minutes or decode in-group shorthands?

And so it is precisely because fifth columnists can hire old timers8 that they are able to pivot groups away from addressing pressing concerns to the majority of the ecosystem, particularly in the absence of functional incubation venues challenging sclerotic groups to move faster.

One useful lens for discussing the fifth column problem is the now-common Political Science analysis of systems through "veto points" or "veto players" (Tsebelis, '95; PDF):

Policy stability is different from both government stability and regime stability. In fact ... they are inversely related: policy stability causes government or regime instability. This analysis is based on the concept of the veto player in different institutional settings.

If we substitute "capability stability" for "policy stability" and "platform relevance" for "government/regime stability," the situation becomes clear:

Capability stability is different from platform relevance. In fact ... they are inversely related: capability stability causes platform irrelevance.

Any platform that cannot grow to incorporate new capabilities, or change to address pressing problems, eventually suffers irrelevance, then collapse. And the availability of veto points to players entirely hostile to the success of the platform is, therefore, an existential risk9 — both to the platform and to the SDOs that standardise it, not to mention the careers of developers invested in the platform.

This isn't hard to understand, so how does the enterprising fifth columnist cover their tracks? By claiming that they are not opposed to proposals, but that they "need work," without offering to do that work, developing counter-proposals, or making any commitment to ship any version of the proposals in their own products. This works too often because a pattern of practice must develop before participants can see that blockage is not a one-off rant by a passionate engineer.

Participants are given wide berths, both because of the presumption of voluntary implementation,10 and because of the social attitudes of standards-inclined developers. Most SDO participants are community-minded and collaboration-oriented. It's troubling to imagine that someone would show up to such a gathering without an intent to work to solve problems, as that would amount to bad faith. But it has recurred frequently enough that we must accept it does happen. Having accepted its possibility, we must learn to spot the signs, remain on guard, and call it out as evidence accumulates.

The meta-goal is to ensure no action for developers, with delay to the point of irrelevance as a fallback position, so it is essential to the veto wielder that this delay be viewed as desirable in some other dimension. If this sounds similar to the "but neighbourhood character!" arguments NIMBYs offer, that's no accident. Without a valid argument to forestall efforts to solve pressing problems, the fifth column must appeal to latent, second-order values that are generally accepted by the assembled to pre-empt the first-order concern. This works a shocking fraction of the time.

It works all the better in committees with a strong internal identity. It's much easier to claim that external developers demanding solutions "just don't get it" when members of the group already view themselves as self-styled bulwarks against bad ideas.

The final, most pernicious, building block of Working Group decay is the introduction of easy vetoes, most often via consensus decision-making. When vetoes are available to anyone in a large group at many points, the set of proposals that can be offered without a presumption of failure shrinks to a tiny set, in line with the findings of Tsebelis.

This is not a new problem in standards development, but the language gets muddy, so for consistency's sake we'll define two versions:

  • Strong Consensus refers to working modes in which the assent of every participant is affirmatively required to move proposals forward.
  • Weak Consensus refers to modes in which preferences are polled, but "the sense of the room" can carry a proposal forward over even strenuous objections by small minorities.

Every long-term functional SDO operates by some version of Weak Consensus. The IETF bandies this about so often that the phrase "rough consensus and running code" is synonymous with the organisation.

But not every group within these SDOs is chaired by folks willing to overrule objectors. In these situations, groups can revert to de facto strong consensus, which greatly multiplies the number of veto holders. Variations on this theme can be even less disciplined, with only an old guard having effective veto power, whilst newer participants may be more easily overruled.

Strong consensus is the camel's nose for long-term gridlock. Like unmoderated mailing lists, things can spiral without anyone quite knowing where the error was made. Small groups can start under strong consensus out of a sense of mutual respect, only to find it is nearly impossible to revoke a veto power once handed out. A sense of fair play may cause this right to be extended to each new participant, and as groups grow, affiliations change, and interests naturally diverge, it may belatedly dawn on those interested in progress that the very rooms where they once had so much luck driving things forward have become utterly dysfunctional. And under identical rules!

Having found the group no longer functions, delegates who have invested large portions of their careers in these spaces have a choice: they can acknowledge that it is not working and demand change, becoming incredibly unpopular amongst their closest peers in the process. Or they can keep their heads down and hope for the best, defending the honour of the group against attacks by "outsiders". Don't they know who these people are?

Once they set in, strong consensus modes are devilish to unpick, often requiring a changing of the guard, both among group chairs and influential veto-wielders. Groups can lose internal cohesion and technical expertise in the process, heaping up disincentives to rock even the most unproductive boats.

The ways that web ecosystem SDOs and their participants can guard against embrittlement and fracture from the leeching effects of fifth columns are straightforward, if difficult to pull off socially:

  • Seek out and remove strong consensus processes.

    The timeless wisdom of weak consensus is generally prescribed by the process documents governing SDOs, so the usual challenge is enforcement. The difficulty in shaking strong consensus practices is frequently compounded by the status of influential individuals from important working groups who prefer it. Regardless, the consequences of allowing strong consensus to fester in rooms big enough to justify chairing are dire, and it must be eliminated root and branch.

  • Aggressively encourage "counterproposal or GTFO" culture.

    Fifth columnists thrive on creating ambiguity about the prospects of meaningful proposals while paying no cost for "just asking questions." This should be actively discouraged, particularly among implementers, within the social compact of web SDOs. Imposing delay must cost more than merely airing vague "concerns".

  • Require Working Groups to list incubators they accept proposals from. Require they prove it.

    Many groups that fifth columnists exploit demonstrate a relative imperviousness to new ideas through a combination of social norms and studious ignorance. To break this pattern, SDOs should require that all re-charters include clear evidence of proposals coming from outside the group itself. Without such collateral, charters should be at risk.

  • Defend incubators from process attacks.

    Far from being sideshows, incubation venues are the lifeblood of vibrant SDOs. They must be encouraged, nurtured, and highlighted to the membership as essential to the success of the ecosystem and the organisation.

    In the same vein, process shenanigans to destabilise successful incubators must be fended off; including but not limited to making them harder to join or create, efforts to deny their work products a seat in working group chartering, or tactics that make their internal operations more veto-centric.

It takes a long time, but like the gravitational effects of a wandering planet out in the Oort cloud, the informational content of the fifth columnist's agenda eventually becomes legible by side effect. Because an anti-web agenda is easy to pass off under other cover, it requires a great number of observations to understand that this part of the committee does not want the platform to evolve. From that point forward, it becomes easier to understand the information being communicated as noise, rather than signal.

Once demonstrated, we must route around the damage, raising the cost of efforts to undermine the single most successful standards-based ecosystem of our lifetimes; one that I believe is worth defending from insider threats as well as external attack.


  1. The most substantial periods of institutional decrepitude in web standards are highly correlated with veto players (vendors with more than ~10% total share) walking away from efforts to push the web forward.

    The most famous period of SDO decay is probably the W3C's troubled period after Microsoft disbanded the IE team after IE 6.0's triumphant release in 2001. Even if folks from Microsoft continued to go to meetings, there was nobody left to implement new or different designs and no product to launch them in.

    Standards debate went from pitched battles over essential features of systems being actively developed to creative writing contests about futures it might be nice to have. Without the disciplining function of vendors shipping, working groups just become expensive and drab pantomimes.

    With Microsoft circa 2002 casting the IE team to the wind and pivoting hard to XAML and proprietary, Windows-centric technologies, along with the collapse of Netscape, the W3C was left rudderless, allowing it to drift into failed XHTML escapades that inspired revulsion among the remaining staffed engine projects.

    This all came to a head over proposed future directions at 2004's Web Applications and Compound Document Workshop. WHATWG was founded in the explosion's crater, and the rest is (contested) history.

    The seeds of the next failure epoch were planted at the launch of the iOS App Store in 2008, when it first became clear that other "browsers" would be allowed on Cupertino's best-selling devices, but not if they included their own engines. Unlike the big bang of Microsoft walking away from browsers for 3+ years, Apple's undermining of the W3C, IETF, and ECMA became visible only gradually as mobile devices' share of the total global market accelerated. Apple also "lost" its early lead in smartphone market share as Android ate up the low end's explosive growth. The result was a two-track mobile universe, where Apple retained nearly all influence and profits, whilst most new smartphone users encountered the predations of Samsung, HTC, LG, Xiaomi, and a hundred other cut-price brands.

    Apple's internal debates about which platform for iOS was going to "win" may have been unsettled at the launch of the App Store12, but shortly thereafter the fate of Safari and the web on iOS was sealed when Push Notifications appeared for native apps but not web apps.

    Cupertino leveraged its monopoly on influence to destroy the web's chances, while Mozilla, Google, and others who should have spoken up remained silent. Whether that cowardice was borne of fear, hope, or ignorance hardly matters now. The price of silence is now plain, and the web so weakened that it may succumb entirely to the next threat; after all, it has no champions among the megacorps that have built their businesses on its back.

    First among equals, Apple remains at the vanguard of efforts to suppress the web, spending vast sums to mislead web developers, regulators, legislators, and civil society. That last group uncomfortably includes SDOs, and it's horrifying to see the gaslighting plan work while, in parallel, Cupertino sues for delay and offers easily disproven nonsense in rooms where knowing misrepresentation should carry sanction.

    All this to preclude a competent version of the web on iPhones, either from Apple or (horrors!) from anyone else. Query why.

  2. The market share at which any browser obtains "blocking share" is not well theorized, but is demonstrably below 5% for previously dominant players, and perhaps higher for browsers or brands that never achieved market plurality status.

    Browsers and engines which never gain share above about 10% are not considered "relevant" by most developers and can be born, live, and die entirely out of view of the mainstream. For other players, particularly across form-factors, the salience of any specific engine is more contextual. Contractual terms, tooling support, and even the personal experience of influential developers all play a role. This situation is not helped by major sites and CDNs — with the partial exception of Cloudflare — declining to share statistics on the mix of browsers their services see.

    Regardless, web-wide market share below 2% for any specific version of any engine is generally accepted as irrelevance; the point at which developers no longer put in even minimal effort to continue to support a browser except with "fallback" experiences.

  3. It's not an exaggeration to suggest that the W3C, IETF, and ECMA have been fundamentally undermined by Apple's coercion regarding browser engines on iOS, turning these organisations into a sort of Potemkin village, with semi-independent burgs taking shape on the outskirts through Community Groups like the WICG, which Apple regularly tries to tear down through procedural attacks it hopes the wider community will not trace back to the source.

    When competitors cannot ship their best ideas, the venues where voluntary standards are codified lose both their role as patent-pooling accelerators for adoption, as well as their techno-social role as mediators and neutral ground.

    The corporeal form continues long after the ghost leaves the body, but once the vivifying force of feature autonomy is removed, an SDO's roof only serves to collect skeletons, eventually compromising the organisation itself. On these grounds, self-aware versions of the W3C, IETF, and ECMA would have long ago ejected Apple from membership, but self-awareness is not their strong suit. And as long as the meetings continue and new drafts are published, it hardly deserves mention that the SDO's role in facilitating truly disruptive change will never again be roused. After all, the membership documents companies sign do not require them to refrain from shivving their competition; only that everyone keep their voices down and the tone civil.

    What's truly sad is how few of those convening services or reading the liturgy from the pews seem disturbed that their prayers can never again be heard.

  4. This is about the point where folks will come crawling out of the walls to tell stories about IBM or Rambus or Oracle or any of the codec sharks that have played the heel in standards at one point or another. Don't bother; I've got a pretty full file of those stories, and I can't repeat them here anyway. But if you do manage to blog one of them in an entertaining way without getting sued, please drop a line.

  5. You know, in case you're wondering what the CSS WG was doing from '04-'21. I wonder what changed?

  6. It's particularly disingenuous for fifth columnists to claim proposals they don't like are "not standards track" as they know full-well that the reason they aren't being advanced within chartered working groups is their own opposition.

    The circularity is gobsmacking, but works often enough to reduce pressure on the fifth columnist by credulous web developers that it gets trotted out with regularity. Sadly, this is only possible because other vendors fail to call bullshit. Apple, e.g., would not be getting away with snuffing out the mobile web's chances were it not for a cozy set of fellow travellers at Mozilla and Google.

  7. Most Web API design processes cannot claim any kinship to the scientific method, although we have tried mightily to open a larger space for testing of alternative hypotheses within the Chromium project over the past decade.

    Even so, much of the design work of APIs on the web platform is shaped by the specific and peculiar preferences of powerful individuals, many of whom are not and have never been working web developers.

  8. Hiring "known quantities" to do the wrangling within a Working Group you want to scupper is generally cheaper than doing constructive design work, so putting in-group old-timers on the payroll is a reliable way for fifth columnists to appear aligned with the goals of the majority while working against them in practice.

  9. One rhetorical mode for those working to constrain the web platform's capabilities is to attempt to conflate any additions with instability, and specifically, the threat that sites that work today will stop working tomorrow. This is misdirection, as stability for the ecosystem is not a function of standards debates, but rather the self-interested actions of each vendor in the market.

    When true browser competition is allowed, the largest disciplining force on vendor behaviour is incompatibility. Browsers that fail to load important web pages lose share to those that have better web compatibility. This is as close as you can get to an iron law of browser engineering, and every vendor knows that their own engine teams have spent gargantuan amounts of time and money to increase compatibility over the years.

    Put more succinctly, backwards compatibility on the web is not seriously at risk from capability expansions. Proposals that would imperil back compat11 are viewed as non-starters in all web standards venues, and major schisms have formed over proposed, incompatible divergence, with the compatibility-minded winning nearly every skirmish.

    No SDO representative from these teams is ignorant of these facts, and so attempts to argue against solving important problems by invoking the spectre of "too much change, too fast" or "breaking the web" are sleights of hand.

    They know that most web developers value stability and don't understand these background facts, creating space for a shell game in which the threat of too much change serves to obscure their own attempts at sabotage through inaction. Because web standards are voluntary and market share matters tremendously to every vendor, nothing that actually breaks the web will be allowed to ship. So armed, you can now call out this bait-and-switch wherever it appears. Doing so is important, as the power to muddy these waters stems from the relative ignorance of web developers. Educating them about the real power dynamics at work is our best bulwark against the fifth column.

  10. There's no surer sign of the blindness many SDO participants exhibit toward the breakage of the voluntary implementation regime than that they extend deference on that basis to Apple.

    Fruit Co. engineers engaging in SDOs do not bear a heightened presumption to share when they will implement if others lead. To the contrary, they have so thoroughly lowered expectations that nobody expects even timely feedback on proposals, let alone a commitment to parity. Cupertino sullying the brands it forces to carry WebKit's worst-of-the-worst implementation is simply accepted as the status quo.

    This is entirely backwards, and Apple's representatives should, instead, be expected to provide implementation timelines for features shipped by other vendors. God knows they can afford it. Until such time as Fruit Co. relents and allows true global engine competition, it's the only expectation that is fair, and every Apple employee should feel the heat of shame every time they have to mutter "Apple does not comment on future product releases" while unjustly perpetuating a system that harms the entire web.

  11. The browser and web infrastructure community have implemented large transitions away from some regretted technologies, but the care and discipline needed to do this without breaking the web is the stuff of legend. Big ones that others should write long histories of are the sunsetting of AppCache and the move away from unencrypted network connections.

    Both played out on the order of half a decade or more, took dozens of stakeholders to pull off, and were games of inches adding up to miles. New tools like the Reporting API, Deprecation Reports, and Reverse Origin Trials had to be invented to augment the "usual" tool bag of anonymised analytics trawls, developer outreach, new limits on unwanted behaviour, and nudging UI.

    In both cases (among many more small deprecations we have done over the years), the care taken ensured that only a small fraction of the ecosystem was impacted at any moment, lowering the temperature and allowing for an orderly transition to better technology.

  12. Your correspondent has heard different stories from folks who had reason to know about the period from '08-'10 when Apple pulled its foot off the gas with Safari.

    Given the extreme compartmentalisation of Apple's teams, the strategic import of any decision, and the usual opacity of tech firms around funding levels ("headcount") to even relatively senior managers, this is both frustrating and expected.

    The earliest dating puts the death of the web as "Plan A" before Steve Jobs's announcement of the iPhone at Macworld in January 2007. The evidence offered for this view was that a bake-off for system apps and the home screen launcher had already been lost by WebKit. Others suggest it wasn't until the success of the App Store in '09 and '10 that Apple began to pull away from the web as a top-tier platform. Either way, it was all over by early 2011 at the very latest.

    WebKit would never again be asked to compete as a primary mobile app platform, and skeletal funding for Safari ensured it would never be allowed to break out of the OS's strategic straitjacket the way IE 4-6 had.

Links? Links!

Frances has urged me for years to collect resources for folks getting into performance and platform-oriented web development. The effort has always seemed daunting, but the lack of such a list came up again at work, prompting me to take on the side-quest amidst a different performance yak-shave. If that sounds like procrastination, well, you might very well think that. I couldn't possibly comment.

The result is a new links and resources page which you can find over in the navigation rail. It's part list-of-things-I-keep-sending-people, part background reading, and part blogroll.

The blogroll section also prompted me to create an OPML export, which you can download or send directly to your feed reader of choice.

The page now contains more than 250 pointers to people and work that I view as important to a culture that is intentional about building a web worth wanting. Hopefully maintenance won't be onerous from here on in. The process of tracking down links to blogs and feeds is a slog, no matter how good the tooling. Very often, this involved heading to people's sites and reading the view-source://.
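Much of that slog can be scripted. Here's a minimal sketch in TypeScript, assuming a browser-like environment (fetch and DOMParser) and sites that advertise their feeds with link rel="alternate" tags, which not all do:

    // Sketch: find the RSS/Atom feeds a page advertises in its <head>.
    // Cross-origin fetches need CORS cooperation or a server-side runner.
    async function discoverFeeds(siteUrl: string): Promise<string[]> {
      const html = await (await fetch(siteUrl)).text();
      const doc = new DOMParser().parseFromString(html, "text/html");
      const links = doc.querySelectorAll(
        'link[rel="alternate"][type="application/rss+xml"], ' +
        'link[rel="alternate"][type="application/atom+xml"]'
      );
      // Resolve relative hrefs against the page URL.
      return [...links].map((link) =>
        new URL(link.getAttribute("href") ?? "", siteUrl).href
      );
    }

Plenty of sites skip those link tags entirely, though, which is where the manual view-source:// spelunking comes in.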

Having done this dozens of times on the sites of brilliant and talented web developers in a short period of time, a few things stood out.

First — and I cannot emphasise this enough — holy cow.

The things creative folks can do today with CSS, HTML, and SVG in good browsers are astonishing. If you want to be inspired about what's possible without dragging bloated legacy frameworks along, the work of Ana Tudor, Jhey, Julia Miocene, Bramus, Adam, and so many others can't help but raise your spirits. The CodePen community, in particular, is incredible, and I could (and do) spend hours just clicking through and dissecting clever uses of the platform from the site's "best of" links.

Second, 11ty and Astro have won the hearts of frontend's best minds.

It's not universal, but the overwhelming bulk of personal pages by the most talented frontenders are now built with SSGs that put them in total control. React, Next, and even Nuxt are absent from pages of the folks who really know what they're doing. This ought to be a strong signal to hiring managers looking to cut through the noise.

Next, when did RSS/Atom previews get so dang beautiful?

The art and effort put into XSLT styling like Elly Loel's is gobsmacking. I am verklempt that not only does my feed not look that good, but my site doesn't look that polished either.

Last, whimsy isn't dead.

Webrings, guestbooks, ASCII art in comments, and every other fun and silly flourish are out there, going strong, just below the surface of the JavaScript-Industrial Complex's thinkfluencer hype recycling.

And it's wonderful.

My overwhelming feeling after composing this collection is gratitude. So many wonderful people are doing great things, based on values that put users first. Sitting with their work gives me hope, and I hope their inspiration can spark something similar for you.

Conferences, Clarity, and Smokescreens

Before saying anything else, I'd like to thank the organisers of JSNation for inviting me to speak in Amsterdam. I particularly appreciate the folks who were brave enough to disagree at the Q&A sessions afterwards. Engaged debate about problems we can see and evidence we can measure makes our work better.

The conference venue was lovely, and speakers were more than well looked after. Many of the JSNation talks were of exactly the sort I'd hope to see as our discipline belatedly confronts a lost decade, particularly Jeremias Menichelli's lightning talk. It masterfully outlined how many of the hacks we have become accustomed to are no longer needed, even in the worst contemporary engines. view-source:// on the demo site he made to see what I mean.

Vinicius Dallacqua's talk on LoAF was on-point, and the full JSNation line-up included knowledgeable and wise folks, including Jo, Charlie, Thomas, Barry, Nico, and Eva. There was also a strong set of accessibility talks from presenters I'm less familiar with, but whose topics were timely and went deeper than the surface. They even let me present a spicier topic than I think they might have been initially comfortable with.

All-in-all, JSNation was a lovely time, in good company, with a strong bent toward doing a great job for users. Recommended.

Day 2 — React Summit 2025 — could not have been more different.1 While I was in a parallel framework authors meeting for much of the morning,2 I did attend talks in the afternoon, studied the schedule, and went back through many more after the fact on the stream. Aside from Xuan Huang's talk on Lynx and Luca Mezzalira's talk on architecture, there was little in the program that challenged frameworkist dogma, and much that played to it.

This matters because conferences succeed by foregrounding the hot topics within a community. Agendas are curated to reflect the tides of debate in the zeitgeist, and can be read as a map of the narratives a community's leaders wish to see debated. My day-to-day consulting work, along with high-visibility industry data, shows that the React community is mired in a deep, measurable quality crisis. But attendees of React Summit who didn't already know wouldn't hear about it.

Near as I can tell, the schedule of React Summit mirrors the content of other recent and pending React conferences (1, 2, 3, 4, 5, 6) in that these are not engineering conferences; they are marketing events.

How can we tell the difference? The short answer is also a question: "who are we building for?"

The longer form requires distinguishing between occupations and professions.

In a 1912 commencement address, the great American jurist and antitrust reformer Louis Brandeis hoped that a different occupation — business management — would aspire to service:

The peculiar characteristics of a profession as distinguished from other occupations, I take to be these:

First. A profession is an occupation for which the necessary preliminary training is intellectual in character, involving knowledge and to some extent learning, as distinguished from mere skill.

Second. It is an occupation which is pursued largely for others and not merely for one's self.

Third. It is an occupation in which the amount of financial return is not the accepted measure of success.

In the same talk, Brandeis named engineering a discipline already worthy of professional distinction. Most software development can't share the benefit of the doubt, no matter how often "engineer" appears on CVs and business cards. If React Summit and co. are anything to go by, frontend is mired in the same ethical tar pit that causes Wharton, Kellogg, and Stanford grads to reliably experience midlife crises.3

It may seem slanderous to compare React conference agendas to MBA curricula, but if anything it's letting the lemon vendors off too easily. Conferences crystallise consensus about which problems matter, and React Summit succeeded in projecting a clear perspective — namely that it's time to party like it's 2013.

A patient waking from a decade-long coma would find the themes instantly legible. In no particular order: React is good because it is popular. There is no other way to evaluate framework choice, and it's daft to try because "everyone knows React".4 Investments in React are simultaneously solid long-term choices, but also fragile machines in need of constant maintenance lest they wash away under the annual tax of breaking changes, toolchain instability, and changing solutions to problems React itself introduces. Form validation is not a solved problem, and in our glorious future, the transpilers (sorry, compilers) will save us.

Above all else, the consensus remains that SPAs are unquestionably a good idea, and that React makes sense because you need complex data and state management abstractions to make transitions between app sections seem fluid in an SPA. And if you're worried about the serially terrible performance of React on mobile, don't worry; for the low, low price of capitulating to App Store gatekeepers, React Native has you covered.5

At no point would our theoretical patient risk learning that rephrasing everything in JSX is now optional thanks to React 19 finally unblocking interoperability via Web Components.6 Nor would they become aware that new platform APIs like cross-document View Transitions and the Navigation API invalidate foundational premises of the architectures that React itself is justified on. They wouldn't even learn that React hasn't solved the one problem it was pitched to address.
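To make that concrete, here is a rough sketch of what those platform primitives offer a plain multi-page site with no router library at all. It is illustrative only, assuming same-origin navigations, a main element to swap, and browsers that support the still-unevenly-available Navigation API and cross-document View Transitions:

    /* Cross-document View Transitions are a one-line CSS opt-in, no JS needed:
         @view-transition { navigation: auto; }                              */

    // The Navigation API can intercept same-origin navigations without an SPA
    // router. `navigation` is not yet in TypeScript's DOM lib, hence the cast.
    const nav = (window as any).navigation;
    nav?.addEventListener("navigate", (event: any) => {
      const to = new URL(event.destination.url);
      if (!event.canIntercept || to.origin !== location.origin) return;
      event.intercept({
        async handler() {
          // Fetch the next document and swap the assumed <main> element; the
          // browser keeps history, scroll restoration, and back/forward sane.
          const html = await (await fetch(to)).text();
          const next = new DOMParser().parseFromString(html, "text/html");
          document.querySelector("main")!.innerHTML =
            next.querySelector("main")?.innerHTML ?? "";
        },
      });
    });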

Conspicuously missing from the various "State Of" talks was discussion of the pressing and pervasive UX quality issues that are rampant in the React ecosystem.

Per the 2024 Web Almanac, less than half of sites earn passing grades on mobile, where most users are.

We don't need to get distracted looking inside these results. Treating them as black boxes is enough. And at that level we can see that, in aggregate, JS-centric stacks aren't positively correlated with delivering good user-experiences.

2024's switch from FID to INP caused React (particularly Next and Gatsby) sites which already had low pass-rates to drop more than sites constructed on many other stacks.

This implies that organisations adopting React do not contain the requisite variety needed to manage the new complexity that comes from React ecosystem tools, practices, and community habits. Whatever the source, it is clearly a package deal. The result is systems that are out of control and behave in dynamically unstable ways relative to business goals.
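None of this is hard to measure in the field. A minimal sketch using Google's open-source web-vitals library (v3 or later), reporting to a hypothetical /analytics endpoint:

    // Field measurement of Core Web Vitals; INP replaced FID in March 2024.
    import { onCLS, onINP, onLCP } from "web-vitals";

    function report(metric: { name: string; value: number; rating: string }) {
      // rating is "good", "needs-improvement", or "poor" per CWV thresholds.
      navigator.sendBeacon("/analytics", JSON.stringify(metric));
    }

    onINP(report);
    onLCP(report);
    onCLS(report);

The resulting field data, not lab runs on developer machines, is what CrUX and the Web Almanac pass rates reflect.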

The evidence that React-based stacks frequently fail to deliver good experiences is everywhere. Weren't "fluid user experiences" the point of the JS/SPA/React boondoggle?7

We have witnessed high-cost, low-quality JS-stack rewrites of otherwise functional HTML-first sites ambush businesses with reduced revenue and higher costs for a decade. It is no less of a scandal for how pervasive it has become.

But good luck finding solutions to, or even acknowledgement of, that scandal on React conference agendas. The reality is that the more React spreads, the worse the results get despite the eye-watering sums spent on conversions away from functional "legacy" HTML-first approaches. Many at React Summit were happy to make these points to me in private, but not on the main stage. The JS-industrial-complex omertà is intense.

No speaker I heard connected the dots between this crisis and the moves of the React team in response to the emergence of comparative quality metrics. React Fiber (née "Concurrent"), React Server Components, the switch away from Create React App, and the React Compiler were discussed as logical next steps, rather than what they are: attempts to stay one step ahead of the law. Everyone in the room was implicitly expected to use their employer's money to adopt all of these technologies, rather than reflect on why all of this has been uniquely necessary in the land of the Over Reactors.8

The treadmill is real, but even at this late date, developers are expected to take promises of quality and productivity at face value, even as they wade through another swamp of configuration cruft, bugs, and upgrade toil.

React cannot fail, it can only be failed.

And then there was the privilege bubble. Speaker after speaker stressed development speed, including the ability to ship to mobile and desktop from the same React code. The implications for complexity-management, user-experience, and access were less of a focus.

The most egregious example of the day came from Evan Bacon in his talk about Expo, in which he presented Burger King's website as an example of a brand successfully shipping simultaneously to web and native from the same codebase. Here it is under WebPageTest.org's desktop setup:9


As you might expect, putting 75% of the 3.5MB JS payload (15MB unzipped) in the critical path does unpleasant things to the user experience, but none of the dizzying array of tools involved in constructing bk.com steered this team away from failure.10

The fact that Expo enables Burger King to ship a native app from the same codebase seems not to have prevented the overwhelming majority of users from visiting the site in browsers on their mobile devices, where weaker mobile CPUs struggle mightily:


The CrUX data is damning:


This sort of omnishambles is what folks mean when they say that "JavaScript broke the web and called it progress".

Is waiting 30 seconds for a loading spinner bad? Asking for an industry.

The other poster child for Expo is Bluesky, a site that also serves web and React Native from the same codebase. It's so bewilderingly laden with React-ish excesses that their engineering choices qualify as gifts-in-kind to Elon Musk and Mark Zuckerberg:


Why is Bluesky so slow? A huge, steaming pile of critical-path JS, same as Burger King:


Again, we don't need to look deeply into the black box to understand that there's something rotten about the compromises involved in choosing React Native + Expo + React Web. This combination clearly prevents teams from effectively managing performance or even adding offline resilience via Service Workers. Pinafore and Elk manage to get both right, providing great PWA experiences while being built on a comparative shoestring. It's possible to build a great social SPA experience, but maybe just not with React:

If we're going to get out of this mess, we need to stop conflating failure with success. The entire point of this tech was to deliver better user experiences, and so the essential job of management is to ask: does it?
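For the record, the offline resilience mentioned above is not exotic. A bare-bones service worker sketch (the file and asset names here are hypothetical; real PWAs like Pinafore and Elk layer smarter caching strategies on top):

    // sw.ts: cache a small app shell at install time, then serve cache-first.
    const CACHE = "app-shell-v1";
    const SHELL = ["/", "/app.css", "/app.js"]; // hypothetical asset list

    self.addEventListener("install", (event: any) => {
      event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(SHELL)));
    });

    self.addEventListener("fetch", (event: any) => {
      // Serve from the cache when possible, falling back to the network.
      event.respondWith(
        caches.match(event.request).then((hit) => hit ?? fetch(event.request))
      );
    });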

The unflattering comparisons are everywhere when you start looking. Tanner Linsley's talk on TanStack (not yet online) was, in essence, a victory lap. It promised high quality web software and better time-to-market, leaning on popularity contest results and unverifiable, untested claims about productivity to pitch the assembled. To say that this mode of argument is methodologically unsound is an understatement. Rejecting it is necessary if we're going to do engineering rather than marketing.

Popularity is not an accepted unit of engineering quality measurement.

The TanStack website cites this social proof as an argument for why their software is great, but the proof of the pudding is in the eating:


The contrast grows stark as we push further outside the privilege bubble. Here are the same sites, using the same network configuration as before, but with the CPU emulation modelling a cheap Android instead:

An absolute rout. The main difference? The amount of JS each site sends, which is a direct reflection of values and philosophy.
Site           Wire JS       Decoded JS    TBT (ms)
astro.build       11.1 kB       28.9 kB         23
hotwired.dev       1.8 kB        3.6 kB          0
11ty.dev          13.1 kB       42.2 kB          0
expo.dev       1,526.1 kB    5,037.6 kB        578
tanstack.com   1,143.8 kB    3,754.8 kB        366

Yes, these websites target developers on fast machines. So what? The choices they make speak to the values of their creators. And those values shine through the fog of marketing when we use objective quality measures. The same sorts of engineers who care to shave a few bytes of JS for users on fast machines will care about the lived UX quality of their approach all the way down the wealth curve. The opposite also holds.

It is my long experience that cultures which claim "it's fine" to pay for a lot of JS up-front to gain (unquantified) benefits in another dimension almost never check whether either side of that trade actually pays off.

Programming-as-pop-culture is oppositional to the rigour required of engineering. When the folks talking loudest about "scale" and "high quality" and "delivery speed" (without metrics or measurement) continually plop out crappy experiences, but are given huge megaphones anyway, we need to collectively recalibrate.

There were some bright spots at React Summit, though. A few brave souls tried to sneak perspective in through the side door, and I applaud their efforts:

If the Q&A sessions after my talk are any indication, Luca faced serious risk of being ignored as a heretic for putting this on a slide.

If frontend aspires to be a profession11 — something we do for others, not just ourselves — then we need a culture that can learn to use statistical methods for measuring quality and reject the sort of marketing that still dominates the React discourse.

And if that means we have to jettison React along the way, so be it.


  1. For attendees, JSNation and React Summit were separate events, although one could buy passes that provided access to both. My impression is that many did. As they were in the same venue, this may have simplified some logistics for the organisers, and it was a good way to structure content for adjacent, but not strictly overlapping, communities of interest.

  2. Again, my thanks to the organisers for letting me sit in on this meeting. As with much of my work, my goal was to learn about what's top of mind to the folks solving problems for developers in order to prioritise work on the Web Platform.

    Without giving away confidences from a closed-door meeting, I'll just say that it was refreshing to hear framework authors tell us that they need better HTML elements and that JSX's default implementations are scaling exactly as poorly ecosystem-wide as theory and my own experience suggest. This is down to React's devil-may-care attitude to memory.

    It's not unusual to see heavy GC stalls on the client as a result of Facebook's always-wrong assertion that browsers are magical and that CPU costs don't matter. But memory is a tricksy issue, and it's a limiting factor on the server too.

    Lots to chew on from those hours, and I thank the folks who participated for their candor, which was likely made easier since nobody from the React team deigned to join.

  3. Or worse, don't.

    Luckily, some who experience the tug of conscience punch out and write about it. Any post-McKinsey tell-all will do, but Anand Giridharadas is good value for money in this genre.

  4. Circular logic is a constant in discussions with frameworkists. A few classics of the genre that got dusted off in conversations over the conference:

    • "The framework makes us more productive."

      Oh? And what's the objective evidence for that productivity gain?

      Surely, if it's as large as frameworkists claim, economists would have noted the effects in aggregate statistics. But we have not seen that. Indeed, there's no credible evidence that we are seeing anything more than the bog-stock gains from learning in any technical field, except that the combinatorial complexity of JS frameworks may, in itself, reduce those gains.

      But nobody's running real studies that compare proficient HTML + CSS (or even jQuery) developers to React developers under objective criteria, and so personal progression is frequently cited as evidence for collective gains, which is obviously nonsensical. It's just gossip.
    • "But we can hire for the framework."

      😮 sigh 😮‍💨
    • "The problem isn't React, it's the developers."

      Hearing this self-accusation offered at a React conference was truly surreal. In a room free of discussions about real engineering constraints, victim-blaming casts a shadow of next-level cluelessness. But frameworkists soldier on, no matter the damage it does to their argument. Volume and repetition seem key to pressing this line with a straight face.
  5. A frequently elided consequence of regulators scrutinising Apple's shocking "oversight" of its app store has been that Apple has relaxed restrictions around PWAs on iOS that were previously enforced often enough in the breach to warn businesses away. But that's over now. To reach app stores on Windows, iOS, and Android, all you need is a cromulent website and PWABuilder.

    For most developers, the entire raison d'être for React Native is now kaput; entirely overcome by events. Not that you'd hear about it at an assemblage of Over Reactors.

  6. Instead of describing React's exclusive ownership of subtrees of the DOM, along with the introduction of a proprietary, brittle, and hard-to-integrate parallel lifecycle as a totalising framework that demands bespoke integration effort, the marketing term "composability" was substituted to describe the feeling of giving everything over to JSX-flavoured angle brackets every time a utility is needed.

  7. It has been nearly a decade since the failure of React to reliably deliver better user experiences gave rise to the "Developer Experience" bait-and-switch.

  8. Mark Erikson's talk was ground-zero for this sort of obfuscation. At the time of writing, the recording isn't up yet, but I'll update this post with analysis when it is. I don't want to heavily critique from my fallible memory.

  9. WPT continues to default desktop tests to a configuration that throttles to 5Mbps down, 1Mbps up, with 28ms of RTT latency added to each packet. All tests in this post use a somewhat faster configuration (9Mbps up and down) but with 170ms RTT to better emulate usage from marginal network locations and the effects of full pipes.

  10. I read the bundles so you don't have to.

    So what's in the main, 2.7MB (12.5MB unzipped) bk.com bundle? What follows is a stream-of-consciousness rundown as I read the pretty-printed text top-to-bottom. At the time of writing, it appears to include:

    • A sea of JS objects allocated by the output of a truly cursed "CSS-in-JS" system. As a reminder, "CSS-in-JS" systems with so-called "runtimes" are the slowest possible way to provide styling to web UI. An ominous start.
    • React Native Reanimated (no, I'm not linking to this garbage), which generates rAF-based animations on the web in The Year of Our Lord 2025, a full five years after Safari finally dragged its ass into the 2010s and implemented the Web Animations API. As a result, Reanimated is Jank City. Jank Town. George Clinton and the Parliament Jankidellic. DJ Janky Jeff. Janky Jank and the Janky Bunch. Ole Jankypants. The Undefeated Heavyweight Champion of Jank. You get the idea; it drops frames.
    • Redefinitions of the built-in CSS colour names, because at no point traversing the towering inferno of build tools was it possible to know that this web-targeted artefact would be deployed to, you know, browsers.
    • But this makes some sense, because the build includes React Native Web, which is exactly what it sounds like: a set of React components that emulate the web to provide a build target for React Native's re-creation of a subset of the layout that browsers are natively capable of, which really tells you everything you need to know about how teams get into this sort of mess.
    • Huge amounts of code duplication via inline strings that include the text of functions right next to the functions themselves. Yes, you're reading that right: some part of this toolchain is doubling up the code in the bundle, presumably for the benefit of a native debugger. Bro, do you even sourcemap? At this point it feels like I'm repeating myself, but none of this is necessary on the web, and none of the (many, many) compiler passes saw fit to eliminate this waste in a web-targeted build artefact.
    • Another redefinition of the built-in CSS colour names and values. In browsers that support them natively. I feel like I'm taking crazy pills.
    • A full copy of React, which is almost 10x larger than it needs to be in order to support legacy browsers and React Native.
    • Tens (no, hundreds) of thousands of lines of auto-generated schema validation structures and repeated, useless getter functions for data that will never be validated on the client. How did this ungodly cruft get into the bundle? One guess, and it rhymes with "schmopallo".
    • Of course, no bundle this disastrous would be complete without multiple copies of polyfills for widely supported JS features like Object.assign(), class private fields, generators, spread, async iterators, and much more.
    • Inline'd WASM code, appearing as a gigantic JS array. No, this is not a joke.
    • A copy of Lottie. Obviously.
    • What looks to be the entire AWS Amplify SDK. So much for tree-shaking.
    • A userland implementation of elliptic curve cryptography primitives that are natively supported in every modern browser via Web Crypto.
    • Inline'd SVGs, but not as strings. No, that would be too efficient. They're inlined as React components.
    • A copy of the app's Web App Manifest, inline, as a string. You cannot make this up.

    Given all of this high-cost, low-quality output, it might not surprise you to learn that the browser's coverage tool reports that more than 75% of functions are totally unused after loading and clicking around a bit.

  11. I'll be the first to point out that what Brandeis is appealing to is distinct from credentialing. As a state-school dropout, that difference matters to me very personally, and it has not been edifying to see credentialism (in the form of dubious bootcamps) erode both the content and form of learning in "tech" over the past few years.

    That doesn't mean I have all the answers. I'm still uncomfortable with the term "professional" for all the connotations it now carries. But I do believe we should all aspire to do our work in a way that is compatible with Brandeis' description of a profession. To do otherwise is to endanger any hope of self-respect and even our social licence to operate.

Safari at WWDC '25: The Ghost of Christmas Past

At Apple's annual developer marketing conference, the Safari team announced a sizeable set of features that will be available in a few months. Substantially all of them are already shipped in leading-edge browsers. Here's the list, prefixed by the year that these features shipped to stable in Chromium:

In many cases, these features were available to developers even earlier via the Origin Trials mechanism. WebGPU, e.g., ran trials for a year, allowing developers to try the in-development feature on live sites in Chrome and Edge as early as September 2021.

There are features that Apple appears to be leading on in this release, but it's not clear that they will become available in Safari before Chromium-based browsers launch them, given that the announcement is about a beta:

The announced support for CSS image crossorigin() and referrerpolicy() modifiers has an unclear relationship to other browsers, judging by the wpt.fyi tests.

On balance, this is a lot of catch-up with sparse sprinklings of leadership. This makes sense, because Safari is usually in last place when it comes to feature completeness:

A graph of features missing from only one engine. Over the past decade, Safari and WebKit have consistently brought up the caboose.

And that is important because Apple's incredibly shoddy work impacts every single browser on iOS.

You might recall that Apple was required by the EC to enable browser engine choice for EU citizens under the Digital Markets Act. Cupertino, per usual, was extremely chill about it, threatening to end PWAs entirely and offering APIs that are inadequate or broken.

And those are just the technical obstacles that Apple has put up. The proposed contractual terms (pdf) are so obviously onerous that no browser vendor could ever accept them, and are transparently disallowed under the DMA's plain language. But respecting the plain language of the law isn't Apple's bag.

All of this is to say that Apple is not going to allow better browsers on iOS without a fight, and it remains dramatically behind the best engines in performance, security, and features. Meanwhile, we now know that Apple is likely skimming something like $19BN per year in pure profit from its $20+BN/yr of revenue from its deal with Google. That's a 90+% profit rate, which is only reduced by the paltry amount it re-invests into WebKit and Safari.

So to recap: Apple's Developer Relations folks want you to be grateful to Cupertino for unlocking access to features that Apple has been the singular obstacle to.

And they want you to ignore the fact that for the past decade it has hobbled the web while skimming obscene profits from the ecosystem.

Don't fall for it. Ignore the gaslighting. Apple could 10x the size of the WebKit team without causing the CTO to break a sweat, and there are plenty of great browser engineers on the market today. Suppressing the web is a choice — Apple's choice — and not one that we need to feel gratitude toward.

If Not React, Then What?

Over the past decade, my work has centred on partnering with teams to build ambitious products for the web across both desktop and mobile. This has provided a ring-side seat to a sweeping variety of teams, products, and technology stacks across more than 100 engagements.

While I'd like to be spending most of this time working through improvements to web APIs, the majority of time spent with partners goes to remediating performance and accessibility issues caused by "modern" frontend frameworks and the culture surrounding them. Today, these issues are most pronounced in React-based stacks.

This is disquieting because React is legacy technology, but it continues to appear in greenfield applications.

Surprisingly, some continue to insist that React is "modern." Perhaps we can square the circle if we understand "modern" to apply to React in the way it applies to art. Neither demonstrates contemporary design or construction. They are not built for current needs or performance standards, but stand as expensive objets harkening back to the peak of an earlier era's antiquated methods.

In the hope of steering the next team away from the rocks, I've found myself penning advocacy pieces and research into the state of play, as well as giving talks to alert managers and developers of the dangers of today's frontend orthodoxy.

In short, nobody should start a new project in the 2020s based on React. Full stop.1

Code that runs on the server can be fully costed. Performance and availability of server-side systems are under the control of the provisioning organisation, and latency can be actively managed.

Code that runs on the client, by contrast, is running on The Devil's Computer.2 Almost nothing about the latency, client resources, or even API availability is under the developer's control.

Client-side web development is perhaps best conceived of as influence-oriented programming. Once code has left the datacenter, all a web developer can do is send thoughts and prayers.

As a direct consequence, an unreasonably effective strategy is to send less code. Declarative forms generate more functional UI per byte sent. In practice, this means favouring HTML and CSS over JavaScript, as they degrade gracefully and feature higher compression ratios. These improvements in resilience and reductions in costs are beneficial in compounding ways over a site's lifetime.

Stacks based on React, Angular, and other legacy-oriented, desktop-focused JavaScript frameworks generally take the opposite bet. These ecosystems pay lip service to the controls that are necessary to prevent horrific proliferations of unnecessary client-side cruft. The predictable consequence is NPM-amalgamated bundles full of redundancies like core-js, lodash, underscore, polyfills for browsers that no longer exist, userland ECC libraries, moment.js, and a hundred other horrors.

This culture is so out of hand that it seems 2024's React developers are constitutionally unable to build chatbots without including all of these 2010s holdovers, plus at least one extremely chonky MathML or TeX formatting library in the critical path to display an <input>. A tiny fraction of query responses need to display formulas — and yet.

Tech leads and managers need to break this spell. Ownership has to be created over decisions affecting the client. In practice, this means forbidding React in new work.

This question comes in two flavours that take some work to tease apart:

  • The narrow form:

    "Assuming we have a well-qualified need for client-side rendering, what specific technologies would you recommend instead of React?"

  • The broad form:

    "Our product stack has bet on React and the various mythologies that the cool kids talk about on React-centric podcasts. You're asking us to rethink the whole thing. Which silver bullet should we adopt instead?"

Teams that have grounded their product decisions appropriately can productively work through the narrow form by running truly objective bake offs.

Building multiple small PoCs to determine each approach's scaling factors and limits can even be a great deal of fun.3 It's the rewarding side of real engineering: trying out new materials under well-understood constraints to improve user outcomes.

Just the prep work to run bake offs tends to generate value. In most teams, constraints on tech stack decisions have materially shifted since they were last examined. For some, identifying use-cases reveals a reality that's vastly different than product managers and tech leads expect. Gathering data on these factors allows for first-pass cuts about stack choices, winnowing quickly to a smaller set of options to run bake offs for.4

But the teams we spend the most time with don't have good reasons to stick with client-side rendering in the first place.

Many folks asking "if not React, then what?" think they're asking in the narrow form but are grappling with the broader version. A shocking fraction of (decent, well-meaning) product managers and engineers haven't thought through the whys and wherefores of their architectures, opting instead to go with what's popular in a responsibility fire brigade.5

For some, provocations to abandon React create an unmoored feeling; a suspicion that they might not understand the world any more.6

Teams in this position are working through the epistemology of their values and decisions.7 How can they know their technology choices are better than the alternatives? Why should they pick one stack over another?

Many need help deciding which end of the telescope to examine frontend problems through because frameworkism has become the dominant creed of frontend discourse.

Frameworkism insists that all problems will be solved if teams just framework hard enough. This is a non sequitur, if not entirely backwards. In practice, the only thing that makes web experiences good is caring about the user experience — specifically, the experience of folks at the margins. Technologies come and go, but what always makes the difference is giving a toss about the user.

In less vulgar terms, the struggle is to convince managers and tech leads that they need to start with user needs. Or, as Public Digital puts it, "design for user needs, not organisational convenience".

The essential component of this mindset shift is replacing hopes based on promises with constraints based on research and evidence. This aligns with what it means to commit wanton acts of engineering, because engineering is the practice of designing solutions to problems for users and society under known constraints.

The opposite of engineering is imagining that constraints do not exist, or do not apply to your product. The shorthand for this is "bullshit."

Ousting an engrained practice of bullshitting does not come easily. Frameworkism preaches that the way to improve user experiences is to adopt more (or different) tooling from within the framework's ecosystem. This provides adherents with something to do that looks plausibly like engineering, but isn't.

It can even become a totalising commitment; solutions to user problems outside the framework's expanded cinematic universe are unavailable to the frameworkist. Non-idiomatic patterns that unlock wins for users are bugs to be squashed, not insights to be celebrated. Without data or evidence to counterbalance the bullshit artists' assertions, who's to say they're wrong? So frameworkists root out and devalue practices that generate objective criteria in decision-making. Orthodoxy unmoored from measurement predictably spins into absurdity. Eventually, heresy carries heavy sanctions.

And it's all nonsense.

Realists do not wallow in abstraction-induced hallucinations about user experiences; they measure them. Realism requires reckoning with the world as it is, not as we wish it to be. In that way, realism is the opposite of frameworkism.

The most effective tools for breaking the spell are techniques that give managers a user-centred view of system performance. This can take the form of RUM data, such as Core Web Vitals (check yours now!), or lab results from well-configured test-benches (e.g., WPT). Instrumenting critical user journeys and talking through business goals are quick follow-ups that enable teams to seize the momentum and formulate business cases for change.

RUM and bench data sources are essential antidotes to frameworkism because they provide data-driven baselines to argue from, creating a shared observable reality. Instead of accepting the next increment of framework investment on faith, teams armed with data can weigh up the costs of fad chasing versus likely returns.
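Collecting this kind of field data is not a heavy lift. Here's a minimal sketch using the standard PerformanceObserver API; the /rum endpoint is hypothetical, and the layout-shift accounting is simplified relative to the official CLS definition:

```js
// Minimal RUM sketch: observe Largest Contentful Paint and layout shifts
// with standard PerformanceObserver entry types, then beacon the results
// to a (hypothetical) /rum endpoint when the page is hidden.
let lcp = 0;
let cls = 0;

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    lcp = entry.startTime; // the latest LCP candidate wins
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Ignore shifts caused by recent input; note the official CLS metric
    // also groups shifts into session windows, which is omitted here.
    if (!entry.hadRecentInput) cls += entry.value;
  }
}).observe({ type: 'layout-shift', buffered: true });

addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden') {
    navigator.sendBeacon('/rum', JSON.stringify({ lcp, cls, page: location.pathname }));
  }
});
```

Purpose-built libraries (e.g., the web-vitals package) handle the metric definitions' edge cases, but the point stands: a few dozen lines of instrumentation are enough to replace vibes with numbers.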

Prohibiting the spread of React (and other frameworkist totems) by policy is both an incredible cost savings and a helpful way to reorient teams towards delivery for users. However, better results only arrive once frameworkism itself is eliminated from decision-making. It's no good to spend the windfall from avoiding one sort of mistake on errors within the same category.

A general answer to the broad form of the problem has several parts:

  • User focus: decision-makers must accept that they are directly accountable for the results of their engineering choices. Either systems work well for users,8 including those at the margins, or they don't. Systems that do not perform must be replaced. No sacred cows, only problems to be solved with the appropriate application of constraints.

  • Evidence: the essential shared commitment between management and engineering is a dedication to realism. Better evidence must win.

  • Guardrails: policies must be implemented to ward off hallucinatory frameworkist assertions about how better experiences are delivered. Good examples of this include the UK Government Digital Service's requirement that services be built using progressive enhancement techniques. Organisations can tweak guidance as appropriate — e.g., creating an escalation path for exceptions — but the important thing is to set a baseline. Evidence boiled down into policy has power.

  • Bake Offs: no new system should be deployed without a clear list of critical user journeys. Those journeys embody what users do most frequently, and once those definitions are in hand, teams can do bake offs to test how well various systems deliver given the constraints of the expected marginal user (a minimal instrumentation sketch follows this list).
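Instrumenting those journeys doesn't have to be elaborate, either. A minimal sketch using the standard User Timing API, in which the journey name, the addToCart() helper, and the /analytics endpoint are all illustrative assumptions:

```js
// Sketch: time a critical user journey ("add to cart") with performance
// marks and measures so lab and field runs can be compared on equal terms.
function journeyStart(name) {
  performance.mark(`${name}:start`);
}

function journeyEnd(name) {
  performance.mark(`${name}:end`);
  performance.measure(name, `${name}:start`, `${name}:end`);
  const entry = performance.getEntriesByName(name, 'measure').pop();
  navigator.sendBeacon('/analytics', JSON.stringify({ journey: name, duration: entry.duration }));
}

document.querySelector('#add-to-cart')?.addEventListener('click', () => {
  journeyStart('add-to-cart');
  addToCart() // hypothetical app function returning a promise
    .finally(() => journeyEnd('add-to-cart'));
});
```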

All of this casts the product manager's role in stark relief. Instead of suggesting an endless set of experiments to run (poorly), they must define a product thesis and commit to an understanding of what success means in terms of user success. This will be uncomfortable. It's also the job. Graciously accept the resignations of PMs who decide managing products is not in their wheelhouse.

To see how realism and frameworkism differ in practice, it's helpful to work a few examples. As background, recall that our rubric9 for choosing technologies is based on the number of incremental updates to primary data in a session. Some classes of app, like editors, feature long sessions and many incremental updates where a local data model can be helpful in supporting timely application of updates, but this is the exception.

Sites with short average sessions cannot afford much JS up-front.

It's only in these exceptional instances that SPA architectures should be considered.

Very few sites will meet the qualifications to be built as an SPA.

And only when an SPA architecture is required should tools designed to support optimistic updates against a local data model — including "frontend frameworks" and "state management" tools — ever become part of a site's architecture.

The choice isn't between JavaScript frameworks, it's whether SPA-oriented tools should be entertained at all.

For most sites, the answer is clearly "no".

We can examine broad classes of site to understand why this is true:

Sites built to inform should almost always be built using semantic HTML with optional progressive enhancement as necessary.

Static site generation tools like Hugo, Astro, 11ty, and Jekyll work well for many of these cases. Sites that have content that changes more frequently should look to "classic" CMSes or tools like WordPress to generate HTML and CSS.

Blogs, marketing sites, company home pages, and public information sites should minimise client-side JavaScript to the greatest extent possible. They should never be built using frameworks that are designed to enable SPA architectures.10

Informational sites have short sessions and server-owned application data models; that is, the source of truth for what's displayed on the page is always the server's to manage and own. This means that there is no need for a client-side data model abstraction or client-side component definitions that might be updated from such a data model.
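To make "optional progressive enhancement" concrete, here's a minimal sketch: the server renders an ordinary paginated list, and a small script upgrades the "next page" link into in-place loading. The selectors and URL scheme are illustrative, and without JavaScript the link still works as a plain navigation:

```js
// The server renders <a href="/articles?page=2" data-next-page>Next</a>.
// This script is an enhancement; if it never runs, the link still works.
const nextLink = document.querySelector('a[data-next-page]');

nextLink?.addEventListener('click', async (event) => {
  event.preventDefault();
  const response = await fetch(nextLink.href);
  if (!response.ok) {
    location.href = nextLink.href; // fall back to full navigation
    return;
  }
  const doc = new DOMParser().parseFromString(await response.text(), 'text/html');
  // Append the freshly rendered items, then advance (or remove) the link.
  document.querySelector('#article-list')
    .append(...doc.querySelectorAll('#article-list > li'));
  const newNext = doc.querySelector('a[data-next-page]');
  newNext ? (nextLink.href = newNext.href) : nextLink.remove();
});
```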

Note: many informational sites include productivity components as distinct sub-applications, which can be evaluated independently. For example, CMSes such as WordPress comprise two distinct surfaces: post editors that are low-traffic but high-interactivity, and published pages, which are high-traffic, low-interactivity viewers. Progressive enhancement should be considered for both, but is an absolute must for reader views, which do not feature long sessions.9:1

E-commerce sites should be built using server-generated semantic HTML and progressive enhancement.

A large and stable performance gap between Amazon and its React-based competitors demonstrates how poorly SPA architectures perform in e-commerce applications. More than 70% of Walmart's traffic is mobile, making their bet on Next.js particularly problematic for the business.

Many tools are available to support this architecture. Teams building e-commerce experiences should prefer stacks that deliver no JavaScript by default, and buttress that with controls on client-side script to prevent regressions in material business metrics.
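Those controls can be as blunt as a build-time budget that fails CI. A minimal sketch in Node, where the dist/ directory and the 150 kB figure are assumptions to adapt rather than recommendations (and note it totals uncompressed JS rather than per-page critical-path weight):

```js
// check-js-budget.mjs — fail the build when client-side JS exceeds a budget.
import { readdirSync, statSync } from 'node:fs';
import { join, extname } from 'node:path';

const BUDGET_BYTES = 150 * 1024; // illustrative budget
const distDir = process.argv[2] ?? 'dist';

function jsBytes(dir) {
  let total = 0;
  for (const name of readdirSync(dir)) {
    const path = join(dir, name);
    const stats = statSync(path);
    if (stats.isDirectory()) total += jsBytes(path);
    else if (extname(name) === '.js') total += stats.size;
  }
  return total;
}

const total = jsBytes(distDir);
console.log(`Client JS: ${(total / 1024).toFixed(1)} kB (budget: ${BUDGET_BYTES / 1024} kB)`);
if (total > BUDGET_BYTES) {
  console.error('JavaScript budget exceeded; failing the build.');
  process.exit(1);
}
```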

The general form of e-commerce sites has been stable for more than 20 years:

  • Landing pages with current offers and a search function for finding products.
  • Search results pages which allow for filtering and comparison of products.
  • Product-detail pages that host media about products, ratings, reviews, and recommendations for alternatives.
  • Cart management, checkout, and account management screens.

Across all of these page types, a pervasive login and cart status widget will be displayed. Sometimes this widget, and the site's logo, are the only consistent elements.

Long experience demonstrates very little shared data across these pages, highly variable session lengths, and a need for fresh content (e.g., prices) from the server. The best way to reduce latency in e-commerce sites is to optimise for lightweight, server-generated pages. Aggressive caching, image optimisation, and page-weight reduction strategies all help.

Media consumption sites vary considerably in session length and data update potential. Most should start as progressively-enhanced markup-based experiences, adding complexity over time as product changes warrant it.

Many interactive elements on media consumption sites can be modelled as distinct islands of interactivity (e.g., comment threads). Many of these components present independent data models and can therefore be constructed as progressively-enhanced Web Components within a larger (static) page.
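A sketch of what such an island can look like: a custom element that wraps server-rendered comments and only adds behaviour on top of them. The tag name and endpoint are illustrative:

```js
// The server renders:
//   <comment-thread data-endpoint="/api/comments?post=42"> …first page… </comment-thread>
// Without JavaScript, the server-rendered comments are still readable;
// the element only adds a "load more" affordance.
class CommentThread extends HTMLElement {
  connectedCallback() {
    if (this.querySelector('button[data-load-more]')) return; // guard against re-connects
    const button = document.createElement('button');
    button.dataset.loadMore = '';
    button.textContent = 'Load more comments';
    button.addEventListener('click', () => this.loadMore(button));
    this.append(button);
  }

  async loadMore(button) {
    button.disabled = true;
    try {
      const res = await fetch(this.dataset.endpoint);
      // Assumes the endpoint returns a ready-to-insert HTML fragment.
      button.insertAdjacentHTML('beforebegin', await res.text());
    } finally {
      button.disabled = false;
    }
  }
}

customElements.define('comment-thread', CommentThread);
```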

This model breaks down when media playback must continue across media browsing (think "mini-player" UIs). A fundamental limitation of today's web platform is that it is not possible to preserve some elements from a page across top-level navigations. Sites that must support features like this should consider using SPA technologies while setting strict guardrails for the allowed size of client-side JS per page.

Another reason to consider client-side logic for a media consumption app is offline playback. Managing a local (Service Worker-backed) media cache requires application logic and a way to synchronise information with the server.

Lightweight SPA-oriented frameworks may be appropriate here, along with connection-state resilient data systems such as Zero or Y.js.

Social media apps feature significant variety in session lengths and media capabilities. Many present infinite-scroll interfaces and complex post editing affordances. These are natural dividing lines in a design that align well with session depth and client-vs-server data model locality.

Most social media experiences involve a small, fixed number of actions on top of a server-owned data model ("liking" posts, etc.), as well as a distinct update phase for new media arriving at an interval. This model works well with a hybrid approach, such as is found in Hotwire and many HTMX applications.

Islands of deep interactivity may make sense in social media applications, and aggressive client-side caching (e.g., for draft posts) may aid in building engagement. It may be helpful to think of these as unique app sections with distinct needs from the main site's role in displaying content.

Offline support may be another reason to download a snapshot of user data to the client. This should be part of an approach that builds resilience against flaky networks. Teams in this situation should consider Service Worker-based, multi-page apps with "stream stitching". This allows sites to stick with HTML while enabling offline-first logic and synchronisation. Because offline support is so invasive to an architecture, this requirement must be identified up-front.

Note: Many assume that SPA-enabling tools and frameworks are required to build compelling Progressive Web Apps that work well offline. This is not the case. PWAs can be built using stream-stitching architectures that apply the equivalent of server-side templating to data on the client, within a Service Worker.
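A minimal sketch of the idea inside a Service Worker: navigations are answered by streaming a cached header shell, a content fragment (from the network, falling back to a cached offline page), and a cached footer into a single HTML response. The fragment URLs and cache name are assumptions about how a site might be organised:

```js
// sw.js — stream-stitching sketch.
const SHELL = 'shell-v1';

self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(SHELL).then((cache) =>
      cache.addAll(['/fragments/header.html', '/fragments/footer.html', '/fragments/offline.html'])
    )
  );
});

self.addEventListener('fetch', (event) => {
  if (event.request.mode !== 'navigate') return;

  event.respondWith((async () => {
    const cache = await caches.open(SHELL);
    const parts = [
      cache.match('/fragments/header.html'),
      fetch(`/fragments/content${new URL(event.request.url).pathname}`)
        .catch(() => cache.match('/fragments/offline.html')),
      cache.match('/fragments/footer.html'),
    ];

    // Pipe each part into one identity TransformStream so the header can
    // start rendering before the content fragment has even arrived.
    const { readable, writable } = new TransformStream();
    (async () => {
      for (const part of parts) {
        await (await part).body.pipeTo(writable, { preventClose: true });
      }
      await writable.close();
    })();

    return new Response(readable, { headers: { 'Content-Type': 'text/html' } });
  })());
});
```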

With the advent of multi-page view transitions, MPA architecture PWAs can present fluid transitions between user states without heavyweight JavaScript bundles clogging up the main thread. It may take several more years for the framework community to digest the implications of these technologies, but they are available today and work exceedingly well, both as foundational architecture pieces and as progressive enhancements.

Document-centric productivity apps may be the hardest class to reason about, as collaborative editing, offline support, and lightweight "viewing" modes with full document fidelity are hard product requirements.

Triage-oriented experiences (e.g. email clients) are also prime candidates for the potential benefits of SPA-based technology. But as with all SPAs, the ability to deliver a better experience hinges both on session depth and up-front payload cost. It's easy to lose this race, as this blog has examined in the past.

Editors of all sorts are a natural fit for local data models and SPA-based architectures to support modifications to them. However, the endemic complexity of these systems ensures that performance will remain a constant struggle. As a result, teams building applications in this style should consider strong performance guardrails, identify critical user journeys up-front, and ensure that instrumentation is in place to ward off unpleasant performance surprises.

Editors frequently feature many updates to the same data (e.g., for every keystroke or mouse drag). Applying updates optimistically and only informing the server asynchronously of edits can deliver a superior experience across long editing sessions.
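The shape of that approach, in a minimal sketch where the render() helper, the payload format, and the /api/save endpoint are all illustrative:

```js
// Apply edits to the local model immediately (the UI never waits on the
// network), then push them to the server asynchronously, coalesced and
// in order.
const doc = { id: 'doc-123', content: '', version: 0 };
let saveTimer = null;
let inFlight = Promise.resolve();

function applyEdit(newContent) {
  // 1. Optimistic, synchronous local update; the UI renders from this model.
  doc.content = newContent;
  doc.version += 1;
  render(doc); // hypothetical render function

  // 2. Debounce the network write so rapid keystrokes coalesce into one save.
  clearTimeout(saveTimer);
  saveTimer = setTimeout(flush, 500);
}

function flush() {
  const snapshot = { ...doc };
  // Chain saves so they reach the server in order.
  inFlight = inFlight
    .then(() => fetch('/api/save', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(snapshot),
    }))
    .catch(() => {
      // On failure, keep the local copy and retry shortly.
      saveTimer = setTimeout(flush, 2000);
    });
}
```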

However, teams should be aware that editors may also perform double duty as viewers and that the weight of up-front bundles may not be reasonable for both cases. Worse, it can be hard to tease viewing sessions apart from heavy editing sessions at page load time.

Teams that succeed in these conditions build extreme discipline about the modularity, phasing, and order of delayed package loading based on user needs (e.g., only loading editor components users need when they require them). Teams that get stuck tend to fail to apply controls over which team members can approve changes to critical-path payloads.
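In practice, much of that discipline boils down to dynamic import() at well-chosen seams. A minimal sketch, in which the module path and the loading affordances are illustrative:

```js
// Keep the viewer path light; load the heavy editing modules only when the
// user actually starts editing.
let editorPromise = null;

function loadEditor() {
  // import() caches the module, but caching the promise avoids re-triggering
  // any loading UI on repeat calls.
  editorPromise ??= import('./editor/full-editor.js');
  return editorPromise;
}

document.querySelector('#edit-button')?.addEventListener('click', async () => {
  showSpinner(); // hypothetical loading affordance
  const { mountEditor } = await loadEditor();
  mountEditor(document.querySelector('#document-root'));
  hideSpinner();
});

// Optionally warm the cache when an edit looks likely, without putting the
// bytes in the critical path.
document.querySelector('#edit-button')?.addEventListener('pointerenter', loadEditor, { once: true });
```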

Some types of apps are intrinsically interactive, focus on access to local device hardware, or centre on manipulating media types that HTML doesn't handle intrinsically. Examples include 3D CAD systems, programming editors, game streaming services, web-based games, media-editing, and music-making systems. These constraints often make client-side JavaScript UIs a natural fit, but each should be evaluated critically:

  • What are the critical user journeys?
  • How long will average sessions be?
  • Do many updates to the same data take place in a session?
  • What metrics will we track to ensure that performance remains acceptable?
  • How will we place tight controls on critical-path script and other resources?

Success in these app classes is possible on the web, but extreme care is required.

A Word On Enterprise Software: Some of the worst performance disasters I've helped remediate are from a category we can think of, generously, as "enterprise line-of-business apps". Dashboards, workflow systems, corporate chat apps, that sort of thing.

Teams building these excruciatingly slow apps often assert that "startup performance isn't important because people start our app in the morning and keep it open all day". At the limit, this can be true, but what this attempted deflection obscures is that performance is cultural. Teams that fail to define and measure critical user journeys (including loading) always fail to manage post-load interactivity too.

The old saying "how you do anything is how you do everything" is never more true than in software usability.

One consequence of cultures that fail to put the user first is products whose usability is so poor that attributes which didn't matter at the time of sale (like performance) become reasons to switch.

If you've ever had the distinct displeasure of using Concur or Workday, you'll understand what I mean. Challengers win business from them not by being wonderful, but simply by being usable. These incumbents are powerless to respond because their problems are now rooted deeply in the behaviours they rewarded through hiring and promotion along the way. The resulting management blindspot becomes a self-reinforcing norm that no single leader can shake.

This is why it's caustic to product success and brand value to allow a culture of disrespect towards users in favour of venerating developers (e.g., "DX"). The only antidote is to stamp it out wherever it arises by demanding user-focused realism in decision making.

To get unstuck, managers and tech leads that become wedded to frameworkism have to work through a series of easily falsified rationales offered by Over Reactors in service of their chosen ideology. Note, as you read, that none of these protests put the user experience front-and-centre. This admission by omission is a reliable property of the conversations these sketches are drawn from.

This chestnut should be answered with the question: "for how long?"

The dominant outcome of fling-stuff-together-with-NPM, feels-fine-on-my-$3K-laptop development is to get teams stuck in the mud much sooner than anyone expects.

From major accessibility defects to brand-risk levels of lousy performance, the consequence of this approach has been crossing my desk every week for a decade. The one thing I can tell you that all of these teams and products have in common is that they are not moving faster.

Brands you've heard of and websites you used this week have come in for help, which we've dutifully provided. The general prescription is "spend a few weeks/months unpicking this Gordian knot of JavaScript."

The time spent in remediation does fix the revenue and accessibility problems that JavaScript exuberance causes, but teams are dead in the water while they belatedly add ship gates, bundle size controls, and processes to prevent further regression.

This necessary, painful, and expensive remediation generally comes at the worst time and with little support, owing to the JavaScript-industrial-complex's omerta. Managers trapped in these systems experience a sinking realisation that choices made in haste are not so easily revised. Complex, inscrutable tools introduced in the "move fast" phase are now systems that teams must dedicate time to learn, understand deeply, and affirmatively operate. All the while the pace of feature delivery is dramatically reduced.

This isn't what managers think they're signing up for when accepting "but we need to move fast!"

But let's take the assertion at face value and assume a team that won't get stuck in the ditch (🤞): the idea embedded in this statement is, roughly, that there isn't time to do it right (so React?), but there will be time to do it over.

This is in direct opposition to identifying product-market-fit. After all, the way to find out who will want your product is to make it as widely available as possible, then to add UX flourishes.

Teams I've worked with are frequently astonished to find that removing barriers to use opens up new markets and leads to growth in parts of a world they had under-valued.

Now, if you're selling Veblen goods, by all means, prioritise anything but accessibility. But in literally every other category, the returns to quality can be best understood as clarity of product thesis. A low-quality experience — which is what is being proposed when React is offered as an expedient — is a drag on the core growth argument for your service. And if the goal is scale, rather than exclusivity, building for legacy desktop browsers that Microsoft won't even sell you at the cost of harming the experience for the majority of the world's users is a strategic error.

To a statistical certainty, you aren't making Facebook. Your problems likely look nothing like Facebook's early 2010s problems, and even if they did, following their lead is a terrible idea.

And these tools aren't even working for Facebook. They just happen to be a monopoly in various social categories and so can afford to light money on fire. If that doesn't describe your situation, it's best not to over index on narratives premised on Facebook's perceived success.

React developers are web developers. They have to operate in a world of CSS, HTML, JavaScript, and DOM. It's inescapable. This means that React is the most fungible layer in the stack. Moving between templating systems (which is what JSX is) is what web developers have done fluidly for more than 30 years. Even folks with deep expertise in, say, Rails and ERB, can easily knock out Django or Laravel or WordPress or 11ty sites. There are differences, sure, but every web developer is a polyglot.

React knowledge is also not particularly valuable. Any team familiar with React's...baroque...conventions can easily master Preact, Stencil, Svelte, Lit, FAST, Qwik, or any of a dozen faster, smaller, reactive client-side systems that demand less mental bookkeeping.

The tech industry has just seen many of the most talented, empathetic, and user-focused engineers I know laid off for no reason other than their management couldn't figure out that there would be some mean reversion post-pandemic. Which is to say, there's a fire sale on talent right now, and you can ask for whatever skills you damn well please and get good returns.

If you cannot attract folks who know web standards and fundamentals, reach out. I'll help you formulate recs, recruiting materials, hiring rubrics, and promotion guides to value these folks the way you should: as unreasonably effective collaborators who will do incredible good for your products at a fraction of the cost of solving the next problem the React community is finally acknowledging that React caused.

But even if you decide you want to run interview loops to filter for React knowledge, that's not a good reason to use it! Anyone who can master the dark thicket of build tools, typescript foibles, and the million little ways that JSX's fork of HTML and JavaScript syntax trips folks up is absolutely good enough to work in a different system.

Heck, they're already working in an ever-shifting maze of faddish churn. The treadmill is real, which means that the question isn't "will these folks be able to hit the ground running?" (answer: no, they'll spend weeks learning your specific setup regardless), it's "what technologies will provide the highest ROI over the life of our team?"

Given the extremely high costs of React and other frameworkist prescriptions, the odds that this calculus will favour the current flavour of the week over the lifetime of even a single project are vanishingly small.

It makes me nauseous to hear managers denigrate talented engineers, and there seems to be a rash of it going around. The idea that folks who come out of bootcamps — folks who just paid to learn whatever was on the syllabus — aren't able or willing to pick up some alternative stack is bollocks.

Bootcamp grads might be junior, and they are generally steeped in varying strengths of frameworkism, but they're not stupid. They want to do a good job, and it's management's job to define what that is. Many new grads might know React, but they'll learn a dozen other tools along the way, and React is by far the most (unnecessarily) complex of the bunch. The idea that folks who have mastered the horrors of useMemo and friends can't take on board DOM lifecycle methods or the event loop or modern CSS is insulting. It's unfairly stigmatising and limits the organisation's potential.

In other words, definitionally atrocious management.

For more than a decade, the core premise of frameworkism has been that client-side resources are cheap (or are getting increasingly inexpensive) and that it is, therefore, reasonable to trade some end-user performance for developer convenience.

This has been an absolute debacle. Since at least 2012, the rise of mobile falsified this contention, and (as this blog has meticulously catalogued) we are only just starting to turn the corner.

The frameworkist assertion that "everyone has fast phones" is many things, but first and foremost it's an admission that the folks offering it don't know what they're talking about — and they hope you don't either.

No business trying to make it on the web can afford what they're selling, and you are under no obligation to offer your product as sacrifice to a false god.

This is, at best, a comforting fiction.

At worst, it's a knowing falsity that serves to omit the variability in React-based stacks because, you see, React isn't one thing. It's more of a lifestyle, complete with choices to make about React itself (function components or class components?) languages and compilers (typescript or nah?), package managers and dependency tools (npm? yarn? pnpm? turbo?), bundlers (webpack? esbuild? swc? rollup?), meta-tools (vite? turbopack? nx?), "state management" tools (redux? mobx? apollo? something that actually manages state?) and so on and so forth. And that's before we discuss plugins to support different CSS transpilation, among other optional side-quests frameworkists insist are necessary.

Across more than 100 consulting engagements, I've never seen two identical React setups, save smaller cases where folks had yet to change the defaults of Create React App — which itself changed dramatically over the years before finally being removed from the React docs as the best way to get started.

There's nothing standard about any of this. It's all change, all the time, and anyone who tells you differently is not to be trusted.

Hopefully, if you've made it this far, you'll forgive a digression into how the "React is industry standard" misdirection became so embedded.

Given the overwhelming evidence that this stuff isn't even working on the sites of the titular React poster children, how did we end up with React in so many nooks and crannies of contemporary frontend?

Pushy know-it-alls, that's how. Frameworkists have a way of hijacking every conversation with assertions like "virtual DOM is fast" without ever understanding anything about how browsers work, let alone the GC costs of their (extremely chatty) alternatives. This same ignorance allows them to confidently assert that React is "fine" when cheaper alternatives exist in every dimension.

These are not serious people. You do not have to entertain arguments offered without evidence. But you do have to oppose them and create data-driven structures that put users first. The long-term costs of these errors are enormous, as witnessed by the parade of teams needing our help to achieve minimally decent performance using stacks that were supposed to be "performant" (sic).

Which part, exactly? Be extremely specific. Which packages are so valuable, yet wedded entirely to React, that a team should not entertain alternatives? Do they really not work with Preact? How much money is exactly the right amount to burn to use these libraries? Because that's the debate.

Even if you get the benefits of "the ecosystem" at Time 0, why do you think that will continue to pay out at T+1? Or T+N?

Every library presents a separate, stochastic risk of abandonment. Even the most heavily used systems fall out of favour with the JavaScript-industrial-complex's in-crowd, stranding you in the same position as you'd have been in if you accepted ownership of more of your stack up-front, but with less experience and agency. Is that a good trade? Does your boss agree?

And how's that "CSS-in-JS" adventure working out? Still writing class components, or did you have a big forced (and partial) migration that's still creating headaches?

The truth is that every single package that is part of a repo's devDependencies is, or will be, fully owned by the consumer of the package. The only bulwark against uncomfortable surprises is to consider NPM dependencies a high-interest loan collateralized by future engineering capacity.

The best way to prevent these costs spiralling out of control is to fully examine and approve each and every dependency for UI tools and build systems. If your team is not comfortable agreeing to own, patch, and improve every single one of those systems, they should not be part of your stack.

Do you feel lucky, punk? Do you?

Because you'll have to be lucky to beat the odds.

Sites built with Next.js perform materially worse than those from HTML-first systems like 11ty, Astro, et al.

It simply does not scale, and the fact that it drags React behind it like a ball and chain is a double demerit. The chonktastic default payload of delay-loaded JS in any Next.js site will compete with ads and other business-critical deferred content for bandwidth, and that's before any custom components or routes are added. Even when using React Server Components. Which is to say, Next.js is a fast way to lose a lot of money while getting locked in to a VC-backed startup's proprietary APIs.

Next.js starts bad and only gets worse from a shocking baseline. No wonder the only Next sites that seem to perform well are those that enjoy overwhelmingly wealthy user bases, hand-tuning assistance from Vercel, or both.

So, do you feel lucky?

React Native is a good way to make a slow app that requires constant hand-tuning and an excellent way to make a terrible website. It has also been abandoned by its poster children.

Companies that want to deliver compelling mobile experiences into app stores from the same codebase as their website are better served investigating Trusted Web Activities and PWABuilder. If those don't work, Capacitor and Cordova can deliver similar benefits. These approaches make most native capabilities available, but centralise UI investment on the web side, providing visibility and control via a single execution path. This, in turn, reduces duplicate optimisation and accessibility headaches.

These are essential guides for frontend realism. I recommend interested tech leads, engineering managers, and product managers digest them all:

These pieces are from teams and leaders that have succeeded in outrageously effective ways by applying the realist tenets of looking around for themselves and measuring. I wish you the same success.

Thanks to Mu-An Chiou, Hasan Ali, Josh Collinsworth, Ben Delarre, Katie Sylor-Miller, and Mary for their feedback on drafts of this post.


  1. Why not React? Dozens of reasons, but a shortlist must include:

    • React is legacy technology. It was built for a world where IE 6 still had measurable share, and it shows.
    • Virtual DOM was never fast.
      • React was forced to back away from misleading performance claims almost immediately.11
      • In addition to being unnecessary to achieve reactivity, React's diffing model and poor support for dataflow management conspire to regularly generate extra main-thread work in the critical path. The "solution" is to learn (and zealously apply) a set of extremely baroque, React-specific solutions to problems React itself causes.
      • The only (positive) contribution to performance that React's doubled-up work model can, in theory, provide is a structured lifecycle that helps programmers avoid reading back style and layout information at the moments when it's most expensive.
      • In practice, React does not prevent forced layouts and is not able to even warn about them. Unsurprisingly, every React app that crosses my desk is littered with layout thrashing bugs.
      • The only defensible performance claims Reactors make for their work-doubling system are phrased as a trade; e.g. "CPUs are fast enough now that we can afford to do work twice for developer convenience."
        • Except they aren't. CPUs stopped getting faster about the same time as Reactors began to perpetuate this myth. This did not stop them from pouring JS into the ecosystem as though the old trends had held, with predictably disastrous results. Sales volumes of the high-end devices that continue to get faster have stagnated over the past decade. Meanwhile, the low end exploded in volume while remaining stubbornly fixed in performance.
        • It isn't even necessary to do all the work twice to get reactivity! Every other reactive component system from the past decade is significantly more efficient, weighs less on the wire, and preserves the advantages of reactivity without creating horrible "re-render debugging" hunts that take weeks away from getting things done.
    • React's thought leaders have been wrong about frontend's constraints for more than a decade.
    • The money you'll save can be measured in truck-loads.
      • Teams that correctly cabin complexity to the server side can avoid paying inflated salaries to begin with.
      • Teams that do build SPAs can more easily control the costs of those architectures by starting with a cheaper baseline and building a mature performance culture into their organisations from the start.
    • Not for nothing, but avoiding React will insulate your team from the assertion-heavy, data-light React discourse.

    Why pick a slow, backwards-looking framework whose architecture is compromised to serve legacy browsers when smaller, faster, better alternatives with all of the upsides (and none of the downsides) have been production-ready and successful for years?

  2. Frontend web development, like other types of client-side programming, is under-valued by "generalists" who do not respect just how freaking hard it is to deliver fluid, interactive experiences on devices you don't own and can't control. Web development turns this up to eleven, presenting a wicked effective compression format for UIs (HTML & CSS) but forcing experiences to load at runtime across high-latency, narrowband connections. To low-end devices. With no control over which browser will execute the code.

    And yet, browsers and web developers frequently collude to deliver outstanding interactivity under these conditions. Often enough, in fact, that "generalists" don't give a second thought to the miracle of HTML-centric Wikipedia and MDN articles loading consistently quickly, even as they gleefully clog those narrow pipes with JavaScript payloads so large that they can't possibly deliver similarly good experiences. All because they neither understand nor respect client-side constraints.

    It's enough to make thoughtful engineers tear their hair out.

  3. Tom Stoppard's classic quip that "it's not the voting that's democracy; it's the counting" chimes with the importance of impartial and objective criteria for judging the results of bake offs.

    I've witnessed more than my fair share of stacked-deck proof-of-concept pantomimes, often inside large organisations with tremendous resources and managers who say all the right things. But honesty demands more than lip service.

    Organisations looking for a complicated way to excuse pre-ordained outcomes should skip the charade. It will only make good people cynical and increase resistance. Teams that want to set bales of benjamins on fire because of frameworkism shouldn't be afraid to say what they want.

    They were going to get it anyway; warts and all.

  4. An example of easy cut lines for teams considering contemporary development might be browser support versus bundle size.

    In 2024, no new application will need to support IE or even legacy versions of Edge. They are not a measurable part of the ecosystem. This means that tools that took the design constraints imposed by IE as a given can be discarded from consideration. The extra client-side weight they require to service IE's quirks makes them uncompetitive from a bundle size perspective.

    This eliminates React, Angular, and Ember from consideration without a single line of code being written; a tremendous savings of time and effort.

    Another example is lock-in. Do systems support interoperability across tools and frameworks? Or will porting to a different system require a total rewrite? A decent proxy for this choice is Web Components support.

    Teams looking to avoid lock-in can remove frameworks from consideration that do not support Web Components as both an export and import format. This will still leave many contenders, and management can rest assured they will not leave the team high-and-dry.14
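    To make the lock-in test concrete, here is a minimal sketch of the kind of artefact it is about: a plain custom element that any framework (or no framework at all) can render and consume. The element name and behaviour are illustrative, not taken from any particular library.

    ```ts
    // A framework-agnostic component: usable from plain HTML as
    // <user-badge name="Ada"></user-badge>, or rendered by any framework
    // that can emit standard DOM. Nothing here ties the markup to a vendor.
    class UserBadge extends HTMLElement {
      // Re-render when the observed attribute changes.
      static observedAttributes = ["name"];

      connectedCallback() {
        this.render();
      }

      attributeChangedCallback() {
        this.render();
      }

      private render() {
        const name = this.getAttribute("name") ?? "anonymous";
        this.textContent = `Hello, ${name}!`;
      }
    }

    // Guard against double registration if the script loads twice.
    if (!customElements.get("user-badge")) {
      customElements.define("user-badge", UserBadge);
    }
    ```

    Frameworks that can both emit components in this form and consume them keep migration costs bounded; frameworks that cannot put a rewrite between the team and any future change of direction.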

  5. The stories we hear when interviewing members of these teams have an unmistakable buck-passing flavour. Engineers will claim (without evidence) that React is a great13 choice for their blog/e-commerce/marketing-microsite because "it needs to be interactive" — by which they mean it has a Carousel and maybe a menu and some parallax scrolling. None of this is an argument for React per se, but it can sound plausible to managers who trust technical staff about technical matters.

    Others claim that "it's an SPA". But should it be a Single Page App? Most are unprepared to answer that question for the simple reason they haven't thought it through.9:2

    For their part, contemporary product managers seem to spend a great deal of time doing things that do not have any relationship to managing the essential qualities of their products.

    Most need help making sense of the RUM data already available to them. Few are in touch with the device and network realities of their current and future (🤞) users. PMs that clearly articulate critical-user-journeys for their teams are as rare as hen's teeth. And I can count on one hand the teams that have run bake offs — without resorting to binary.

  6. It's no exaggeration to say that team leaders encountering evidence that their React (or Angular, etc.) technology choices are letting down users and the business go through some things.

    Following the herd is an adaptation to prevent their specific decisions from standing out — tall poppies and all that — and it's uncomfortable when those decisions receive belated scrutiny. But when the evidence is incontrovertible, needs must. This creates cognitive dissonance.

    Few are so entitled and callous that they wallow in denial. Most want to improve. They don't come to work every day to make a bad product; they just thought the herd knew more than they did. It's disorienting when that turns out not to be true. That's more than understandable.

    Leaders in this situation work through the stages of grief in ways that speak to their character.

    Strong teams own the reality and look for ways to learn more about their users and the constraints that should shape product choices. The goal isn't to justify another rewrite, but to find targets the team should work towards, breaking down the problem into actionable steps. This is hard and often unfamiliar work, but it is rewarding. Setting accurate goalposts helps teams take credit as they make progress remediating the current mess. These are all markers of teams on the way to improving their performance management maturity.

    Some get stuck in anger, bargaining, or depression. Sadly, these teams are taxing to help. Supporting engineers and PMs through emotional turmoil is a big part of a performance consultant's job. The stronger the team's attachment to React community narratives, the harder it can be to accept responsibility for defining team success in terms of user success. But that's the only way out of the deep hole they've dug.

    Consulting experts can only do so much. Tech leads and managers that continue to prioritise "Developer Experience" (without metrics, obviously) and "the ecosystem" (pray tell, which parts?) in lieu of user outcomes can remain beyond reach, no matter how much empathy and technical analysis is provided. Sometimes, you have to cut bait and hope time and the costs of ongoing failure create the necessary conditions for change.

  7. Most are substituting (perceived) popularity for the work of understanding users and their needs. Starting with user needs creates constraints to work backwards from.

    Instead of doing this work-back, many sub in short-term popularity contest winners. This goes hand-in-glove with a predictable failure to deeply understand business goals.

    It's common to hear stories of companies shocked to find the PHP/Python/etc. systems they are replacing with React will require multiples of currently allocated server resources for the same userbase. The impacts of inevitably worse client-side lag cost dearly, but only show up later. And all of these costs are on top of the salaries for the bloated teams frameworkists demand.

    One team shared that avoidance of React was tantamount to a trade secret. If their React-based competitors understood how expensive React stacks are, they'd lose their (considerable) margin advantage. Wild times.

  8. UIs that work well for all users aren't charity; they're hard-nosed business choices about market expansion and development cost.

    Don't be confused: every time a developer makes a claim without evidence that a site doesn't need to work well on a low-end device, understand it as a true threat to your product's success, if not your own career.

    The point of building a web experience is to maximize reach for the lowest development outlay, otherwise you'd build a bunch of native apps for every platform instead. Organisations that aren't spending bundles to build per-OS proprietary apps...well...aren't doing that. In this context, unbacked claims about why it's OK to exclude large swaths of the web market to introduce legacy desktop-era frameworks designed for browsers that don't exist any more work directly against strategy. Do not suffer them gladly.

    In most product categories, quality and reach are the product attributes web developers can impact most directly. It's wasteful, bordering on insubordinate, to suggest that not delivering those properties is an effective use of scarce funding.

  9. Should a site be built as a Single Page App?

    A good way to work through this question is to ask "what's the point of an SPA?". The answer is that they can (in theory) reduce interaction latency, which only pays off when there are many interactions per session. It's also an (implicit) claim about the costs of loading code up-front versus on-demand. This sets us up to create a rule of thumb.

    (Figure: "Should this site be built as a Single Page App?" A decision tree. Hint: at best, maybe.)

    Sites should be built as SPAs, or with SPA-premised technologies, if and only if:

    • They are known to have long sessions (more than ten minutes) on average
    • More than ten updates are applied to the same (primary) data

    This instantly disqualifies almost every e-commerce experience, for example, as sessions generally involve traversing pages with entirely different primary data rather than updating a subset of an existing UI. Most also feature average sessions that fail the length and depth tests. Other common categories (blogs, marketing sites, etc.) are even easier to disqualify. At most, these categories can stand a dose of progressive enhancement (but not too much!) owing to their shallow sessions.

    What's left? Productivity and social apps, mainly.
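    As a sketch of how a team might apply this rule of thumb to their own analytics (the record shape and thresholds here are illustrative; substitute whatever your RUM or analytics export actually provides):

    ```ts
    // Hypothetical per-session analytics record.
    interface SessionStats {
      durationMinutes: number;    // total session length
      primaryDataUpdates: number; // updates applied to the same primary data
    }

    // Rule of thumb from above: SPA architectures are only worth considering
    // when average sessions are both long (> 10 minutes) and deep (> 10
    // updates to the same primary data).
    function spaMightBeJustified(sessions: SessionStats[]): boolean {
      if (sessions.length === 0) return false;

      const avg = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
      const avgDuration = avg(sessions.map((s) => s.durationMinutes));
      const avgUpdates = avg(sessions.map((s) => s.primaryDataUpdates));

      return avgDuration > 10 && avgUpdates > 10;
    }

    // Shallow e-commerce-style sessions fail both tests:
    console.log(
      spaMightBeJustified([
        { durationMinutes: 3, primaryDataUpdates: 1 },
        { durationMinutes: 6, primaryDataUpdates: 2 },
      ]),
    ); // false
    ```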

    Of course, there are many sites with bi-modal session types or sub-apps, all of which might involve different tradeoffs. For example, a blogging site is two distinct systems combined by a database/CMS. The first is a long-session, heavy-interaction post-writing and editing interface for a small set of users. The other is a short-session interface for a much larger audience who mostly interact by loading a page and then scrolling. As the browser, not developer code, handles scrolling, we omit it from interaction counts. For most sessions, this leaves us only a single data update (the initial page load) to divide all costs by.

    If the denominator of our equation is always close to one, it's nearly impossible to justify extra weight in anticipation of updates that will likely never happen.12

    To formalise slightly, we can understand average latency as the sum of latencies in a session, divided by the number of interactions. For multi-page architectures, a session's average latency (Lavg) is simply the session's summed LCPs divided by the number of navigations in the session (N):

    $$ L^{m}_{\text{avg}} = \frac{\sum_{i=1}^{N} \mathrm{LCP}(i)}{N} $$

    SPAs need to add initial navigation latency to the latencies of all other session interactions (I). The total number of interactions in a session N is:

    $$ N = 1 + I $$

    The general form of SPA average latency is:

    $$ L_{\text{avg}} = \frac{\mathrm{latency}(\text{navigation}) + \sum_{i=1}^{I} \mathrm{latency}(i)}{N} $$

    We can handwave a bit and use INP for each individual update (via the Performance Timeline) as our measure of in-page update lag. This leaves some room for gamesmanship — the React ecosystem is famous for attempting to duck metrics accountability with scheduling shenanigans — so a real measurement system will need to substitute end-to-end action completion (including server latency) for INP, but this is a reasonable bootstrap.
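    As an illustration of that bootstrap, one plausible way to collect the raw inputs in the browser is via `PerformanceObserver` over the LCP and Event Timing entry types; the threshold and in-memory storage here are placeholders, not a complete RUM pipeline:

    ```ts
    // Buffer LCP candidates and per-interaction event timings so session
    // averages can be computed (and reported) later.
    const lcpCandidates: number[] = [];
    const interactionLatencies: number[] = [];

    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        // The last candidate observed is the page's final LCP value.
        lcpCandidates.push(entry.startTime);
      }
    }).observe({ type: "largest-contentful-paint", buffered: true });

    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        // Event Timing entries approximate per-interaction update latency.
        interactionLatencies.push(entry.duration);
      }
    }).observe({ type: "event", durationThreshold: 40, buffered: true });
    ```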

    INP also helpfully omits scrolling unless the programmer does something problematic. This is correct for the purposes of metric construction as scrolling gestures are generally handled by the browser, not application code, and our metric should only measure what developers control. SPA average latency simplifies to:

    $$ L^{s}_{\text{avg}} = \frac{\mathrm{LCP} + \sum_{i=1}^{I} \mathrm{INP}(i)}{N} $$
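    With both formulas in hand, turning logged values into comparable per-session averages is a one-liner per architecture. The numbers below are made up purely to show the shape of the comparison:

    ```ts
    // Multi-page session: L_avg = sum(LCP_i) / N
    function mpaAverageLatency(lcps: number[]): number {
      return lcps.reduce((a, b) => a + b, 0) / lcps.length;
    }

    // SPA session: L_avg = (LCP + sum(INP_i)) / (1 + I)
    function spaAverageLatency(initialLcp: number, inps: number[]): number {
      const total = initialLcp + inps.reduce((a, b) => a + b, 0);
      return total / (1 + inps.length);
    }

    // A shallow two-navigation MPA session vs. an SPA session with a heavier
    // first load and a single in-page update (all values in milliseconds):
    console.log(mpaAverageLatency([1200, 900])); // 1050
    console.log(spaAverageLatency(3500, [150])); // 1825
    ```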

    As a metric for architecture, this is simplistic and fails to capture variance, which SPA defenders will argue matters greatly. How might we incorporate it?

    Variance (σ²) across a session is straightforward to compute if we have logs of the latencies of all interactions. If we only had an assumed latency distribution (say, Erlang) to work from, assessing variance would take real effort, but complete logs reduce it to the usual population variance formula. Standard deviation (σ) is then just the square root:

    $$ \sigma^{2} = \frac{\sum_{x \in X} (x - \mu)^{2}}{N} $$

    Where μ is the mean (average) of the population X, the set of measured latencies in a session, and N is the number of measurements; the same calculation can then be aggregated across all sessions.
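    A sketch of that calculation over a single session's logged latencies (the values, again, are illustrative):

    ```ts
    // Population variance and standard deviation over a session's measured
    // latencies, mirroring the formula above.
    function populationVariance(latencies: number[]): number {
      const n = latencies.length;
      const mean = latencies.reduce((a, b) => a + b, 0) / n;
      return latencies
        .map((x) => (x - mean) ** 2)
        .reduce((a, b) => a + b, 0) / n;
    }

    const sessionLatencies = [3500, 150, 180, 220]; // heavy first load, light updates
    const variance = populationVariance(sessionLatencies);
    const stdDev = Math.sqrt(variance);
    console.log(Math.round(variance), Math.round(stdDev));
    ```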

    We can use these tools to compare architectures and their outcomes, particularly the effects of larger up-front payloads for SPA architectures on sites with shallow sessions. Suffice to say, the smaller the denominator (i.e., the shorter the session), the worse average latency will be for JavaScript-oriented designs, and the more sensitive variance will be to population-level effects of hardware and networks.

    A fuller exploration will have to wait for a separate post.

  10. Certain frameworkists will claim that their framework is fine for use in informational scenarios because their systems do "Server-Side Rendering" (a.k.a., "SSR").

    Parking, for a moment, discussion of the linguistic crime that "SSR" represents, we can reject these claims by applying a test: does the tool in question send a copy of a library to support SPA navigations down the wire by default?

    This test is helpful, as it shows us that React-based tools like Next.js are wholly unsuitable for this class of site, while React-friendly tools like Astro are appropriate.

    We lack a name for this test today, and I hope readers will suggest one.

  11. React's initial claims of good performance because it used a virtual DOM were never true, and the React team was forced to retract them by 2015. But like many zombie ideas, this one persists: there has been no discernible reduction in the rate at which junior engineers regurgitate the long-falsified claim as a reason to continue to choose React.

    How did such a baldly incorrect claim come to be offered in the first place? The options are unappetising; either the React team knew their work-doubling machine was not fast but allowed others to think it was, or they didn't know but should have.15

    Neither suggests the sort of grounded technical leadership that developers or businesses should invest heavily in.

  12. It should go without saying, but sites that aren't SPAs shouldn't use tools premised entirely on optimistic updates to client-side data, because sites that aren't SPAs shouldn't be paying the cost of creating a (separate, expensive) client-side data store that sits apart from the DOM's own representation of the HTML.

    Which is the long way of saying that if there's React or Angular in your blogware, 'ya done fucked up, son.

  13. When it's pointed out that React is, in fact, not great in these contexts, the excuses come fast and thick. It's generally less than 10 minutes before they're rehashing some variant of how some other site is fast (without traces to prove it, obvs), and it uses React, so React is fine.

    Thus begins an infinite regression of easily falsified premises.

    The folks dutifully shovelling this bullshit aren't consciously trying to invoke Brandolini's Law in their defence, but that's the net effect. It's exhausting and principally serves to convince the challenged party not that they should try to understand user needs and build to them, but instead that you're an asshole.

  14. Most managers pay lip service to the idea of preferring reversible decisions. Frustratingly, failure to put this into action is in complete alignment with social science research into the psychology of decision-making biases (open access PDF summary).

    The job of managers is to manage these biases. Working against them involves building processes and objective frames of reference to nullify their effects. It isn't particularly challenging, but it is work. Teams that do not build this discipline pay for it dearly, particularly on the front end, where we program the devil's computer.2:1

    But make no mistake: choosing React is a one-way door; an irreversible decision that is costly to relitigate. Teams that buy into React implicitly opt into leaky abstractions, like the timing quirks of React's unique (as in, nobody else has one, because it's costly and slow) synthetic event system, and non-portable concepts like portals. React-based products are stuck, and the paths out are challenging.

    Being stuck can seem comforting, but the long-run maintenance costs of being trapped in this decision are excruciatingly high. No wonder Over Reactors believe they should command a salary premium.

    Whatcha gonna do, switch?

  15. Where do I come down on this?

    My interactions with React team members over the years, combined with their confidently incorrect public statements about how browsers work, have convinced me that honest ignorance about their system's performance sat underneath misleading early claims.

    This was likely exacerbated by a competitive landscape in which their customers (web developers) were unable to judge the veracity of the assertions, and by deference to authority: surely Facebook wouldn't mislead folks?

    The need for an edge against Angular and other competitors also likely played a role. It's underappreciated how tenuous the positions of frontend and client-side framework teams are within Big Tech companies. The Closure library and compiler that powered Google's most successful web apps (Gmail, Docs, Drive, Sheets, Maps, etc.) was not staffed for most of its history. It was literally a 20% project that the entire company depended on. For the React team to justify headcount within Facebook, public success was likely essential.

    Understood in context, I don't entirely excuse the React team for their early errors, but they are understandable. What's not forgivable are the material and willful omissions by Facebook's React team once the evidence of terrible performance began to accumulate. The React team took no responsibility, did not explain the constraints that Facebook applied to their JavaScript-based UIs to make them perform as well as they do — particularly on mobile — and benefited greatly from pervasive misconceptions that continue to cast React in a better light than hard evidence can support.