11ty Hacks for Fun and Performance

From Infrequently Noted.
This blog really isn't just for beating up on Apple for the way it harms users, the web, standards, and society to maintain power and profits. So here's some fun stuff I've been doing in my 11ty setup to improve page performance.
Page-Specific Resources via Shortcodes and the 11ty Bundler
You know how it gets once you've got a mature 11ty setup: shortcodes proliferate, and some generate output that might depend on JS or CSS.
If your setup is anything like mine, you might also feel conflicted about including some of those resources globally. It's important to have them available for the posts that need them, but it's not great to bring, e.g., a charting library into every page when only 1 in 20 will need it.
This blog has grown many such shortcodes that generate expansions for features like:

- `plot.js`-based charts
Here, for instance, is how a Vimeo embed looks in the markdown of a page, using Nunjucks for Markdown pre-processing:
```njk
{% vimeo "VIDEOID", "TITLE" %}
```
This expands to:
```html
<lite-vimeo
  videoid="VIDEOID"
  videotitle="TITLE">
  <a href="http://vimeo.com/VIDEOID">TITLE</a>
</lite-vimeo>
```
But this also requires script; e.g.:
```html
<script type="module" async
  src="/assets/js/lite-vimeo/lite-vimeo.js">
</script>
```
And the list keeps growing. To avoid having the weight of these components clogging up every page, a system for selectively pulling in their code would be helpful.
It'd also be grand if we could make sure scripts appear just once, even if shortcodes are invoked multiple times per page. Included code should preferably also be located towards the top of the document.
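The dedupe half of that wishlist is conceptually simple: key bundle entries by their content, so adding the same snippet twice yields a single entry. A minimal sketch of the idea (hypothetical `Bundle` class, not any particular tool's implementation):

```javascript
// Sketch of content-keyed deduplication: a Set keys on the string
// value, so identical snippets collapse into one entry.
class Bundle {
  constructor() { this.entries = new Set(); }
  add(code) { this.entries.add(code.trim()); }
  toHTML() { return [...this.entries].join("\n"); }
}

let js = new Bundle();
js.add(`<script src="/assets/js/lite-vimeo/lite-vimeo.js"></script>`);
js.add(`<script src="/assets/js/lite-vimeo/lite-vimeo.js"></script>`);
console.log(js.toHTML()); // a single <script> tag; the duplicate collapsed
```

The 11ty Bundle plugin (below) handles this, plus output placement, for real.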
The 11ty Bundle plugin to the rescue!
Sort of.
At first glance, the 11ty Bundle plugin looks ideal for this, but we need to solve two problems the documentation doesn't cover:
- Shortcodes should auto-include the scripts they need, but not over-include them. How can we use the Bundle plugin's shortcodes from within other shortcodes? This matters so that individual pages don't have to remember to include scripts.
- How can we make this work with templates that use pagination?
The first problem turns out to be very simple because 11ty shortcodes are themselves callable functions. Somewhere in my `.eleventy.js` configuration, there's now a function like:

```js
function addToBundle(scope, bundle, code) {
  eleventyConfig.getPairedShortcode(bundle)
    .call(scope, code);
}
```
This works because 11ty's internals are dogmatic about not binding `this`, allowing tools like shortcodes to be regular 'ole functions that can be invoked from dynamic scopes via `call()` and `apply()`. This is awesome!
The Vimeo shortcode then calls `addToBundle` to make sure that the JS it depends on will get loaded:
```js
eleventyConfig.addShortcode("vimeo",
  function(id, title="") {
    addToBundle(this, "js", `
      <script type="module" async
        src="/assets/js/lite-vimeo/lite-vimeo.js">
      </script>
    `);
    let titleAttr = title ? ` videotitle="${title}" ` : "";
    return `
      <lite-vimeo videoid="${id}" ${titleAttr}>
        <a href="http://vimeo.com/${id}">${title}</a>
      </lite-vimeo>
    `;
  });
```
The real code is a tad more complex to handle things like proper escaping, script file versioning, stripping whitespace to avoid markdown issues, etc…but not much. Page templates then include the usual:
```html
<!DOCTYPE html>
<html>
  <head>
    <!-- ... -->
    {% getBundle "js" %}
    <!-- ... -->
  </head>
  <!-- ... -->
```
And this should be it!
Mo Pagination, Mo Problems
Except it didn't work on my homepage.
Why not? Mostly because the 11ty Bundle plugin was built for sane sites: projects where a single output page's content doesn't need to hoist bundle inputs from across multiple entries. But I'm using pagination, like a mug.
I've got a patch out that addresses this, hackily, and for now I'm using `package.json` overrides to target that branch. It seems to be working well enough here:
```jsonc
{
  "name": "infrequently.org",
  // ...
  "devDependencies": {
    "@11ty/eleventy": "3.1.2",
    "@11ty/eleventy-plugin-rss": "^2.0.4",
    "@11ty/eleventy-plugin-syntaxhighlight": "^5.0.2",
    // ...
  },
  "overrides": {
    "@11ty/eleventy-plugin-bundle":
      "https://github.com/slightlyoff/eleventy-plugin-bundle-pagination.git#pagination-aware"
  },
  // ...
}
```
Now I can build shortcodes that use other shortcodes to bundle code only for the pages that need it, trimming the default JS payload while leaving me free to build and use richer components as necessary.
Build Time Impact
The one downside of this approach has been an increase in build times.
I've worked to keep full re-builds under five seconds, and incremental builds under a second, on my main writing device (a recent Chromebook). The Bundle plugin adds a post-processing phase (via `@11ty/eleventy/html-transformer`) to builds, which tacks on two seconds in both scenarios. This blog generates ~1,500 pages in a build, so the per-page hit (a bit over a millisecond) isn't bad, but it's enough to be noticeable.
I will likely spend time getting this trimmed back down in the near future. If you've got a smaller site, I can recommend the bundler-with-generative-shortcodes approach. If your site is much larger, it may be worth adopting if you're already paying the price of a post-processing step. Otherwise, and as ever, it's worth measuring.
Scroll-Position Based Delayed Code Loading
Loading code only on the pages that need it is great, but you know what's even better? Only loading code when it's going to actually be needed.
For a lot of pages, it makes sense to load specific widgets only when users scroll down far enough to encounter their content. Normally this is the sort of thing folks lean on big frameworks to handle, but that's not how this blog rolls. Instead, we'll use an `IntersectionObserver` and a `MutationObserver` to:
- Locate scripts that want deferred loading.
- Use attributes on those scripts to identify which elements they should be invoked for.
- Watch page scrolling and trigger code loading when target elements get close enough to the viewport.
Taken together, this reduces code loaded up front that might otherwise contend with above-the-fold resources without sacrificing interactive features further down the page.
Here, for instance, was how some code from a recent blog post that needed charts looked before:
```njk
{% js %}
<script type="module"
  src="/assets/js/d3.min.js"></script>
<script type="module"
  src="/assets/js/plot.min.js"></script>
<script type="module">
// ...
genFeatureTypePlot("leading", true, {
  caption: "Chromium launches ... ahead of other engines."
});
// ...
</script>
{% endjs %}

#### Leading Launches by Year

<div id="leading"></div>
```
The Bundle plugin makes it simple to write code near the elements it targets and not worry about duplicates. That's helpful, and the Bundle plugin provides many tools for choosing where to output the gathered-up bits, but what I really want is to delay the fetching of those scripts until the user might plausibly benefit from them.
Here's what the revised code looks like:
```njk
{% js %}
<script type="io+module" data-for="leading"
  src="/assets/js/d3.min.js"></script>
<script type="io+module" data-for="leading"
  src="/assets/js/plot.min.js"></script>
<script type="io+module" data-for="leading">
// ...
genFeatureTypePlot("leading", true, {
  caption: "Chromium launches ... ahead of other engines."
});
// ...
</script>
{% endjs %}

#### Leading Launches by Year

<div id="leading"></div>
```
These scripts will still be hoisted into the `<head>`, but they won't execute because they use a `type` attribute the browser doesn't recognise. The `data-for` attribute provides the ID of the element that should trigger loading of the script. These are enough to build a scroll-based loader with.
Our loader uses a bit of inlined script at the top of the page to set up an `IntersectionObserver` to watch scrolling, and a collaborating `MutationObserver` to identify elements matching this description as the parser creates them.

Here's the meat of that snippet, loaded early in the document as a `<script type="module">`. Pardon the pidgin JavaScript style; the last thing I want is a JS transpiler as part of builds, so bytes matter:
```js
// Utilities to delay code until the next task
let rAF = requestAnimationFrame;
let doubleRaf = (func) => {
  rAF(() => { rAF(() => { func(); }); });
};

// Bookkeeping
let ioScripts = new Set();
let ioScriptsFor = new Map();
let triggerIDs = new Set();

// Record which scripts should wait on which elements
let processScript = (s) => {
  if (ioScripts.has(s)) { return; }
  ioScripts.add(s);
  let idFor = s.getAttribute("data-for");
  let isf = ioScriptsFor.get(idFor);
  if (!isf) {
    isf = [];
    ioScriptsFor.set(idFor, isf);
    triggerIDs.add(idFor);
  }
  isf.push(s);
};

// Handle existing scripts before setting up
// the MutationObserver
document.querySelectorAll(`script[type="io+module"]`)
  .forEach(processScript);

// Preload element template
let plt = document.createElement("link");
plt.setAttribute("rel", "modulepreload");

// For a given element, begin loading scripts
// that were waiting on it.
let triggerScripts = async (id) => {
  let scripts = ioScriptsFor.get(id);
  let head = document.head;
  for (let s of scripts) {
    if (!s.src) { continue; }
    // Get a preload request started
    let pl = plt.cloneNode(true);
    pl.setAttribute("href", s.src);
    head.append(pl);
  }
  for (let s of scripts) {
    // Clone because setting type alone does not
    // trigger evaluation.
    let sc = s.cloneNode(true);
    // Set type to an executable value
    sc.setAttribute("type", "module");
    if (sc.src) { // External scripts
      let lp = new Promise((res) => {
        sc.onload = res;
      });
      head.append(sc);
      await lp; // Serialisation handled above
    } else { // Inline modules
      head.append(sc);
    }
  }
};

let forObs = new IntersectionObserver(
  (entries) => {
    entries.forEach((e) => {
      if (e.intersectionRatio > 0) {
        forObs.unobserve(e.target);
        doubleRaf(() => {
          triggerScripts(e.target.id);
        });
      }
    });
  },
  {
    // If we boot far down the page, e.g. via back
    // button scroll restoration, load eagerly.
    // Else, watch two screens ahead:
    rootMargin: "1000% 0px 200% 0px",
  }
);

// When new elements are added, watch for scripts with the
// right type and elements that scripts are waiting on.
let documentMo = new MutationObserver((records) => {
  for (let r of records) {
    if (!r.addedNodes) { continue; }
    for (let n of r.addedNodes) {
      if (n.nodeType === 1) { // Elements only
        if ((n.tagName === "SCRIPT") &&
            (n.type === "io+module")) {
          processScript(n);
        }
        // If we find elements in the watch list,
        // observe their scrolling relative to
        // the viewport
        if (n.id && triggerIDs.has(n.id)) {
          forObs.observe(n);
        }
      }
    }
  }
});
documentMo.observe(document.documentElement, {
  childList: true,
  subtree: true,
});
```
That's the whole job done in just 110 lines of modern platform code, including utility functions and 20 lines of comments.
A few small tricks of note:
- `<link rel="modulepreload" href="...">` allows us to get network requests started without waiting on previous scripts to download, parse, and evaluate.
- Limiting support to modules enables us to get async, but ordered, execution semantics.
- The early `querySelectorAll()` ensures hoisted script blocks that occur before the inlined script are handled correctly.
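That "async, but ordered" property is worth spelling out: external scripts are appended one at a time, with each load awaited before the next begins, so completion order matches source order even though each load is asynchronous. A stripped-down sketch of the pattern, with a hypothetical `loadInOrder` helper and simulated loads:

```javascript
// Await each "load" before starting the next; order is preserved
// regardless of how long any individual load takes.
async function loadInOrder(sources, load) {
  const order = [];
  for (const src of sources) {
    await load(src); // resolves when this "script" finishes loading
    order.push(src);
  }
  return order;
}

// Simulated loads finish after random delays, yet order is preserved.
const delay = (ms) => new Promise((r) => setTimeout(r, ms));
loadInOrder(["d3.min.js", "plot.min.js"], () => delay(Math.random() * 10))
  .then((order) => console.log(order.join(", ")));
// Prints: d3.min.js, plot.min.js
```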
As this was enough for the moment, I haven't implemented a few obvious improvements:
- This technique could be combined with shortcodes-calling-the-bundler to create shortcodes that dynamically load their code based on scroll position, and only on pages that need them.
- The `id`-based system is a bit fugly and can easily be upgraded to use any simple CSS selector that `matches()` supports.
Faster CSS Selectors for 11ty Syntax Highlighting
It's a small thing, but I do try to optimise the CSS selectors used on this blog, as its pages are element-heavy and don't use Shadow DOM to encapsulate much of the style-recalculation work.
It was something of a surprise, then, to find that the `@11ty/eleventy-plugin-syntaxhighlight` module's basic styles, cargo-culted many years ago, were tanking style recalculation performance. How? Slow attribute rules:
```css
code[class*="language-"],
pre[class*="language-"] {
  color: #f8f8f2;
  background: none;
  text-shadow: 0 1px rgba(0, 0, 0, 0.3);
  /* ... */
}
```
Thanks to the prevalence of inline `<code>` blocks in my writing, the Selector Stats panel showed a fair few slow-path misses.
Thankfully, the fix is simple. In the CSS I switched to faster whole-attribute selectors:
```css
code[highlighted],
pre[highlighted] {
  color: #f8f8f2;
  background: none;
  text-shadow: 0 1px rgba(0, 0, 0, 0.3);
  /* ... */
}
```
Next, I used the configuration options available in the plugin (which I only figured out from reading the source) to specify that those attributes be added to the `<pre>`s and `<code>`s generated by the syntax highlighter:
```js
import syntaxHighlight from
  "@11ty/eleventy-plugin-syntaxhighlight";

// ...
eleventyConfig.addPlugin(syntaxHighlight, {
  // For faster CSS selectors
  preAttributes: { highlighted: "highlighted" },
  codeAttributes: { highlighted: "highlighted" },
});
```
Now the browser doesn't have to attempt a slow substring search every time it encounters one of these elements.
There's more to do in terms of selector optimisation on this site, but this was a nice quick win.
Lean Rada's CSS-only Low-Quality Image Previews
The history is boring, but suffice it to note that the `<img>` helpers this site uses predate the official 11ty Image transform plugin, and are tuned to generate URLs that work with Netlify's Image CDN. This means I've also been responsible for generating my own previews for images that haven't loaded yet, and for devising strategies for displaying and animating them.
This has been, by turns, fun and frustrating.
The system relies on (you guessed it) shortcodes, which both produce a list of images without previews and consume cached preview data to inline the scaled-down versions. Historically this worked by using `sharp` to generate low-res WebP (then AVIF) base64-encoded values that got blurred. I played a bit with BlurHash and ThumbHash, but the need for a `<canvas>` element was unattractive.
A better answer would have relied on CSS custom paint, but between Safari being famously rekt (and representing a large fraction of this blog's readership) and the Paint Worklet context missing the `ImageData()` constructor, it never felt like a workable approach.
But as of this year, there's a new kid in town: Lean Rada's badass CSS-only LQIP approach.
That system is now implemented, which does a lot to shrink the HTML payloads of pages, as well as speeding up raster of the resulting image previews. This is visible in detailed traces, where the layout phase no longer has to wait for a background thread to synchronously decode image literals.
It took a few weekends of playing around to get it going correctly, as the code linked from Lean's blog post is not what they're using now. The colourspace conversion code they're using is also inaccurate, so attempts to replace it with `color-space` produced visually incorrect results. Using the exact code they do for RGB-to-Lab conversion is necessary to generate the correct effect, and dialling in those differences was time-consuming.
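For context on why the exact conversion matters: one common sRGB-to-Lab-family conversion is Björn Ottosson's published OKLab transform, sketched below. I'm not claiming this is what Lean's code does; the point is that even small deviations in the transfer function or matrix constants visibly shift the resulting values, which is exactly why swapping in a different library changed the output.

```javascript
// sRGB (components in 0–1) → OKLab, per Björn Ottosson's matrices.
function srgbToOklab(r, g, b) {
  // Undo the sRGB transfer function to get linear-light values
  const lin = (c) =>
    c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  [r, g, b] = [lin(r), lin(g), lin(b)];
  // Linear RGB → LMS cone response, then a cube-root nonlinearity
  const l = Math.cbrt(0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b);
  const m = Math.cbrt(0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b);
  const s = Math.cbrt(0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b);
  // LMS → OKLab [lightness, a, b]
  return [
    0.2104542553 * l + 0.7936177850 * m - 0.0040720468 * s,
    1.9779984951 * l - 2.4285922050 * m + 0.4505937099 * s,
    0.0259040371 * l + 0.7827717662 * m - 0.8086757660 * s,
  ];
}

console.log(srgbToOklab(1, 1, 1)[0].toFixed(2)); // white has lightness ~1
```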
Happy to make the code I use for this available upon request, but it's not amazing, and you really should go read Lean's blog post for yourself. It's a masterpiece.
Bonus Hack: Global Acronyms for Better `<abbr>`s
I recently added `markdown-it-abbr` to my configuration to make some technical writing a bit more accessible. Across a few posts, this ended up requiring a lot of repetition for terms like “W3C”, “IETF”, etc.
This was both a bit time-consuming and error-prone. What if, I wondered, it were possible to centralise them?
Turns out it's trivial! My setup pre-processes markdown as Nunjucks (via `markdownTemplateEngine: "njk"`), which makes the full range of directives available, including…`include`.

This means I can just create a single file with commonly used acronyms and `include` it from every page; the physical location is `_includes/acronyms.md`:

```njk
{% include "acronyms.md" %}
```
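The included file itself is just a list of `markdown-it-abbr` definitions; a hypothetical `acronyms.md` might contain:

```md
*[W3C]: World Wide Web Consortium
*[IETF]: Internet Engineering Task Force
*[TTFB]: Time To First Byte
```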
This doesn't improve performance, but has been hugely helpful for consistency.
Does It Work?
Putting it all together, what do we get?
On a page which previously saw contention between chart scripts, above-the-fold fonts, and initial layout work, the wins have been heartening on a low-spec test bench.
The long tent pole is now Netlify's pokey connection setup and TTFB, and I'm content (if not happy) with that.