Reading List
The most recent articles from a list of feeds I subscribe to.
The Cicada Principle, revisited with CSS variables
Many of today’s web crafters were not writing CSS at the time Alex Walker’s landmark article The Cicada Principle and Why it Matters to Web Designers was published in 2011. Last I heard of it was in 2016, when it was used in conjunction with blend modes to pseudo-randomize backgrounds even further.
So what is the Cicada Principle and how does it relate to web design in a nutshell? It boils down to: when using repeating elements (tiled backgrounds, different effects on multiple elements etc), using prime numbers for the size of the repeating unit maximizes the appearance of organic randomness. Note that this only works when the parameters you set are independent.
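To make this concrete, here is a minimal sketch of the original background trick (the tile images are hypothetical): because the layer widths are distinct primes, the combined pattern only lines up again every 29 × 37 × 53 = 56,869 pixels, far wider than any screen.
.cicada-tiles {
    /* hypothetical tile images; each layer's width is a prime */
    background-image: url(tile-29px.png), url(tile-37px.png), url(tile-53px.png);
    background-size: 29px auto, 37px auto, 53px auto;
}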
When I recently redesigned my blog, I ended up using a variation of the Cicada principle to pseudo-randomize the angles of code snippets. I didn’t think much of it until I saw this tweet:
https://twitter.com/StuRobson/status/1273904521132072960
This made me think: hey, maybe I should actually write a blog post about the technique. After all, the technique itself is useful for way more than angles on code snippets.
The main idea is simple: You write your main rule using CSS variables, and then use :nth-of-*() rules to set these variables to something different every N items. If you use enough variables, and choose your Ns for them to be prime numbers, you reach a good appearance of pseudo-randomness with relatively small Ns.
In the case of code samples, I only have two different top cuts (going up or going down) and two different bottom cuts (same), which produce 2 * 2 = 4 different shapes. Since I only had four shapes, I wanted to maximize the pseudo-randomness of their order. A first attempt looks like this:
pre {
    clip-path: polygon(var(--clip-top), var(--clip-bottom));
    --clip-top: 0 0, 100% 2em;
    --clip-bottom: 100% calc(100% - 1.5em), 0 100%;
}
pre:nth-of-type(odd) {
    --clip-top: 0 2em, 100% 0;
}
pre:nth-of-type(3n + 1) {
    --clip-bottom: 100% 100%, 0 calc(100% - 1.5em);
}
This way, the exact sequence of shapes repeats every 2 * 3 = 6 code snippets. Also, the alternative --clip-bottom doesn’t really get the same visibility as the others, being present only 33.333% of the time. However, if we just add one more selector:
pre {
    clip-path: polygon(var(--clip-top), var(--clip-bottom));
    --clip-top: 0 0, 100% 2em;
    --clip-bottom: 100% calc(100% - 1.5em), 0 100%;
}
pre:nth-of-type(odd) {
    --clip-top: 0 2em, 100% 0;
}
pre:nth-of-type(3n + 1),
pre:nth-of-type(5n + 1) {
    --clip-bottom: 100% 100%, 0 calc(100% - 1.5em);
}
Now the exact same sequence of shapes repeats every 2 * 3 * 5 = 30 code snippets, probably way more than I will have in any article. And it’s more fair to the alternate --clip-bottom, which now gets 1/3 + 1/5 - 1/15 = 46.67%, which is almost as much as the alternate --clip-top gets!
You can explore this effect in this codepen:
https://codepen.io/leaverou/pen/8541bfd3a42551f8845d668f29596ef9?editors=1100
Or, to better explore how different CSS creates different pseudo-randomness, you can use this content-less version with three variations:
https://codepen.io/leaverou/pen/NWxaPVx
Of course, the illusion of randomness is much better with more shapes, e.g. if we introduce a third type of edge we get 3 * 3 = 9 possible shapes:
https://codepen.io/leaverou/pen/dyGmbJJ?editors=1100
I also used primes 7 and 11, so that the sequence repeats every 77 items. In general, the larger primes you use, the better the illusion of randomness, but you need to include more selectors, which can get tedious.
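For instance, replacing the odd/3n + 1/5n + 1 rules above, a third value per edge could be wired up like this (a sketch with assumed values, not the pen’s exact code), with the base values acting as the first value of each cycle:
pre:nth-of-type(7n + 2) { --clip-top: 0 2em, 100% 0; }
pre:nth-of-type(7n + 4) { --clip-top: 0 1em, 100% 1em; }
pre:nth-of-type(11n + 3) { --clip-bottom: 100% 100%, 0 calc(100% - 1.5em); }
pre:nth-of-type(11n + 7) { --clip-bottom: 100% calc(100% - 0.75em), 0 calc(100% - 0.75em); }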
Other examples
So this got me thinking: What else would this technique be cool on? Especially if we include more values as well, we can pseudo-randomize the result itself better, and not just the order of only 4 different results.
So I did a few experiments.
Pseudo-randomized color swatches
https://codepen.io/leaverou/pen/NWxXQKX
Pseudo-randomized color swatches, with variables for hue, saturation, and lightness.
https://codepen.io/leaverou/pen/RwrLPer
Which one looks more random? Why do you think that is?
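The basic pattern behind both pens looks something like this (a minimal sketch with assumed class names and values, not the pens’ exact code):
.swatch {
    background: hsl(var(--hue, 20), var(--saturation, 70%), var(--lightness, 55%));
}
.swatch:nth-child(3n + 1) { --hue: 120; }
.swatch:nth-child(5n + 2) { --hue: 230; }
.swatch:nth-child(7n + 3) { --saturation: 45%; }
.swatch:nth-child(11n + 4) { --lightness: 40%; }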
Pseudo-randomized border-radius
Admittedly, this one can be done with just longhands, but since I realized this after I had already made it, I figured eh, I may as well include it 🤷🏽‍♀️
https://codepen.io/leaverou/pen/ZEQXOrd
It is also really cool when combined with pseudo-random colors (just hue this time):
https://codepen.io/leaverou/pen/oNbGzeE
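Something like this is all it takes (a sketch with assumed names, not the pen’s exact code): one variable cycle for the radii, another for the hue.
.blob {
    --r: 50% 50% 50% 50%;
    --hue: 200;
    border-radius: var(--r);
    background: hsl(var(--hue), 70%, 60%);
}
.blob:nth-child(3n + 1) { --r: 30% 70% 55% 45%; }
.blob:nth-child(5n + 2) { --r: 65% 35% 40% 60%; }
.blob:nth-child(7n + 1) { --hue: 340; }
.blob:nth-child(11n + 4) { --hue: 120; }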
Pseudo-randomized snowfall
Lots of things here:
- Using translate and transform together to animate them separately without resorting to CSS.registerProperty()
- Pseudo-randomized horizontal offset, animation-delay, font-size
- Technically we don’t need CSS variables to pseudo-randomize font-size, we can just set the property itself. However, variables enable us to pseudo-randomize it via a multiplier, in order to decouple the base font size from the pseudo-randomness, so we can edit them independently. And then we can use the same multiplier in animation-duration to make smaller snowflakes fall slower! (See the sketch below.)
https://codepen.io/leaverou/pen/YzwrWvV?editors=1100
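Here is a minimal sketch of the multiplier idea (names are assumed, not the pen’s exact code): a single --size-factor scales both font-size and the fall duration, so smaller flakes automatically fall slower.
.snowflake {
    --size-factor: 1;
    font-size: calc(1em * var(--size-factor));
    /* dividing by the factor makes smaller flakes take longer to fall */
    animation: fall calc(4s / var(--size-factor)) linear infinite;
}
.snowflake:nth-child(3n + 1) { --size-factor: 0.6; }
.snowflake:nth-child(5n + 2) { --size-factor: 1.5; }
.snowflake:nth-child(7n + 4) { --size-factor: 0.8; }
@keyframes fall {
    to { transform: translateY(100vh); }
}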
Conclusions
In general, the larger the primes you use, the better the illusion of randomness. With smaller primes, you will get more variation, but less appearance of randomness.
There are two main ways to use primes to create the illusion of randomness with :nth-child() selectors:
The first way is to set each trait on :nth-child(pn + b), where p is a prime that increases with each value and b is constant for each trait, like so:
:nth-child(3n + 1) { property1: value11; }
:nth-child(5n + 1) { property1: value12; }
:nth-child(7n + 1) { property1: value13; }
:nth-child(11n + 1) { property1: value14; }
...
:nth-child(3n + 2) { property2: value21; }
:nth-child(5n + 2) { property2: value22; }
:nth-child(7n + 2) { property2: value23; }
:nth-child(11n + 2) { property2: value24; }
...
The benefit of this approach is that you can have as few or as many values as you like. The drawback is that because primes are sparse, and become sparser as we go, you will have a lot of “holes” where your base value is applied.
The second way (which is more on par with the original Cicada principle) is to set each trait on :nth-child(pn + b), where p is constant per trait and b increases with each value:
:nth-child(5n + 1) { property1: value11; }
:nth-child(5n + 2) { property1: value12; }
:nth-child(5n + 3) { property1: value13; }
:nth-child(5n + 4) { property1: value14; }
...
:nth-child(7n + 1) { property2: value21; }
:nth-child(7n + 2) { property2: value22; }
:nth-child(7n + 3) { property2: value23; }
:nth-child(7n + 4) { property2: value24; }
...
This creates a better overall impression of randomness (especially if you order the values in a pseudo-random way too) without “holes”, but is more tedious, as you need as many values as the prime you’re using.
What other cool examples can you think of?
Refactoring optional chaining into a large codebase: lessons learned
Chinese translation by Coink Wang
Now that optional chaining is supported across the board, I decided to finally refactor Mavo to use it (yes, yes, we do provide a transpiled version as well for older browsers, settle down). This is a moment I have been waiting for a long time, as I think optional chaining is the single most substantial JS syntax improvement since arrow functions and template strings. Yes, I think it’s more significant than async/await, just because of the mere frequency of code it improves. Property access is literally everywhere.
First off, what is optional chaining, in case you haven’t heard of it before?
You know how you can’t just do foo.bar.baz() without checking if foo exists, and then if foo.bar exists, and then if foo.bar.baz exists, because you’ll get an error? So you have to do something awkward like:
if (foo && foo.bar && foo.bar.baz) {
    foo.bar.baz();
}
Or even:
foo && foo.bar && foo.bar.baz && foo.bar.baz();
Some even contort object destructuring to help with this. With optional chaining, you can just do this:
foo?.bar?.baz?.()
It supports normal property access, brackets (foo?.[bar]), and even function invocation (foo?.()). Sweet, right??
Yes, mostly. Indeed, there is SO MUCH code that can be simplified with it, it’s incredible. But there are a few caveats.
Patterns to search for
Suppose you decided to go ahead and refactor your code as well. What to look for?
There is of course the obvious foo && foo.bar that becomes foo?.bar.
There is also the conditional version of it, that we described in the beginning of this article, which uses if() for some or all of the checks in the chain.
There are also a few more patterns.
Ternary
foo ? foo.bar : defaultValue
Which can now be written as:
foo?.bar || defaultValue
or, using the other awesome new operator, the nullish coalescing operator:
foo?.bar ?? defaultValue
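The two are not interchangeable when the property’s value can be falsy without being nullish; a quick illustration:
const foo = { bar: 0 };
console.log(foo?.bar || "default"); // "default" (0 is falsy)
console.log(foo?.bar ?? "default"); // 0 (0 is not null/undefined)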
Array checking
if (foo.length > 3) {
    foo[2]
}
which now becomes:
foo?.[2]
Note that this is no substitute for a real array check, like the one done by Array.isArray(foo). Do not go about replacing proper array checking with duck typing because it’s shorter. We stopped doing that over a decade ago.
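If the code actually needs to know it has an array, keep the explicit check and reserve ?. for things that can legitimately be absent:
const foo = [1, 2];
if (Array.isArray(foo)) {
    console.log(foo[2]); // undefined when out of bounds, but no error
}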
Regex match
Forget about things like this:
let match = "#C0FFEE".match(/#([A-Z]+)/i);
let hex = match && match[1];
Or even things like that:
let hex = ("#C0FFEE".match(/#([A-Z]+)/i) || [,])[1];
Now it’s just:
let hex = "#C0FFEE".match(/#([A-Z]+)/i)?.[1];
In our case, I was able to even remove two utility functions and replace their invocations with this.
Feature detection
In simple cases, ?. can replace feature detection. For example:
if (element.prepend) element.prepend(otherElement);
becomes:
element.prepend?.(otherElement);
Don’t overdo it
While it may be tempting to convert code like this:
if (foo) {
    something(foo.bar);
    somethingElse(foo.baz);
    andOneLastThing(foo.yolo);
}
to this:
something(foo?.bar);
somethingElse(foo?.baz);
andOneLastThing(foo?.yolo);
Don’t. You’re essentially having the JS runtime check foo three times instead of one. You may argue these things don’t matter much anymore performance-wise, but it’s the same repetition for the human reading your code: they have to mentally process the check for foo three times instead of one. And if they need to add another statement using property access on foo, they need to add yet another check, instead of just using the conditional that’s already there.
Caveats
You still need to check before assignment
You may be tempted to convert things like:
if (foo && foo.bar) {
    foo.bar.baz = someValue;
}
to:
foo?.bar?.baz = someValue;
Unfortunately, that’s not possible and will error. This was an actual snippet from our codebase:
if (this.bar && this.bar.edit) {
    this.bar.edit.textContent = this._("edit");
}
Which I happily refactored to:
if (this.bar?.edit) {
    this.bar.edit.textContent = this._("edit");
}
All good so far, this works nicely. But then I thought, wait a second… do I need the conditional at all? Maybe I can just do this:
this.bar?.edit?.textContent = this._("edit");
Nope. Uncaught SyntaxError: Invalid left-hand side in assignment. Can’t do that. You still need the conditional. I literally kept doing this, and I’m glad I had ESLint in my editor to warn me about it without having to actually run the code.
It’s very easy to put the ?. in the wrong place or forget some ?.
Note that if you’re refactoring a long chain with optional chaining, you often need to insert multiple ?. after the first one, for every member access that may or may not exist, otherwise you will get errors once the optional chaining returns undefined. Or, sometimes you may think you do, because you put the ?. in the wrong place.
Take the following real example. I originally refactored this:
this.children[index] ? this.children[index].element : this.marker
into this:
this.children?.[index].element ?? this.marker
then got a TypeError: Cannot read property 'element' of undefined. Oops! Then I fixed it by adding an additional ?.:
this.children?.[index]?.element ?? this.marker
This works, but is superfluous, as pointed out in the comments. I just needed to move the ?.:
this.children[index]?.element ?? this.marker
Note that, as pointed out in the comments, you should be careful about replacing array length checks with optional access to the index. This might be bad for performance, because out-of-bounds access on an array de-optimizes the code in V8 (it has to check the prototype chain for such a property too, not just decide that there is no such index in the array).
It can introduce bugs if you’re not careful
If, like me, you go on a refactoring spree, it’s easy after a certain point to just introduce optional chaining in places where it actually ends up changing what your code does and introducing subtle bugs.
null vs undefined
Possibly the most common pattern is replacing foo && foo.bar with foo?.bar. While in most cases these work equivalently, this is not true for every case. When foo is null, the former returns null, whereas the latter returns undefined. This can cause bugs to creep up in cases where the distinction matters, and it is probably the most common way to introduce bugs with this type of refactoring.
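A quick illustration of the difference:
const foo = null;
console.log(foo && foo.bar); // null
console.log(foo?.bar); // undefined
// The difference surfaces in strict comparisons:
console.log((foo && foo.bar) === null); // true
console.log(foo?.bar === null); // false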
Equality checks
Be careful about converting code like this:
if (foo && bar && foo.prop1 === bar.prop2) { /* ... */ }
to code like this:
if (foo?.prop1 === bar?.prop2) { /* ... */ }
In the first case, the condition will not be true unless both foo and bar are truthy. However, in the second case, if both foo and bar are nullish, the conditional will be true, because both operands will return undefined! The same bug can creep in even if the second operand doesn’t include any optional chaining: as long as it could be undefined, you can get unintended matches.
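For example:
const foo = null, bar = undefined;
console.log(Boolean(foo && bar && foo.prop1 === bar.prop2)); // false
console.log(foo?.prop1 === bar?.prop2); // true! both sides are undefined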
Operator precedence slips
One thing to look out for is that optional chaining has higher precedence than &&. This becomes particularly significant when you replace an expression using && that also involves equality checks, since the (in)equality operators are sandwiched between ?. and &&, having lower precedence than the former and higher than the latter.
if (foo && foo.bar === baz) { /* ... */ }
What is compared with baz here? foo.bar or foo && foo.bar? Since && has lower precedence than ===, it’s as if we had written:
if (foo && (foo.bar === baz)) { /* ... */ }
Note that the conditional cannot ever be executed if foo is falsy. However, once we refactor it to use optional chaining, it is now as if we were comparing (foo && foo.bar) to baz:
if (foo?.bar === baz) { /* ... */ }
An obvious case where the different semantics affect execution is when baz is undefined. In that case, we can enter the conditional when foo is nullish, since then optional chaining will return undefined, which is basically the case we described above. In most other cases, this doesn’t make a big difference. It can however be pretty bad when, instead of an equality operator, you have an inequality operator, which still has the same precedence. Compare this:
if (foo && foo.bar !== baz) { /* ... */ }
with this:
if (foo?.bar !== baz) { /* ... */ }
Now, we are going to enter the conditional every time foo is nullish, as long as baz is not undefined! The difference is not noticeable in an edge case anymore, but in the average case! 😱
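To see it concretely (with hypothetical values):
const foo = null;
const baz = "something";
if (foo && foo.bar !== baz) {
    console.log("old version: never runs when foo is nullish");
}
if (foo?.bar !== baz) {
    // undefined !== "something", so this now runs for every nullish foo
    console.log("new version: runs");
}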
Return statements
Rather obvious after you think about it, but it’s easy to forget return statements in the heat of the moment. You cannot replace things like this:
if (foo && foo.bar) {
    return foo.bar();
}
with:
return foo?.bar?.();
In the first case, you return conditionally, whereas in the second case you return always. This will not introduce any issues if the conditional is the last statement in your function, but it will change the control flow if it’s not.
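If the conditional is not the last statement, a refactor that preserves the control flow keeps the if and only shortens the check:
if (foo?.bar) {
    return foo.bar();
}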
Sometimes, it can fix bugs too!
Take a look at this code I encountered during my refactoring:
/**
 * Get the current value of a CSS property on an element
 */
getStyle: (element, property) => {
    if (element) {
        var value = getComputedStyle(element).getPropertyValue(property);
        if (value) {
            return value.trim();
        }
    }
},
Can you spot the bug? If value is an empty string (and given the context, it could very well be), the function will return undefined, because an empty string is falsy! Rewriting it to use optional chaining fixes this:
if (element) {
    var value = getComputedStyle(element).getPropertyValue(property);
    return value?.trim();
}
Now, if value is the empty string, it will still return an empty string, and it will only return undefined when value is nullish.
Finding usages becomes trickier
This was pointed out by Razvan Caliman on Twitter:
https://twitter.com/razvancaliman/status/1273638529399230464
Bottom line
In the end, this refactor made Mavo about 2KB lighter and saved 37 lines of code. It did however make the transpiled version 79 lines and 9KB (!) heavier.
Here is the relevant commit, for your perusal. I tried my best to exercise restraint and not introduce any unrelated refactoring in this commit, so that the diff is chock-full of optional chaining examples. It has 104 additions and 141 deletions, so I’d wager it has about 100 examples of optional chaining in practice. Hope it’s helpful!