Reading List
The most recent articles from a list of feeds I subscribe to.
State of HTML 2023 now open!
tl;dr the brand new State of HTML survey is finally open!
Take State of HTML 2023 Survey
Benefits to you:
- Survey results are used by browsers to prioritize roadmaps — the reason Google is funding this. Time spent thoughtfully filling them out is an investment that can come back to you tenfold in the form of seeing features you care about implemented, browser incompatibilities being prioritized, and gaps in the platform being addressed.
- In addition to browsers, several standards groups are also using the results for prioritization and decision-making.
- Learn about new and upcoming features you may have missed; add features to your reading list and get a list of resources at the end!
- Get a personalized score and see how you compare to other respondents
- Learn about the latest trends in the ecosystem and what other developers are focusing on
While the survey will be open for 3 weeks, responses entered within the first 9 days (until October 1st) will have a much higher impact on the Web, as preliminary data will be used to inform Interop 2024 proposals.
The State of HTML logo, designed by Chris Kirk-Nielsen, who I think surpassed himself with this one!
Background
This is likely the most ambitious Devographics survey to date. For the past couple of months, I’ve been hard at work leading a small product team spread across three continents (2am to 8am became my second work shift 😅). We embarked on this mission with some uncertainty about whether there were enough features for a State of HTML survey, but quickly found ourselves with the opposite problem: there were too many, all with good reasons for inclusion! To help weigh the tradeoffs and decide what makes the cut we consulted both the developer community, as well as stakeholders across browsers, standards groups, community groups, and more.
We even designed new UI controls to facilitate collecting the types of complex data that were needed without making the questions too taxing, and did original UX research to validate them. Once the dust settles, I plan to write separate blog posts about some of these.
FAQ
Can I edit my responses?
Absolutely! Do not worry about filling it out perfectly in one go. If you create an account, you can edit your responses for the whole period the survey is open, and even split filling it out across multiple devices (e.g. start on your phone, then fill out some on your desktop, etc.) Even if you’re filling it out anonymously, you can still edit responses on your device for a while. You could even start anonymously and create an account later, and your responses will be preserved (the only case that doesn’t work is filling it out anonymously, then logging in with an existing account).
So, perhaps the call to action above should be…
Start State of HTML 2023 Survey
Why are there JS questions in an HTML survey?
For the same reason there are JS APIs in the HTML standard: many JS APIs are intrinsically related to HTML. We mainly included JS APIs in the following areas:
- APIs used to manipulate HTML dynamically (DOM, form validation, etc.)
- Web Components APIs, used to create custom HTML elements
- APIs used to create web apps that feel like native apps (e.g. Service Workers, Web App Manifest, etc.)
If you don’t write any JS, we absolutely still want to hear from you! In fact, I would encourage you even more strongly to fill out the survey: we need to hear from folks who don’t write JS, as they are often underrepresented. Please feel free to skip any JS-related questions (all questions are optional anyway) or select that you have never heard of these features. There is a question at the end where you can select that you only write HTML/CSS:
Is the survey only available in English?
Absolutely not! Localization has been an integral part of these surveys since the beginning. Fun fact: Nobody in the core State of HTML team is a native English speaker.
Each survey gets (at least partially) translated to over 30 languages.
However, since translations are a community effort, they are not necessarily complete, especially in the beginning. If you are a native speaker of a language that is not yet complete, please consider helping out!
What does my score mean?
Previous surveys reported score as a percentage: “You have heard or used X out of Y features mentioned in the survey”. This one did too at first:
This was my own score when the survey first launched, and I created the darn survey 😅 Our engineer, Sacha, who is also the founder of Devographics, got 19%!
Scores were a lot lower for this survey, for two reasons:
- It asks about a lot of cutting-edge features, more than the other surveys. As I mentioned above, we had a lot of difficult tradeoffs to make, and had to cut a ton of features that were otherwise a great fit. We erred on the side of more cutting-edge features, as those are the areas where the survey can make the most difference in the ecosystem.
- To save on space, and be able to ask about more features, we used a new compact format for some of the more stable features, which only asks about usage, not awareness.
Here is an example from the first section of the survey (Forms):
However, this means that if you have never used a feature, it does not count towards your score, even if you have been aware of it for years. It therefore felt unfair to many to report that you’ve “heard or used” X% of features, when there was no way to express that you have heard 89 out of 131 of them!
To address this, we changed the score to be a sum of points, a bit like a video game: each used feature is worth 10 points, each known feature is worth 5 points.
Since the new score is harder to interpret by itself and only makes sense in comparison to others, we also show your rank among other participants, to make this easier.
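The point-based scheme is simple enough to sketch in a few lines (a hypothetical `score` helper; the 10/5 weights are the ones stated above):

```python
# Minimal sketch of the new scoring scheme: 10 points per feature used,
# 5 points per feature merely heard of.
def score(used: int, known: int) -> int:
    return 10 * used + 5 * known

# e.g. someone who used 30 features and knew of 40 more:
print(score(30, 40))  # → 500
```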
My score after the change. If you have already taken the survey, you can just revisit it (on the same device & browser, if you filled it in anonymously) and go straight to the finish page to see your new score and ranking!
I found a bug, what should I do?
Please file an issue so we can fix it!
Acknowledgements
This survey would not have been possible without the hard work of many people. Besides myself (Lea Verou), this includes the rest of the team:
- Engineering team: Sacha Greif, Eric Burel
- UX research & data science team: Shaine Rosewel Matala, Michael Quiapos, Gio Vernell Quiogue
- Our logo designer, Chris Kirk-Nielsen
And several volunteers:
- Léonie Watson for accessibility feedback
- Our usability testing participants
- …and all folks who provided early feedback throughout the process
Last but not least, Kadir Topal made the survey possible in the first place, by proposing it and securing funding from Google.
Thank you all! 🙏🏼
Press coverage (selected)
- Turns out I know less about HTML than I thought! 😅 - Kevin Powell (Video)
- Are you an HTML expert? Find out with the new State of HTML 2023 survey - dev.to
- Chris’ Corner: Things I Totally Didn’t Know About That I Learned From Taking the State of HTML 2023 Survey
- Frontend Focus
- CSS Weekly
- The HTML Blog
Numbers or Brackets for numeric questions?
As you may know, this summer I am leading the design of the inaugural State of HTML survey. Naturally, I am also exploring ways to improve both survey UX, as well as all questions.
Shaine Madala, a data scientist on the survey design team, proposed using numerical inputs instead of brackets for the income question. While I was initially against it, I decided to explore this a bit further, which changed my opinion.
There are actually four demographics questions in State of X surveys where the answer is essentially a number, yet we ask respondents to select a bracket: age, years of experience, company size, and income.
The arguments for brackets are:
- They are more privacy preserving for sensitive questions (e.g. people may feel more comfortable sharing an income bracket than their actual income)
- They are more efficient to input (one click vs homing to keyboard and hitting several keystrokes).
- In some cases respondents may not know the precise number offhand (e.g. company size)
The arguments for numerical input are:
- Depending on the specifics, these can actually be faster to answer overall since they involve lower cognitive overhead (for known numbers).
- The brackets are applied at the analysis stage, so they can be designed to provide a better overview of the dataset
- More elaborate statistics can be computed (e.g. averages, medians, stdevs, the sky is the limit)
Which one is faster?
We can actually calculate this! Average reading speed for non-fiction is around 240 wpm (= 250 ms/word).[1] Therefore, we can approximate the reading time for each question as number of brackets × average words per bracket (wpb) × 250 ms.
However, this assumes the respondent reads all brackets from top to bottom, which is a rare worst-case scenario. Usually they stop reading once they find the bracket that matches their answer, and they may even skip some brackets, performing a sort of manual binary search. We should probably halve these times to get a more realistic estimate.
Average typing speed is 200 cpm [2] (= 300 ms/character). This means we can approximate the typing time for each question as the average number of digits × 300 ms.
Let’s see how this works out for each question:
| Question | Brackets | WPB | Reading time | Avg digits | Typing time |
|---|---|---|---|---|---|
| Age | 8 | 4 | 4s | 2 | 0.6s |
| Years of Experience | 6 | 4 | 3s | 2 | 0.6s |
| Company size | 9 | 4 | 4.5s | 3 | 0.9s |
| Income | 7 | 2 | 1.75s | 5 | 1.5s |
As you can see, despite our initial intuition that brackets are faster, the time it takes to read each bracketed question vastly outweighs typing time for all questions!
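For the curious, the table’s arithmetic can be reproduced with a quick sketch (times in seconds; the halving implements the average-case reading assumption described above):

```python
# Back-of-envelope timing model: 250 ms per word read, 300 ms per
# keystroke, and readers stop halfway down the bracket list on average.
READ_S_PER_WORD = 0.25
TYPE_S_PER_CHAR = 0.3

def reading_time(brackets: int, words_per_bracket: int) -> float:
    # Worst case reads every bracket; halve for the average case.
    return brackets * words_per_bracket * READ_S_PER_WORD / 2

def typing_time(avg_digits: int) -> float:
    return avg_digits * TYPE_S_PER_CHAR

questions = {
    "Age": (8, 4, 2),
    "Years of Experience": (6, 4, 2),
    "Company size": (9, 4, 3),
    "Income": (7, 2, 5),
}

for name, (brackets, wpb, digits) in questions.items():
    print(f"{name}: read {reading_time(brackets, wpb):.2f}s, "
          f"type {typing_time(digits):.2f}s")
```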
Of course, this is a simplification. There are models in HCI, such as KLM, which can more accurately estimate the time certain UI flows take. We even taught some of these to MIT students in 6.813, as well as its successor.
For example, here are some of the variables we left out in our analysis above:
- When answering with numerical input, most users need to home from mouse to keyboard, which takes time (estimated as 0.4s in KLM) and then focus the input so they can write in it, which takes an additional click (estimated as 0.2s in KLM)
- When answering with brackets, users need to move the mouse to the correct bracket, which takes time (KLM estimates all pointing tasks as a flat 1.1s, but this can be more accurately estimated using Fitts’ Law)
- We are assuming that the decision is instantaneous, but doing the mental math of comparing the number in your head to the bracket numbers also takes time.
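For reference, the pointing-time estimate mentioned above comes from Fitts’ Law, commonly written in its Shannon formulation as:

```latex
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```

where $D$ is the distance to the target, $W$ is the target’s width along the axis of motion, and $a$, $b$ are empirically fitted constants. Intuitively, larger and closer targets (such as wide bracket options) are faster to hit.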
However, given the vast difference in times, I don’t think a more accurate model would change the conclusion much.
Note that this analysis is based on a desktop interface, primarily because it’s easier (most of these models were developed before mobile was widespread; e.g. KLM was invented in 1978!). Mobile would require a separate calculation taking into account the specifics of mobile interaction (e.g. the time it takes for the keyboard to pop up), though the same logic applies. (Thanks Tim for this excellent question!)
What about sliders?
Sliders are uncommon in surveys, and for good reason. They offer the most benefit in UIs where changes to the value provide feedback, and allow users to iteratively approach the desired value by reacting to this feedback. For example:
- In a color picker, the user can zero in to the desired coordinates iteratively, by seeing the color change in real time
- In a video player, the user can drag the slider to the right time by getting feedback about video frames.
- In searches (e.g. for flights), dragging the slider updates the results in real time, allowing the user to gradually refine their search with suitable tradeoffs
In surveys, there is usually no feedback, which eliminates this core benefit.
When the number is known in advance, sliders are usually a poor choice, except when there are very few numbers to choose among (e.g. a 1-5 rating) and the slider UI makes it very clear where to click to select each of them, or when we don’t much care about the exact number we select (e.g. searching flights by departure time).[3] None of our demographics questions falls into this category (unless bracketed, in which case why not use regular brackets?).
There are several reasons for this:
- It is hard to predict where exactly to click to select the desired number. The denser the range, the harder it is.
- Even if you know where to click, it’s hard to do so on mobile
- Dragging a slider on desktop is generally slower than typing the number outright.[4]
`<input type=number>` all the things?
Efficiency is not the only consideration here. Privacy is a big one. These surveys are anonymous, but respondents are still often concerned about entering data they consider sensitive. Also, for the efficiency argument to hold true, the numerical answer needs to be top of mind, which is not always the case.
I summarize my recommendations below.
Age
This is a two digit number, that is always top of mind. Number input.
Years of experience
This is a 1-2 digit number, and it is either top of mind, or very close to it. Number input.
Company size
While most people know their rough company size, they would very rarely be able to provide an exact number without looking it up. This is a good candidate for brackets. However, the number of brackets should be reduced from the current 9 (does the difference between 2-5 and 6-10 employees really matter?), and their labels should be copyedited for scannability.
We should also take existing data into account. Looking at the State of CSS 2022 results for this question, it appears that about one third of respondents work at companies with 2-100 people, so we should probably not combine these 5 brackets into one, like I was planning to propose. 101 to 1000 employees is also the existing bracket with the most responses (15.1%), so we could narrow it a little, shifting some of its respondents to the previous bracket.
Taking all these factors into consideration, I proposed the following brackets:
- Just me!
- Small (2 - 50)
- Medium (51 - 200)
- Large (201 - 1000)
- Very Large (1000+)
Income
The question that started it all is unfortunately the hardest.
Income is a number that people know (or can approximate). It is faster to type, but only marginally (1.75s vs 1.5s). We can, however, reduce the keystrokes further (from 1.5s to 0.6s on average) by asking people to enter the amount in thousands.
The biggest concern here is privacy. Would people be comfortable sharing a more precise number? We could mitigate this somewhat by explicitly instructing respondents to round it further, e.g. to the nearest multiple of 10:
What is your approximate yearly income (before taxes)? Feel free to round to the nearest multiple of 10 if you are not comfortable sharing an exact number. If it varies by year, please enter an average.
However, this assumes that the privacy concern is about granularity, or about the number being too low (rounding to 10s could help with both). David Karger made an excellent point in the comments: people at the higher income levels may also be reluctant to share their income:
> I don’t think that rounding off accomplishes anything. It’s not the least significant digit that people care about, but the most significant digit. This all depends on who they imagine will read the data of course. But consider some techy earnings, say 350k. That’s quite a generous salary and some people might be embarrassed to reveal their good fortune. Rounding it to 300k would still be embarrassing. On the other hand, a bracket like 150 to 500 would give them wiggle room to say that they’re earning a decent salary without revealing that they’re earning a fantastic one. I don’t have immediate insight into what brackets should be chosen to give people the cover they need to be truthful, but I think they will work better for this question.
Another idea was to offer UI that lets users indicate that the number they have entered is actually an upper or lower bound.
What is your approximate yearly income (before taxes)?
Of course, a dropdown PLUS a number input is much slower than using brackets, but if only a tiny fraction of respondents uses it, it does not affect the analysis of the average case.
However, after careful consideration and input, both qualitative and quantitative, it appears that privacy is a much bigger factor than I had previously realized. Even though I was aware that people see income level as sensitive data (more so in certain cultures than others), I had not fully realized the extent of this. In the end, I think the additional privacy afforded by brackets far outweighs any argument for efficiency or data analysis convenience.
Conclusion
I’m sure there is a lot of prior art on the general dilemma of numerical inputs vs brackets, but I wanted to do some analysis grounded in the specifics of this case, and outline an analytical framework for answering these kinds of dilemmas.
That said, if you know of any relevant prior art, please share it in the comments! Same if you can spot any flaws in my analysis or recommendations.
You could also check out the relevant discussion as there may be good points there.
https://www.sciencedirect.com/science/article/abs/pii/S0749596X19300786 ↩︎
KLM is a poor model for dragging tasks for two reasons: First, it regards dragging as simply a combination of three actions: button press, mouse move, button release. But we all know from experience that dragging is much harder than simply pointing, as managing two tasks simultaneously (holding down the mouse button and moving the pointer) is almost always harder than doing them sequentially. Second, it assumes that all pointing tasks have a fixed cost (1.1s), which may be acceptable for actual pointing tasks, but the inaccuracy is magnified for dragging tasks. A lot of HCI literature (and even NNGroup) refers to the Steering Law to estimate the time it takes to use a slider; however, modern sliders (and scrollbars) do not require steering, as they are not constrained to a single axis: once dragging is initiated, moving the pointer in any direction adjusts the slider, until the mouse button is released. Fitts’ Law actually appears to be a better model here, and indeed there are many papers extending it to dragging. However, evaluating this research is out of scope for this post. ↩︎
Help Design the Inaugural State of HTML Survey!
You have likely participated in several Devographics surveys before, such as State of CSS, or State of JS. These surveys have become the primary source of unbiased data for the practices of front-end developers today (there is also the Web Almanac research, but because this studies what is actually used on the web, it takes a lot longer for changes in developer practices to propagate).
You may remember that last summer, Google sponsored me to be Survey Design Lead for State of CSS 2022. It went really well: we got 60% higher response rate than the year before, which gave browsers a lot of actionable data to prioritize their work. The feedback from these surveys is a prime input into the Interop project, where browsers collaborate to implement the most important features for developers interoperably.
So this summer, Google trusted me with a much bigger project, a brand new survey: State of HTML!
For some of you, a State of HTML survey may be the obvious next step, the remaining missing piece.
For others, the gap this is filling may not be as clear.
No, this is not about whether you prefer `<div>` or `<span>`!
It turns out, just like JavaScript and CSS, HTML is actually going through an evolution of its own!
New elements like `<selectmenu>` and `<breadcrumb>` are on the horizon, as are cool new features like popovers and declarative Shadow DOM.
There are even JS APIs that are intrinsically tied to HTML, such as Imperative slot assignment, or DOM APIs like `input.showPicker()`.
Historically, these did not fit in any of these surveys.
Some were previously asked in State of JS, some in State of CSS, but it was always a bit awkward.
This new survey aims to fill these gaps, and finish surveying the core technologies of the Web, which are HTML, CSS and JavaScript.
Designing a brand new survey is a more daunting task than creating the new edition of an existing survey, but also an exciting one, as comparability with the data from prior years is not a concern, so there is a lot more freedom.
Each State of X survey consists of two parts. Part 1 is a quiz: a long list of lesser-known and/or cutting-edge (or even upcoming) features, where respondents select one of three options indicating how familiar they are with each feature.
Starting with State of CSS 2022, respondents could also add freeform comments to provide more context about their answer through the little speech bubble icon.
One of my goals this year is to make this feature quicker to use for common types of feedback, and to facilitate quantitative analysis of the responses (to some degree).
At the end of the survey, respondents even get a knowledge score based on their answers, which provides immediate value and motivation, reducing survey fatigue.
Part 2 is more freeform, and usually includes multiple-choice questions about tools and resources, freeform questions about pain points, and of course, demographics.
One of the novel things I tried in the 2022 State of CSS survey was to involve the community in the design process, with one-click voting for the features to ask about. These were actually GitHub Issues with certain labels. Two years prior I had released MaVoice: an app to facilitate one click voting on Issues in any repo, and it fit the bill perfectly here.
This process worked exceptionally well for uncovering blind spots: it turned out there were a bunch of CSS features that would be good to ask about, but were simply not on our radar. This is one of the reasons I strongly believe in transparency and co-design: no one human or small team can ever match the collective intelligence of the community.
Predictably, I plan to try the same approach for State of HTML. Instead of using MaVoice, this year I’m trying GitHub Discussions. These allow one click voting from the GitHub interface itself, without users having to authorize a separate app. They also allow for more discussion, and do not clutter Issues, which are better suited for – well – actual issues.
I have created a Discussions category for this and seeded it with 55 features spanning 12 focus areas (Forms & Editing, Making Web Components, Consuming Web Components, ARIA & Accessibility APIs, Embedding, Multimedia, Interactivity, Semantic HTML, Templating, Bridging the gap with native, Performance, Security & Privacy). These initial ideas and focus areas came from a combination of personal research, as well as several brainstorming sessions with the WebDX CG.
Vote on Features for State of HTML 2023!
You can also see a (read-only) summary of the proposed features with their metadata here, though keep in mind that it’s manually updated, so it may not include new proposals.
If you can think of features we missed, please post a new Discussion in this category. There is also a more general 💬 State of HTML 2023 Design category, for meta-discussions on Part 1 of the survey, and design brainstorming on Part 2.
Note that the feedback period will be open for two weeks, until August 10th. After that point, feedback may still be taken into account, but it may be too late in the process to make a difference.
Some things to keep in mind when voting and generally participating in these discussions:
- The votes and proposals collected through this process are only one of the many variables that feed into deciding what to ask about, and are non-binding.
- There are two goals to balance here:
  1. The survey needs to provide value to developers – and be fun to fill in!
  2. The survey needs to provide value to browsers, i.e. get them actionable feedback they can use to help prioritize what to work on. This is the main way these surveys have impact on the web platform, and it is at least as important as (1).
- While the title is “State of HTML”, certain JS APIs or even CSS syntax is also relevant, especially those very close to HTML, such as DOM, ARIA, Web Components, PWAs etc.
- Stable features that have existed for a long time and are widely known are generally less likely to make it to the survey.
Going Lean
WordPress has been with me since my very first post in 2009. There is a lot to love about it: It’s open source, it has a thriving ecosystem, a beautiful default theme, and a revolutionary block editor that makes my inner UX geek giddy. Plus, WP made building a website and publishing content accessible to everyone. No wonder it’s the most popular CMS in the world, by a huge margin.
However, for me, the bad had started to outweigh the good:
- Things I could do in minutes in a static site, in WP required finding a plugin or tweaking PHP code.
- It was slow and bloated.
- Getting a draft out of it and into another medium was a pain.
- Despite having never been hacked, I was terrified about it, given all the horror stories.
- I was periodically getting “Error establishing a database connection” errors, whose frequency kept increasing.
It was time to move on. It’s not you WP, it’s me.
It seemed obvious that the next step would be a statically generated blog. I had been using Eleventy for a while on a variety of sites at that point and loved it, so using that was a no-brainer. In fact, my blog was one of my last remaining non-JAMstack sites, and by far the biggest. I had built a simple 11ty blog for my husband a year ago, and was almost jealous of the convenience and simplicity. There are so many conveniences that just come for free with this workflow: git, Markdown, custom components, even GitHub Copilot as you write your prose! And if you can make the repo public, oooooh, the possibilities! People could even file PRs and issues for your blog posts!
Using Netlify as a platform was also a no-brainer: I had been using it for years, for over 30 sites at this point! I love their simplicity, their focus on developer experience, and their commitment to open source. I also happen to know a bunch of folks there, and they have a great culture too.
However, I was dreading the amount of work it would take to migrate 14 years of content, plugins, and styling. The straw that broke the camel’s back was a particularly bad db outage. I tweeted about my frustration, but I had already made up my mind.
I reviewed the list of plugins I had installed on WP to estimate the amount of work. Nearly all fell into one of three categories:
- Solving problems I wouldn’t have if I wasn’t using WP (e.g. SVG support, Don’t Muck My Markup)
- Giving me benefits I could get in 11ty with very little code (e.g. Prism syntax highlighting, Custom Body Class, Disqus, Unlist Posts & Pages, Widget CSS classes)
- Giving me benefits I could get with existing Eleventy plugins (e.g. Add Anchor Links, Easy Table of Contents)
This could actually work!
Public or private repo?
One of the hardest dilemmas was whether to make the repo for this website public or private.
Overall, I was happy to have most files be public, but there were a few things I wanted to keep private:
- Drafts (some drafts I’m ok to share publicly, but not all)
- Unlisted pages and posts (posts with publicly accessible URLs, but not linked from anywhere)
- Private pages (e.g. in the previous site I had a password-protected page with my details for conference organizers)
Unfortunately, right now it’s all-or-nothing: even if only one file needs to be private, the whole repo needs to be private.
Making the repo public does have many advantages:
- Transparency is one of my core values, and this is in line with it.
- People can learn from my code and avoid going down the same rabbit holes I did.
- People can file issues for problems.
- People can send PRs to fix both content and functionality.
- I wouldn’t need to use a separate public repo for the data that populates my Speaking, Publications, and Projects pages.
I went back and forth quite a lot, but in the end I decided to make it public. In fact, I fully embraced it, by making it as easy as possible to file issues and submit PRs.
Each page has a link to report a problem with it, which prefills as much info as possible. This was also a good excuse to try out GitHub Issue Forms, as well as URLs for prefilling the form!
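As a sketch of how such prefilled links can be built (GitHub’s `/issues/new` endpoint accepts `title` and `body` query parameters; the repo name and text below are hypothetical placeholders):

```python
# Hypothetical "report a problem" link builder. GitHub prefills the
# new-issue form from the title/body query parameters.
from urllib.parse import urlencode

def report_url(repo: str, page_url: str, page_title: str) -> str:
    params = {
        "title": f"Problem with: {page_title}",
        "body": f"Page: {page_url}\n\n(Describe the problem here.)",
    }
    return f"https://github.com/{repo}/issues/new?{urlencode(params)}"

print(report_url("user/blog", "https://example.com/2023/07/going-lean/", "Going Lean"))
```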
Licensing
Note that a public repo is not automatically open source. As you know, I have a long track record of open sourcing my code. I love seeing people learning from it, using it in their own projects, and blogging about what they’ve learned. So, MIT-licensing the code part of this website is a no-brainer. CC-BY also seems like a no-brainer for content, because, why not?
Where it gets tricky is the design. I’m well aware that neither my logo nor the visual style of this website would win any design awards; I haven’t worked as a graphic designer for many years, and it shows. However, it’s something I feel is very personal to me, my own personal brand, which by definition needs to be unique. Seeing another website with the same logo and/or visual style would feel just as unsettling as walking into a house that looks exactly like mine. I’m speaking from experience: I’ve had my logo and design copied many times, and it always felt like a violation.
I’m not sure how to express this distinction in a GitHub `LICENSE` file, so I haven’t yet added one, but I did try to outline it in the Credits & Making Of page.
It’s still difficult to draw the line precisely, especially when it comes to CSS code. I’m basically happy for people to copy as much of my CSS code as they want (following MIT license rules), as long as the end result doesn’t scream “Lea Verou” to anyone who has seen this site. But how on Earth do you express that? 🤔
Migrating content to Markdown
The title of this section says “to Markdown” because that’s one of the benefits of this approach: static site generators are largely compatible with each other, so if I ever needed to migrate again, it would be much easier.
Thankfully, travelers on this road before me had already paved it: there are many open source scripts out there to migrate WP to Markdown! The one that worked well for me was lonekorean/wordpress-export-to-markdown (though I later discovered there’s a more updated fork now).
It was still a bumpy road. First, it kept getting stuck on parsing the WP XML export, specifically in comments. I use Disqus for comments, but it mirrors comments in the internal WP system. Also, WP seems to continue recording trackbacks even if they are not displayed anywhere. Turns out I had hundreds of thousands of spam trackbacks, which I spent hours cleaning up (it was quite a meditative experience). In the end I got the total comments + trackbacks from 290K down to 26K which reduced the size of the XML export from 210 MB to a mere 31 MB. This did not fix the parsing issue, but allowed me to simply open the file in VS Code and delete the problematic comments manually. It also fixed the uptime issues I was having: I never got another “Error establishing a database connection” error after that, despite taking my own sweet time to migrate (started in April 2023, and finished in July!). Ideally, I wish WP had an option to export without comments, but I guess that’s not a common use case.
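For illustration, an imported post might end up as a Markdown file along these lines (the front matter fields and values here are hypothetical; the actual output depends on how the importer is configured):

```markdown
---
title: "Example post"
date: 2013-05-04
permalink: /2013/05/example-post/
tags:
  - original
  - css
---

The post body, converted to Markdown, with images pointing at local paths:

![A screenshot](images/screenshot.png)
```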
While this importer is great, and allowed me to configure the file structure in a way that preserved all my URLs, I did lose a few things:
- “Read more” separators (filed it as issue #93)
- Figures (they are imported as just images with text underneath) (filed it as issue #94)
- Drafts (#16)
- Pages (I had to manually copy them over, but it was only a handful)
- Any custom classes were gone (e.g. a `view-demo` class I used to create “call to action” links)
A few other issues:
- It downloaded all images, but did not update the URLs in the Markdown files. This was easy to fix with a regex find and replace from `https?://lea.verou.me/wp-content/uploads/(\d{4}/\d{2})/([\w\.-]+\.(?:png|gif|jpe?g))` to `images/$2`.
- Some images from some posts were not downloaded – I still have no idea why.
- It did not download any non-media uploads, e.g. zip files. Thankfully, these were only a couple, so I could detect and port them over manually.
- Older posts included code directly in the content, without code blocks, which meant it was being parsed as HTML, often with disastrous results (e.g. the post just cutting off in the middle of a sentence because it mentioned `<script>`, which opened an actual `<script>` element and ate up the rest of the content). I fixed a few manually, but I’m sure there’s more left.
- Because code was just included as content, the importer also escaped all Markdown special symbols, so adding code blocks around it was not enough; I also had to remove a bunch of backslashes manually.
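The image URL fix above can be sketched as a tiny script run over each imported Markdown file; the regex and replacement mirror the find and replace described in the list (the function name is just for illustration):

```javascript
// Sketch: rewrite old WordPress upload URLs to local image paths
// in a Markdown string. The regex mirrors the find & replace above.
function rewriteImageURLs(markdown) {
	return markdown.replace(
		/https?:\/\/lea\.verou\.me\/wp-content\/uploads\/(\d{4}\/\d{2})\/([\w\.-]+\.(?:png|gif|jpe?g))/g,
		"images/$2"
	);
}
```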
Rethinking Categorization
While the importer preserved both tags and categories, this was a good opportunity to rethink whether I need them both, and to re-evaluate how I use them.
This spun off into a separate post: Rethinking Categorization.
Migrating comments
Probably one of the hardest parts of this migration was preserving Disqus comments. In fact, it was so hard that I procrastinated on it for three months, being stuck in a limbo where I couldn’t blog because I’d have to port the new post manually.
I’ve documented the process in a separate blog post, as it was quite involved, including some thoughts about what system to use in the future, as I eventually hope to migrate away from Disqus.
Keeping URLs cool
I wanted to preserve the URL structure of my old site as much as possible, both for SEO and because cool URLs don’t change.
The WP importer I used allowed me to preserve the `/year/month/slug` structure of my URLs. I did want to have the blog in its own directory though: this site started as a blog, but I now see it as more of a personal site with a blog. Thankfully, redirecting these URLs to the corresponding `/blog/` URLs was a one-liner using Netlify redirects:

```
/20* /blog/20:splat 301
```
Going forward, I also decided to do away with the month being part of the URL, as it complicates the file structure for no discernible benefit, and I don’t blog nearly as much now as I did in 2009 (compare 2009 vs 2022: 38 posts vs 7!). I do think I will start blogging more again now, not only due to the new site, but also due to new interests and a long backlog of ideas (just look at July 2023 so far!). However, I doubt I will ever get back to the pre-2014 levels; I simply don’t have that kind of time anymore (coincidentally, it appears my blogging frequency dropped significantly after I started my PhD).
I also wanted to continue having nice, RESTful, usable URLs, which also requires:
> URLs that are “hackable” to allow users to move to higher levels of the information architecture by hacking off the end of the URL
In practice, this means it’s not enough if `tags/foo/` shows all posts tagged “foo”; `tags/` should also show all tags. Similarly, it’s not enough if `/blog/2023/04/private-fields-considered-harmful/` links to the corresponding blog post, but also:
- `/blog/2023/04/` should show all posts from April 2023
- `/blog/2023/` should show all posts from 2023
- and of course, `/blog/` should show all posts
This proved quite tricky to do with Eleventy, and spanned an entirely different blog post.
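To give a flavor of what’s involved, here is a minimal sketch (not the actual implementation from that post) of grouping posts by year so that a paginated template can emit `/blog/2023/`-style archive pages. The post shape, function name, and collection wiring are assumptions for illustration:

```javascript
// Sketch: group posts by year, so a paginated Eleventy template can
// emit /blog/2023/-style archive pages. Post shape is assumed.
function groupPostsByYear(posts) {
	const byYear = new Map();
	for (const post of posts) {
		const year = post.date.getFullYear();
		if (!byYear.has(year)) byYear.set(year, []);
		byYear.get(year).push(post);
	}
	// Return an array of {year, posts} entries, newest year first,
	// which a pagination template can iterate over.
	return [...byYear.entries()]
		.sort((a, b) => b[0] - a[0])
		.map(([year, posts]) => ({ year, posts }));
}

// In eleventy.config.js, this could back a collection (sketch):
// eleventyConfig.addCollection("postsByYear", api =>
// 	groupPostsByYear(api.getFilteredByTag("blog")));
```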
Overall impressions
Overall, I’m happy with the result, and the flexibility. I’ve had a lot of fun with this project, and it was a great distraction during a very difficult time in my life, due to dealing with some serious health issues in my immediate family.
However, there are a few things that are now more of a hassle than they were in WP, mainly around the editing flow:
- In WP, editing a blog post I was looking at in my browser was a single click (provided I was logged in). I guess I could still do that by editing through GitHub, but now I’m spoiled; I want an easy way to edit in my own editor (VS Code, which has a lot of nice features for Markdown editing). However, the only way to do that is to either painfully traverse the directory structure, or …search to find the right `*.md` file, neither of which is ideal.
- Previewing a post I was editing was also a single click, whereas now I need to run a local server and manually type the URL in (or browse the website to find it).
- Making edits now requires me to think of a suitable commit message. Sure, this is useful sometimes, but most of the time, I want the convenience of just saving my changes and being done with it.
Open file in VS Code from the browser?
There is a way to solve the first problem: VS Code supports a `vscode://` protocol that allows you to open a file in VS Code from the browser. This means this link would open the file for this blog post in VS Code:

```html
<a href="vscode://file/Users/leaverou/Documents/lea.verou.me/blog/2023/going-lean/index.md">Edit in VS Code</a>
```
See the issue? I cannot add a button to the UI that only works for me!
However, I don’t need to add a button to the UI: as long as I expose the input path of the current page (Eleventy’s `page.inputPath`) in the HTML somehow, I can just add a bookmarklet to my own browser that does:

```js
location.href = `vscode://file/Users/leaverou/Documents/lea.verou.me/${document.documentElement.dataset.inputpath}`;
```
In fact, here it is, ready to be dragged to the bookmarks bar: Edit in VS Code
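For reference, exposing the input path can be as simple as an attribute on the root element in the base layout. This is a sketch: the attribute name matches what the bookmarklet reads (`data-inputpath` maps to `dataset.inputpath`), but the exact markup on this site may differ:

```html
<html lang="en" data-inputpath="{{ page.inputPath }}">
```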
Now, if only I could find a way to do the opposite: open the localhost URL that corresponds to the Markdown file I’m editing — and my workflow would be complete!
What’s next?
Obviously, there’s a lot of work left to do, and I bet people will find a lot more breakage than I had noticed. I also have a backlog of blog post ideas that I can’t wait to write about.
But I’ve also been toying around with the idea of porting over my personal (non-tech) blog posts, and keep them in an entirely separate section of the website. I don’t like that my content is currently hostage to Tumblr (2012-2013) and Medium (2017-2021), and would love to own it too, though I’m a bit concerned that properly separating the two would take a lot of work.
Anyhow, 'nuff said. Ship it, squirrel! 🚢🐿️
Rethinking Categorization
This is the third spinoff post in the migration saga of this blog from WordPress to 11ty.
Migrating was a good opportunity to rethink the information architecture of my site, especially around categorization.
Categories vs Tags
Just like most WP users, I was using both categories and tags, simply because they came for free. However, the difference between them was a bit fuzzy, as evidenced by how inconsistently they are used, both here and around the Web. I was mainly using categories for the type of article (Articles, Rants, Releases, Tips, Tutorials, News, Thoughts); however, there were also categories that were more like content tags (e.g. CSS WG, Original, Speaking, Benchmarks).
This was easily solved by moving the latter to actual tags. However, tags are no panacea; there are several issues with them as well.
Problems with tags
Tag aliases
First, there were many tags that were synonyms of each other, and posts were fragmented across them, or had to include both (e.g. JS and Javascript). I addressed this by defining aliases in a global data file, and using Eleventy to dynamically build Netlify redirects for them.
```
# Tag aliases
{% for alias, tag in tag_aliases %}/tags/{{ alias }}/ /tags/{{ tag }}/ 301
{% endfor %}
```
Turns out I’m not the first to think of building the Netlify `_redirects` file dynamically; some googling revealed this blog post from 2021 that does the same thing.
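The same idea can be sketched in plain JavaScript rather than a template; the alias map mirrors the global data file described above, though its exact shape here is an assumption for illustration:

```javascript
// Sketch: generate Netlify redirect lines from a tag alias map,
// equivalent to the template above. Alias map shape is assumed.
const tagAliases = { js: "javascript", es: "ecmascript" };

function tagAliasRedirects(aliases) {
	return Object.entries(aliases)
		.map(([alias, tag]) => `/tags/${alias}/ /tags/${tag}/ 301`)
		.join("\n");
}

// tagAliasRedirects(tagAliases) yields lines ready to append to _redirects
```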
I’ve also decided to expose these aliases in the tags index.
“Orphan” tags
Lastly, another issue is what I call “orphan tags”: Tags that are only used in a single post. The primary use case for both tags and categories is to help you discover related content. Tags that are only used once clutter the list of tags, but serve no actual purpose.
It is important to note that orphan tags are not (always) an authoring mistake. They come in two flavors:
1. Tags that are definitely too specific, and thus unlikely to be used again.
2. Tags that could plausibly be used again, but it simply hasn’t happened yet.
The vast majority of orphan tags fall into the second group.
I definitely removed a bunch of overly specific tags from the content, but was still left with more orphan tags than tags with more than one post (103 vs 78 as I write these lines).
For (1), the best course of action is probably to remove the tags from the content altogether. However for (2), there are two things to consider.
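Detecting which tags are orphans is simple enough; here is a sketch in plain JavaScript, with the post shape and function name assumed for illustration:

```javascript
// Sketch: find tags used by exactly one post ("orphan" tags).
function orphanTags(posts) {
	const counts = new Map();
	for (const post of posts) {
		for (const tag of post.tags ?? []) {
			counts.set(tag, (counts.get(tag) ?? 0) + 1);
		}
	}
	return [...counts].filter(([, count]) => count === 1).map(([tag]) => tag);
}
```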
How to best display orphan tags in the tag index?
For the tag index, I’ve separated orphan tags from the rest, and I’m displaying them in a `<details>` element at the end, which is collapsed by default. Each tag is a link to the post that uses it, instead of a tags page, since there is only one post that uses it.
How to best display orphan tags in the post itself?
This is a little trickier. For now, I’ve refrained from making them links, and I’m displaying them faded out to communicate this.
Another alternative I’m contemplating is to hide them entirely. Not as a punitive measure because they have failed at their one purpose in life 😅, but because this would allow me to use tags liberally, and only what sticks would be displayed to the end user.
A third, intermediate solution, would be to have a “and 4 orphan tags” message at the end of the list of tags, which can be clicked to show them.
These are not just UX/IA improvements, they are also performance improvements. Not linking orphan tags to tag pages means I don’t need to generate these tag pages at all. Since the majority of tags are orphan tags, this allowed me to substantially reduce the number of pages that need to be generated, and cut down build time by a whopping 40%, from 2.7s to 1.7s (on average).
Tag hierarchies?
The theory is that categories are a taxonomy and tags a folksonomy. Taxonomies can be hierarchical, but folksonomies are, by definition, flat. However, in practice, tags almost always have an implicit hierarchy, which is also what research on folksonomies in the wild tends to find.
Examples from this very blog:
- There is a separate tag for ES (ECMAScript), and a separate one for JS. However, any post tagged ES should also be tagged JS – though the opposite is not true.
- There is a tag for CSS, tags for specific CSS specifications (e.g. CSS Backgrounds & Borders), and even tags for specific CSS functions or properties (e.g. `background-attachment`, `background-size`). However, these are not orthogonal: posts tagged with specific CSS features should also be tagged with the CSS spec that contains them, as well as a general “CSS” tag.
I have yet to see a use case for tagging that does not result in implicit hierarchies. Yet, all UIs for entering tags assume that they are flat. Instead, it’s up to each individual post to maintain these relationships, which is tedious and error prone. In practice, the more general tags are often left out, but not intentionally or predictably.
It would be much better to be able to define this hierarchy in a central place, and have it automatically applied to all posts. In 11ty, it could be as simple as a data file for each tag’s “parent” tag. Every time the tag is used, its parent is also added to the post automatically, recursively all the way up to the root (at build time). I have not tried this yet, but I’m excited to experiment with it once I have a bit more time.
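As a sketch of what that could look like (the parent map is hypothetical, not an actual data file on this site, and the names are assumptions):

```javascript
// Sketch: a hypothetical data file mapping each tag to its parent tag.
const tagParents = {
	es: "js",
	"background-size": "css-backgrounds",
	"css-backgrounds": "css",
};

// Expand a post's tags with all ancestor tags, recursively up to the root.
// The result.has() check also guards against accidental cycles.
function expandTags(tags, parents = tagParents) {
	const result = new Set(tags);
	for (const tag of tags) {
		let parent = parents[tag];
		while (parent && !result.has(parent)) {
			result.add(parent);
			parent = parents[parent];
		}
	}
	return [...result];
}
```

In Eleventy, this could run at build time (e.g. via computed data), so each post only needs its most specific tags.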
Categories vs Tags: Reprise
Back to our original dilemma: do I still need categories, especially if I eventually implement tag hierarchies? It does seem that the categories I used in WP for the article type (Articles, Rants, Releases, Tips, Tutorials, News, Thoughts, etc.) are somewhat distinct from my usage of tags, which are more about the content of the article. However, it is unclear whether this is the best use of categories, or whether I should just use tags for this as well. Another common practice is to use tags for more specific content tags, and categories for broader areas (e.g. “Software engineering”, “Product”, “HCI”, “Personal”, etc.).

Skipping past the point that tag hierarchies make it easy to use tags for this too, this makes me think: maybe what is needed is actually metadata, not categories. Instead of deciding that categories hold the article type, or the broader domain, what if we had separate attributes for each of these things? Then we could have a “type” attribute and a “domain” attribute, and use them both for categorization and for filtering. Since Eleventy already supports arbitrary metadata, this is just a matter of implementation.
Lots to think about, but one thing seems clear: Categories do not have a clear purpose, and thus I’m doing away with them. For now, I have converted all past categories to tags, so that the additional metadata is not lost, and I will revisit how to best expose this metadata in the future.