Reading List
The most recent articles from a list of feeds I subscribe to.
The web doesn’t have version numbers
Like ‘Web 2.0’, ‘web3’ is a marketing term. There is no versioning system for the web. There are also lots of exciting uses for the web and problems with it outside the realm of ‘web3’.
It’s the latter half of the 2000s. People had been building websites and online services for a while. Suddenly, everyone started using the phrase ‘Web 2.0’. It didn’t have one clear definition, like ‘carrot’ or ‘dentist’. It referred to a bunch of things at once: websites with ‘user generated content’, users tagging their data, software as a service, async JavaScript, widgets and open (!) APIs that allowed for mashups: sites that displayed content from other sites. New companies had names with fewer vowels, like Tumblr and Flickr, and there was RSS on everything. ‘You’ was made TIME Person of the Year. I’ve been returning to some stuff from that time, and it’s been interesting.
Many of the things that fit the above description of ‘Web 2.0’ were useful, and many still are on today’s web. We lost some, we kept some. But if we’re fair, ‘Web 2.0’ wasn’t some new iteration, a new version of something that was different before. It was largely reuse of existing web tech, like HTTP and XML. Exciting reuse, for sure. But a lot of it already existed in non-commercial forms before the phrase ‘Web 2.0’. Not everyone knows, but the first web browser, WorldWideWeb, was meant to be both viewer and editor. That’s quite ‘user generated’, I would say. So, what did Web 2.0 mean? ‘Web 2.0 is, of course, a piece of jargon, nobody even knows what it means’, Sir Tim Berners-Lee commented in an interview with IBM at the time.
Like ‘Web 2.0’, ‘web3’ (and I’m not sure what’s with the removed space and dot, or the lowercase ‘w’) is just a marketing phrase. Definitions of ‘web3’ seem to be all over the place. From what I gathered, it is a vision of a web that runs largely on the blockchain, in order to make ‘owning’ assets better for the people who create them and the people who purchase them, by cutting out middlemen. This vision is not to be confused with the Semantic Web, which was also called Web 3.0 and discussed years before (see an article from 2006).
Here’s the thing: there is no institution that regularly releases new versions of the web and recently happily announced this one. Instead, the phrase ‘web3’ was coined in 2014 by a co-inventor of a blockchain technology, and has since been used by crypto asset enthusiasts and certain venture capital firms for what is, some argue, close to Ponzi schemes and, in its current form, very environmentally unfriendly. It also puts vulnerable people at risk (see Molly White’s web3 is going great for examples of those claims).
I’ll keep my issues with ‘web3’ for a later post. For now, I just wanted to make the point that it’s unfair to claim a version number for the web for a specific set of innovations you happen to like. There are many ways the web evolves. Sometimes they involve the kinds of technology that ‘web3’ proponents use, but usually they don’t. These are some web innovations I like:
- New standards are written to harmonise how web tech works and invent new tech responsibly and across user agents, like CSS Grid Layout, WebAssembly and map/filter/reduce in JavaScript
- New companies start useful services, like payment integration services and ‘content as data’ headless CMSes
- Individuals start blogging on their personal sites, on which they own their content
- Governments and organisations roll out reliable, useful and accessible authentication services (like DigiD)
- Video conferencing companies bring their software to the browser
- Adobe brought Photoshop to the browser
- A couple of Dutch museums put their entire catalogues of art online (eg the Rijksmuseum, Stedelijk and Van Gogh Museum)
Maybe that list is a bit random; you probably have a list of your own. Many of these things are working just fine. I could personally go on and on about some very useful plans for the web. There are also lots of unsolved problems, like the lack of web accessibility or Facebook’s business model. In other words: most of the web is fine, cool things are planned for it all the time, and there are problems that aren’t yet addressed. Frankly, I don’t think we should use version numbers just to market a specific subset of plans and problems for the web. Especially not if that’s such a controversial subset.
Originally posted as The web doesn’t have version numbers on Hidde's blog.
Twitter needs manual language selection
Lots of Twitterers speak languages that are not English. For people who read tweets that are not in English, it is important that these tweets are marked as such. I feel Twitter needs a feature for this.
It would be nice if, when writing a tweet, we could manually select which language the tweet is in, and if Twitter would then use that information to set the appropriate lang attribute on our content:
Sharing a controversial opinion on CSS frameworks in the Dutch language
Twitter is an authoring tool, for which the Authoring Tool Accessibility Guidelines recommend that “accessible content production is possible” (Guideline B.1.2).
The lang attribute
Language attributes identify which language some web content is in. They are usually set on a page level, added to the HTML element:
<html lang="en">
Most developers don’t write these attributes often; the code usually lives somewhere in a template that we don’t touch every day, or ever. But it’s an important attribute: setting it correctly gets your page to pass one whole WCAG criterion (3.1.1 Language of page).
In some cases, we have to set language attributes on individual elements, too, like if some of our content is not in the page’s main language. On the website I built for the British-Taiwanese band Transition, we combine content in Mandarin with content in English on one page:
The Transition “Music” page
We picked en as the main language and set it on the <html> element. This meant we had to mark all Chinese content as zh, in this case zh-TW, as it is specifically Mandarin as spoken in Taiwan. Of course, we could have written this the other way around, too. Usually we want to pick the language that’s most common on the page as the page’s language.
Setting a lang attribute on parts of a page is its own WCAG criterion, too (3.1.2 Language of parts), by the way.
The user need
Setting the language is important for end users, like:
- people who use a screenreader to read out content on a page
- people who use a braille display
- people who end up seeing a default font (browsers can select these based on language)
- people who use software to translate content
- people who want to right click a word in our content to look it up in a dictionary
- people who use user stylesheets
The author need
There is also an author need, both for people who write content and for web developers.
Content editors
People who write content may get browser-provided spellcheckers. They will work better if they know what the content’s language is. I think Twitter.com has somehow turned browser spellcheck off, but there may be Twitter clients or indeed other authoring tools where this is relevant.
Web developers
Language attributes are important for web developers, too, as they allow them to use the :lang() pseudo-class in CSS more effectively.
Some CSS will behave differently based on language. When you use hyphens: auto, the browser needs to look up words in a dictionary to apply hyphenation correctly. It has to know the language for this.
With appropriate language attributes, you can also use CSS features like writing modes and typographic properties more effectively. See Hui Jing Chen’s deep dive into CSS for internationalisation for more details.
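As a sketch of the kind of CSS that relies on correct language information (the selectors and font names here are made up for illustration):

```css
/* Dictionary-based hyphenation: the browser can only pick the right
   dictionary if it knows the content's language from a lang attribute */
p {
  hyphens: auto;
}

/* The :lang() pseudo-class matches elements by their declared language */
blockquote:lang(zh-TW) {
  font-family: "PingFang TC", sans-serif; /* assumed font stack */
}

/* Writing modes: vertical text can make sense for some East Asian content */
.poem:lang(zh-TW) {
  writing-mode: vertical-rl;
}
```

None of these rules can do their job reliably if the language attributes are missing or wrong.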
Automating and lang-maybe
Identifying languages can be automated. In fact, Twitter does this. When they recognise a tweet’s language, they add the relevant lang attribute proactively. See for instance the European Commission chair’s multilingual tweets:
Twitter’s auto-added lang attributes in action
Yay! I think this is very cool (thanks ThainBBdl for pointing this out). The advances in natural language processing are really impressive.
Having said that, any automated system makes mistakes. Vadim Makeev shared:
Yes, sometimes they take my Russian tweets and render them as Bulgarian. It’s not just the lang, they also use some Cyrillic font variation that makes them harder to read.
It is safe to assume such mistakes will skew towards minority languages and miss subtleties that matter a lot to individual people, especially in areas where language is political.
On the one hand, I think it makes sense to deploy automated language identification. With so many users, Twitter can safely assume not everyone would set a language for all of their tweets. People might not know or care (insert sad face here); a fallback helps with that. On the other hand, if this tech exists, might it make more sense for a browser to deploy it rather than an individual website? Why not have the browser guess the content’s language, for every website and not just Twitter?
If browsers did this, Twitter’s lang attributes may get in the way. They kind of give the impression that this information is author-provided. This makes me wonder: should there be a way for Twitter to say their declaration is a guess? lang-maybe?
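There is no lang-maybe attribute in HTML, but as a hypothetical sketch (none of this is Twitter’s actual code, and the detection result shape is assumed), a site could apply a guessed language only above a confidence threshold and flag it with a data attribute so it stays distinguishable from an author-provided value:

```javascript
// Hypothetical sketch: apply an automatically detected language to an
// element, but only when detection confidence is high, and mark the
// attribute as machine-guessed rather than author-provided.
function applyDetectedLang(element, detection) {
  // `detection` is an assumed shape: { code: 'nl', confidence: 0.95 }
  if (!detection || detection.confidence < 0.9) return false;
  element.setAttribute('lang', detection.code);
  // No `lang-maybe` exists in HTML; a data attribute is one possible
  // way to signal that the value is a guess.
  element.setAttribute('data-lang-guessed', 'true');
  return true;
}
```

A browser doing the guessing itself could skip this dance entirely, since it would know by construction which language information came from the author.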
Manual selection
Automated language detection probably works best if it complements manual selection. It could help provide a default choice or suggestion for manual selection, and work as a fallback. So, I’m still going to make the case for a method for users to specify a language manually.
A per-tweet manual language picker would be great as it can:
- give willing authors more control to avoid issues
- ensure the benefits of language identification aren’t limited to users of the majority languages that AI models are best trained on
- let authors express their specific intent
Summing up
For non-English tweets to meet WCAG, they need to have their language declared with a lang attribute. Twitter currently guesses languages, which is a great step in the right direction, but likely of little help to speakers of minority languages. A manual selector would be a great way to complement the automation.
The post Twitter needs manual language selection was first posted on hiddedevries.nl blog | Reply via email
2021 in review
Well, guess what, we’re nearing the end of the year! In this post I’ll share what I was up to this year, as well as some things I learned along the way.
As usual, my year wasn’t all highlights… like in 2020, 2019, 2018 and 2017, the lowlights are intentionally left out of this public post. Ok, let’s go!
Highlights
Work
Throughout the year, for W3C/WAI, I was involved in the launch of the redesigned WCAG-EM Report Tool, worked on a unified layout for the W3C’s accessibility guidance (like Techniques, Understanding and the ARIA Authoring Practices Guide) and did some work around accessibility of authoring tools in higher education, Epub and XR.
This year also was my last at the W3C/WAI. I won’t get into too much detail, but I can say I struggled with the leadership style and decision making process. I had wanted to drive more change from within. I had also wanted to do more to make accessibility more accessible, but the standards game seems easier to play for folks who have been in it longer. When my contract neared the end I decided not to extend. I am grateful for the opportunity though, and was able to learn a lot about standards, web accessibility and the web, and contribute to a wide range of interesting projects.
Besides my W3C work, I also did over 25 WCAG conformance audits, mostly for governments, and a couple of in-house workshops, including two for the teams building the Dutch national citizen’s authentication system (DigiD).
I also worked with Eleven Ways on an advisory project for the European Commission and a project on virtual event accessibility. Lastly, I just started a short stint with Mozilla, to help with some accessibility aspects of the upcoming MDN redesign.
In February, I will start a new job 🎉, which I am super stoked about. Like my role at the W3C, it focuses on outreach to developers and will let me continue to work on making it easier for developers to build things. I am excited, because it is at a product company that works on solving a very real and interesting problem, and it is very authoring tool related.
Speaking
This year included one in-person event, and I was so happy the covid-shaped stars aligned this time for me. I had recently had my vaccine, the European winter wave had not yet exploded and 2G entry rules applied.
The other talks I did were all remote:
- Accelerating accessibility in a component based world at SimpleWebConf (June), JSConf India (November), BLD Conf (November) and Git Commit Show (November)
- Could browsers fix more accessibility problems automatically at a11yTO (October), Accessibility Club Meetup (November) and Tech A11y Summit
- More to give than just the div: semantics and how to get them right at Web Directions: Access All Areas (October) and Beyond Tellerrand (November)
- Procuring accessible software for e-learning: how ATAG can help with my colleague Joshue O’Connor at WP Campus Online (September)
You can like and subscribe to this stuff on YouTube should you wish to.
I also joined the Gebruiker Centraal (“User Centered”) podcast of the Dutch government (WCAG What?) and Ben Meyer’s Some Antics livestream (Audit Site for Accessibility).
Reading
I read 54 books this year. I created a website for this, as I love judging books by their covers; now you can too, on books.hiddedevries.nl. This is updated manually—if you want real time updates, add me on Goodreads.
Below are some books I particularly liked. They have in common that they are set in parts of the world I have travelled to and would have liked to return to if it wasn’t for the pandemic.
- If I had your face by Frances Cha – this is about four women in Seoul and their lives, obsessions and friendships.
- Fake accounts by Lauren Oyler – what if you find out your partner is actually a conspiracy theorist with a secret Instagram account? A funny and smart satire of contemporary life, partially set in Berlin. It got very mixed comments on Goodreads.
- Intimacies by Katie Kitamura – this book’s main character works at the International Criminal Court in The Hague—ok I have travelled there during the pandemic… it uses precise language, but also shows how much precision of language matters. There was a scene in which people buy old books just for the purpose of decorating their house.
- Kafka on the shore by Haruki Murakami – I reread Kafka on the shore, set in Yamanashi and Takamatsu, Japan, and, like a lot of Murakami books, it involves a road trip, lots of music and lots of cats
- Free food for millionaires by Min Jin Lee – set in New York, describes a college graduate plotting her future, describing both her generation and her parents, jumping between life’s different environments
Writing
I didn’t write a lot, but have not stopped posting either—this is the 20th post of this year. Things I wrote about included:
- the “metaverse” and why real life is better – I guess I am sceptical of a lot of the ideas posed by Zuckerberg in this year’s Facebook event
- numbers – I get asked a lot how many people have disabilities and I feel that’s the wrong metric
- “normative” in WCAG – a lot of my work this year focused on redesigning WCAG-related guidance for W3C/WAI, and part of my research showed users struggle with deciding which guidance is “required” to meet WCAG (spoiler: most of WCAG’s main text is, most other stuff isn’t)
- components and what to look for – I love component systems and feel strongly they can contribute to a more accessible web. However, the opportunity they provide is neutral: if components contain barriers, they could make the web less accessible too, so we want to use components that repeat accessibility, not inaccessibility
- I would love accessibility statements in App Stores and wrote about that
Cities
I stayed within the borders of The Netherlands for another year, with one exception: a short train ride across the Netherlands-Germany border for the first Beyond Tellerrand in 2 years.
Things I learned
Towards the end of the year I try to think about what I learned. A lot of this year’s learnings were quite specific to me, but here are some random ones that might interest others:
- One aspect that makes accessibility of XR tricky is that virtual worlds usually don’t have DOM nodes like web pages do. Accessibility trees are based on DOM nodes; without those, XR worlds will need some other mechanism to define accessibility metadata (more about XR Accessibility User Requirements).
- There is a standard for Epub accessibility, but none specifically for native app accessibility, even though some organisations in European Union Member States have been required to make their apps accessible since earlier this year.
- I was late to the Eleventy party, but eh, Eleventy is nice. I released two Eleventy-based projects: my books site and a starter pack for WCAG reporting using Eleventy.
- Open sourcing a thing can make it better! Months after I released my Eleventy WCAG reporting thingy, friendly folks contributed translations into Portuguese, Finnish, German and Spanish. This is only a super small project, but releasing it turned out much nicer than keeping it to myself ever could have.
- Standardising design system components, as in making common design system needs somehow part of the web platform, is fun and useful. It could mean the world to accessibility and developer experience. It’s also hard. I am trying to contribute to some of this through the Open UI CG and I learn lots in the meetings and issues.
- I found it helpful to apply at multiple companies when I looked for my next role. I wasn’t sure if I wanted to, at first, but was glad I did, as it let me talk to many different companies and gave me a lot more decision points to compare towards the end of the process.
Anyway, thanks so much for reading my blog this year and sharing posts with others. It means a lot to me. I wish you the best for 2022!
The post 2021 in review was first posted on hiddedevries.nl blog | Reply via email