Reading List
The most recent articles from a list of feeds I subscribe to.
Naming things to improve accessibility
One thing you can do to improve the accessibility of your work is to always ensure things have accessible names. Unique and useful names, ideally, so that they can be used for navigation. In this post I’ll explain how browsers decide on the names for links, form fields, tables and form groups.
This post is based on part of my talk 6 ways to make your site more accessible, that I gave at WordCamp Rotterdam last week.
Accessibility Tree
When a user accesses your site, the server will send markup to the browser. This gets turned into trees. We’re probably all familiar with the DOM tree, a live representation of your markup, with all nodes turned into objects that we can read properties of and perform all sorts of functions on.
What many people don’t know is that there is a second structure that the browser can generate: the accessibility tree. It is based on the DOM tree and contains all meta information related to accessibility: roles, names and properties. Another way to say it: the accessibility tree is how your page gets exposed to assistive technologies.
Assistive Technology
Assistive Technology (AT) is an umbrella term for all sorts of tools that people use to improve how they access things. For computers and the web, they include:
- alternate pointing devices, like a mouse that attaches to a user’s head
- screen magnifiers, they enlarge the screen
- braille bars, they turn what’s on the screen into braille
- screenreaders, they read out what’s on the screen to the user
All of these tools, to work efficiently, need to know what’s happening on the screen. To find out, they access Platform APIs, built into every major platform, including Windows, Mac and Linux. The APIs can expose everything in the OS, so they know about things like the Start Bar, Dock or the browser’s back button. One thing they don’t know about is the websites you access. They can’t possibly have the semantic structure of every website built into their APIs, so they rely on an intermediary: this is where the accessibility tree comes in. It exposes your website’s structure. As I said, it is based on the DOM, which is based on our markup.
A handy flow chart
The accessibility tree exposes roles (is this a header, a footer, a button, a navigation?), names (I’ll get into those in a bit), properties (is the hamburger menu open or closed, is the checkbox checked or not, et cetera) and a number of other things.
If you want to see what this looks like on a site of your choosing, have a look at the Accessibility Panel in Firefox Developer Tools, or check out the accessibility info boxes in Chrome, Safari Tech Preview or Edge developer tools.
Accessible name computation
Names are one of the things the accessibility tree exposes for its objects. A thing’s name is derived from its markup, and many aspects can influence this. If you want to know the details, check out the Accessible Name and Description Computation specification.
Unique names help distinguish
Before going further into how to expose names, let’s look at which names we want, because what the names are is crucial to whether they are accessible or not.
What if your family had four cats, and each of them was named “Alice”? This would be incredibly impractical, as it would make communication difficult. “Has Alice been fed yet?”, you might wonder. “Is Alice outside?”, you might ask your partner. Ambiguity is impractical. Yet this is what we do when our homepage has four news items, each with “Read more” as its link text.
Imagine all of your cats were named Alice (photo: stratman2 on Flickr)
This is very common, sadly. In the WebAIM Million project, in which WebAIM looked at over a million sites and ran automated accessibility checks, they found:
24.4% of pages had links with ambiguous link text, such as ‘click here’, ‘more’, ‘continue’, etc.
Reusing “Read more” as the link text for each news item makes our code and content management simpler, but it provides bad usability for screenreader users. When they use the link shortcut to browse through the links on the page, they will have no idea where each link leads. In the example above, when you ask an AT to read out all links, it will read “Link Read more, Link Read more, Link Read more, Link Read more”.
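A hypothetical sketch of the difference (the URL and wording are made up):

<!-- ambiguous: reads as just “Read more” in a list of links -->
<a href="/news/cat-show">Read more</a>

<!-- unique: tells AT users where the link leads -->
<a href="/news/cat-show">Read more about the cat show</a>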
Naming things
So, unique and descriptive names are useful to AT users. Let’s look at which HTML can help us provide names. As I said before, the heuristics for determining names are in a spec, but with just HTML, providing names for most things is trivial. The following section is mostly useful for people whose HTML is rusty.
Links
The contents of an <a> element will usually become the accessible name.
So in:
<a href="/win-a-prize">Win a prize</a>
the accessible name would compute as “Win a prize”.
If there’s just an image, its alt text can also get used:
<a href="/win-a-prize">
<img src="prize.png" alt="Win a prize" />
</a>
And, to be clear, if there’s nothing provided, the name would be null or the empty string, so some people would be unable to win any prize.
Form fields
Form fields get labelled using the <label> element. In their aforementioned research, WebAIM also found:
59% of form inputs were not properly labeled.
Let’s look at what a labelling mistake could look like:
<div>Email</div> <!-- don't do this-->
<input type="email" id="email" />
In this example, the word “Email” appears right before the input, so a portion of your users might be able to visually associate that they belong together. But they aren’t programmatically associated, so the input has no name: it will compute as null or '' in the accessibility tree.
Associating can be done by wrapping the input in a <label> element, or by using a for attribute that matches the input’s id attribute:
<label for="email">Email</label> <!-- do this-->
<input type="email" id="email" />
Tables
To give a table a name, you can use the <caption> element. It goes in as the first element inside the <table>.
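For example, in this sketch (the caption text and data are made up), the table’s accessible name computes as “Cat feeding schedule”:

<table>
<caption>Cat feeding schedule</caption>
<tr><th>Cat</th><th>Fed at</th></tr>
<tr><td>Alice</td><td>08:00</td></tr>
</table>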
Groups in a form
Within forms, you sometimes want to group a set of form inputs, for example a collection of radio buttons or checkboxes that answer the same question. HTML has <fieldset> for grouping form elements. To name such a group as a whole, use the <legend> element:
<fieldset>
<legend>Don't you love HTML?</legend>
<input type="radio" name="yesno" id="yes"/>
<label for="yes">Yes</label>
<input type="radio" name="yesno" id="no"/>
<label for="no">No</label>
</fieldset>
If you were to inspect this fieldset in the accessibility tree, you would notice that the group is now known as “Don’t you love HTML?”.
What about ARIA?
Those familiar with the Accessible Name and Description Computation spec might wonder at this point: doesn’t ARIA also let us give elements accessible names? It totally does, for instance through the aria-label
/ aria-labelledby
attributes. When added to an element, they overwrite the accessible name (if there was one).
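For example, in this sketch (the button content is made up), the accessible name computes as “Close dialog”, not “✕”:

<button aria-label="Close dialog">✕</button>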
Good reasons to prefer standard HTML tags over ARIA include:
- better browser support (a lot of browsers support most ARIA, but all support all HTML, generally speaking)
- more likely to be understood by current or future team members that don’t have all the ARIA skills
- less likely to be forgotten about when doing things like internationalisation (in your own code, or by external tools like Google Translate, see Heydon Pickering’s post aria-label is a xenophobe)
Sometimes ARIA can come in handy, for example if an element doesn’t play along well with your CSS (like if you want a Grid Layout in a fieldset), or if your (client’s) CMS is very inflexible.
It’s the markup that matters
In modern browsers, our markup becomes an accessibility tree that ultimately informs what our interface looks like to assistive technologies. It doesn’t matter as much whether you’ve written this markup:
- in a .html file
- in Twig, Handlebars or Nunjucks
- as the <template> in a Vue Single File Component
- exported in the JSX of your React component
- outputted by a weird legacy CMS
It is the markup that determines whether your site is pleasurable to experience for AT users. In short: it’s the markup that matters.
There’s a good chance your site already uses all of the above HTML elements that name things; they have existed for many years. But I hope this post explains why it is worth the effort to always ensure the code your site serves to users includes useful names for assistive technologies. Markup-wise it is trivial to assign names to all the things on our sites; the real challenge is probably twofold. It’s about content (can we come up with useful and distinguishable names?) and about tech (can we ensure the right markup gets into our users’ DOMs?).
Originally posted as Naming things to improve accessibility on Hidde's blog.
Content and colour at #idea11y in Rotterdam
Last night I joined the ‘inclusive design and accessibility’ meetup (idea11y) at Level Level in Rotterdam, to learn more about optimising content and colors.
When I talk about accessibility, I often approach the subject from a front-end code perspective, and I make a point of adding a disclaimer: this is only one of the many aspects of ensuring the accessibility of a website. Content and colours are two other very important aspects that are essential to get right, so it was great to hear Damien Senger and Erik Kroes talk us through those subjects.
Content
Damien Senger, designer and accessibility advocate at Castor EDC, talked about cognitive impairments: ‘they are interesting, because they are often invisible’ (Damien’s slides). This is a great point; it also implies that these aspects are harder to catch in automated tests. Lighthouse can’t warn about them like it can about ill-formatted ARIA. Cognitive impairments we can optimise for include dyslexia, autistic spectrum disorder and ADHD. Users with such impairments benefit from less content, better organised content and more readable text.
Damien explained that we read not word by word but by ‘saccade’, jumping between parts. He also showed how shapes are important, and how a lack of them is bad for readability, for instance in all caps or justified text. These four C’s help, Damien said: continuity (repeat the same information for reassurance), conspicuity (make it obvious), consistency (use the same wording for the same things) and clarity (ensure copy is understandable).
Damien gave lots of tips for designing, on four levels:
- micro typography: too much contrast can make letters dance for dyslexics, so dark grey is preferred over black; breaking up highlighting is bad, as people use it to see where they are
- macro typography: think about heading hierarchy and group related things
- layout: ensure consistency of layout and always offer multiple ways to find content
- global: avoid distracting users with things like autoplaying videos; reduce content
Colour
Erik Kroes, accessibility specialist at ING, talked about colour, specifically what colour contrast means and how to create a for-screen colour palette for your brand that can be used with black and white accessibly.
Most people will know that colour contrast is important and use tools like Lea Verou’s awesome Contrast Ratio. For AA-level conformance, WCAG 2.1 prescribes contrast ratios of 3:1 (large text) and 4.5:1 (other text). These tools simply tell us whether our colours are within range. Erik wanted to know more than just the numbers and had great feature requests (‘why do none of these tools show contrast with black and white at the same time?’).
Most of the talk was about creating colour schemes that lead to sufficient contrast ratios. Erik showed how HSL doesn’t really help when working out a contrastful scheme… to really get into finding colours with enough contrast, we need to look at ITU-R Recommendation BT.709, a standard from the 90s that describes colours for television screens and takes the human eye’s workings into account (‘useful luminance’, Erik called it). I had no idea that this is what WCAG colour contrast is ultimately based on. The reason this 30-year-old standard is still somewhat suitable is that screens still use sRGB and human eyes are still the same.
Erik explained that in WCAG 2.1, ratios range between 1:1 (zero contrast) and 21:1 (black on white). What I never realised, and am still unsure I fully understand, is that when a colour has a ratio of 3:1 on a white background, it has a 7:1 ratio on a black background, because 21 divided by 7 equals 3. The maximum of 21 means that the set of accessible colour combinations is limited.
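For those as curious as I was, here is a minimal JavaScript sketch of the relative luminance and contrast ratio formulas from WCAG 2.1 (the function names are made up):

// sRGB channel value (0–255) to linear light, per WCAG 2.1
function linear(v) {
  v /= 255;
  return v <= 0.03928 ? v / 12.92 : ((v + 0.055) / 1.055) ** 2.4;
}

// relative luminance of an [r, g, b] colour
function luminance([r, g, b]) {
  return 0.2126 * linear(r) + 0.7152 * linear(g) + 0.0722 * linear(b);
}

// contrast ratio between two colours, from 1 to 21
function contrast(a, b) {
  const [light, dark] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (light + 0.05) / (dark + 0.05);
}

Since white has luminance 1 and black 0, a colour’s contrast with white multiplied by its contrast with black is always (1.05 / (L + 0.05)) × ((L + 0.05) / 0.05) = 21, which is why 3:1 against white pairs with 7:1 against black.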
To help with finding colour schemes that work well with black and white text, Erik recommended the Contrast Grid tool. He also works on his own tool to meet his contrastful colour needs; it looks very exciting (and is Web Components based).
Erik’s tool
Wrapping up
This was a great evening, thanks to Job. Rumour has it that there will be another one next month, so I am looking forward to that!
Originally posted as Content and colour at #idea11y in Rotterdam on Hidde's blog.
Book review: The age of surveillance capitalism
This week I read The age of surveillance capitalism by Shoshana Zuboff, an enlightening book about how big tech companies are watching what we do, and what their end game might be. I thought I had heard all of this before, but, of course, I couldn’t have been more wrong.
Beyond “You are the product”
You’ve likely heard people say ‘if a product is free to use, you are not the customer, you are the product’. It’s a bit of a cliché now, one that Zuboff completely reframes in The Age of Surveillance Capitalism. We aren’t exactly the product… we, or more precisely the data that describes our behaviour, are the raw material of a whole new iteration of capitalism.
The book is huge, but only just over 500 pages, the rest is footnotes
The Age of Surveillance Capitalism explains this new iteration in the context of early capitalism (Ford’s, and even the age of guilds). Zuboff explains how the founders of Google, and later Facebook and others too, set the scene: how they found use for data that previously looked useless, their secrecy about the size of their data collection and their lobbying against rules. Yup, they prefer to make their own rules. And they will insist that their way is the only way, which the world just needs to accept: Zuboff coins this ‘inevitabilism’.
This attitude of setting their own scene is a recurring theme throughout the book. For instance, when Facebook does behavioural research, it doesn’t follow “The Common Rule”, Zuboff explains, which academic experimenters apply to themselves to avoid abuse of power. This abuse in a corporate setting could be much worse than in an academic one, as there is more conflict of interest in play (in the corporate setting there is a need to earn profits for shareholders).
A new iteration of capitalism
So what’s this new iteration of capitalism? Zuboff explains that in this new paradigm, data is the raw source or ‘commodity’, extraction of it the production process (a supply route that the big tech companies work hard to protect) and predicting people’s behaviour is the product.
The production process (‘extracting’ data from people) has expanded massively over the past years, as tech corporations found new ways to get behavioural data (‘behavioural surplus’, as Zuboff calls it). They offer more and more services, sometimes increasing their speed by purchasing other companies (Facebook bought WhatsApp and Oculus, Google bought YouTube). And it isn’t just online either. Tech companies discovered there is lots of data to be gathered in the real world, through physical things with internet connections. Google Maps cars that ‘accidentally’ collect WiFi hotspot data. Hotel rooms that come with Alexa built in. And sometimes they give rewards in exchange, like an insurance company that offers a lower rate in return for placing a monitoring device in your car.
There is a notion in the book that we accept too much without foreseeing the full picture. At some point Zuboff describes the nature of contracts online: legally they likely cover all they need to, but is it reasonable to assume that anyone ever reads them? What if they refer to other contracts? The large scale data extraction operation, which Zuboff later calls Big Other, may be a case of horseless carriage syndrome, she suggests. When the first car was invented, we were used to horses and carriages, and so that was how we tried to name and understand the new phenomenon: as a horseless carriage.
The dystopia of a data-driven society
The latest big thing isn’t just studying behaviour, it is influencing behaviour, too. Experimenters at Facebook found they could hugely influence whether people would go and vote, using techniques from behavioural psychology. With these techniques, humans become mere tools. Zuboff introduces the notion of instrumentarianism, which she compares to, and notes is quite different from, totalitarianism. Both overwhelm society and exert some sort of social domination, but for different reasons: the market versus the political realm.
Zuboff shows how a world in which surveillance capitalism plays the role it plays in ours is somewhat dystopian. She compares it to Orwell’s and Skinner’s. One of the aspects that makes it dismal is that humanity gets reduced away. The tech leaders are utopians and believe in a “data-driven society”, Zuboff says. They insist that that is ‘practical and feasible but also a moral imperative in which the benefits to the collective outweigh all other considerations’ (438). People like Pentland don’t believe in individuals with their own opinions and attitudes to life… if we see others do something, we’ll copy it; we are easily conditioned.
Is this a conspiracy theory?
I have to ask, as last week a group of Dutch actors accused of spreading hyperbolic conspiracy theories held up the book in a talk show as their bible. It contains some strong claims and assumptions about people’s intentions, but overall Zuboff’s book is very thorough. Her claims and statements are consistently supported by extensive footnotes.
A lot of the practices she asks questions about actually happen. Tech leaders don’t deny them, and sometimes even boast about them, Zuboff shows. This is in the realm of private companies. What if the instruments of these private companies get used by authoritarian governments? Zuboff talks about how private companies helped American intelligence agencies with all sorts of technology. China’s Sesame Credit system rates people’s character and gives or denies them perks, including things like purchasing rail tickets and using dating apps. The numbers are staggering. Again, these are not conspiracies; they actually happen.
It is also important to note that this argument isn’t against capitalism; it is against systems that break basic principles of democracy and human autonomy.
Conclusion
The age of surveillance capitalism is a great read; be sure to get a copy if you want to understand what surveillance capitalism is and could become. The book does a great job of demonstrating what Zuboff presumes are the dynamics behind the internet economy. It’s convincing, and thought-provoking. It has gotten me worried, but also strangely positive about which world I want to fight for. It’s not an easy or quick read, as it is an academic book and sentences tend to get long and detailed. Alternatively, you can hear Shoshana Zuboff talk about surveillance capitalism at The Intercept or in her interview with Mozilla’s IRL podcast.
Originally posted as Book review: The age of surveillance capitalism on Hidde's blog.
Component frameworks and web standards
At last year’s Fronteers Conference, I had numerous conversations about the merits of using client-side frameworks for virtual DOMs, declarative templating and routing, versus a classic, vanilla approach to building websites. I had always been a vanilla-fan, but built a recent project in a framework, liked that too, and, well… I think they aren’t the polar opposites that they used to be in my head!
This post has three parts: in the first, I look at what I like about “the web standards stance” or a “vanilla approach”. In the second, I share what I liked when I used a JavaScript component framework. In the last part, I look at whether these two approaches are actually different: maybe I assumed a false dichotomy?
The web standards stance
As I see it, Fronteers Conference has traditionally been very web standards focused. We had no talks about Sass until years after it came about. We did hear about new CSS specs years before they were in browsers. Standards over abstractions. Progressive enhancement and accessibility have been recurring themes from the very first event. I’m not saying any of these things are superior over others, just that learning about them influenced me and many other regulars in our thinking about the web.
Personally, I have until recently managed to avoid JavaScript frameworks in projects. Did I miss out? Maybe. I did my front-end work in teams with a rather traditional stack. Often, they used Node to turn abstractions of HTML, CSS and JavaScript into browser-ready code. But the browser would not be sent much JavaScript and there would be no virtual DOMs.
It’s applying what’s in web standards directly, without a framework that wraps them. I’ll call this the ‘vanilla’ approach. It is basically a very minimal one, with no or few dependencies: hand-written solutions for just the problems at hand. Some call it ‘not invented here syndrome’, which has a negative connotation, but there are also downsides to whatever the opposite of ‘not invented here’ is (packages can go bad, and what about Christmas effects in production?).
I believe going vanilla is a totally valid approach to making sites. Large companies do it: GitHub, who even open sourced their components, but also Netflix, Marks and Spencer and others. Going vanilla is controversial too, I found. When Sara Soueidan tweeted:
It’s really depressing that most useful tools these day are made for React projects and/or require React knowlegde to set up and use. This locks out many of us who are not using React for everything & who still prefer the vanilla route for their projects.
she got various people telling her she should learn React, some very unfriendly. But vanilla is a fine choice too, and more cross-compatible, as her point proves: a vanilla component or library could work across multiple ecosystems, or outside any ecosystem. A vanilla approach assumes the web itself as its ecosystem or platform.
It’s not just easier cross compatibility though. I have seen in real world projects, including some for very large organisations, that these traditional, vanilla stacks let us have fast, accessible and maintainable sites. By default.
- Fast, because (JavaScript) code you don’t ship to the user is code they don’t have to download and their browsers don’t have to parse. See also the performance.now() keynote of Steve Souders, “grandfather of web performance”, and Alex Russell’s piece on ‘ambush by JS’: Can you afford it?
- Accessible, because the web is accessible by default. Because web standards are standards, they have the biggest chance of being the thing that makers of assistive technologies base their tech on. Using a select rather than a library that customises one and makes it inaccessible in the process (happens) increases the chance of the component working for more people.
- Maintainable, or I should say, maintainable for potentially a longer time, because if you depend on fewer things, there is less chance your project stops working because of a third factor.
I’m not saying you can’t have these things if you use frameworks. As I said, from real world projects I learned that traditional, minimal dependency stacks have these benefits by default, but you can also get them with a framework: giving users more speed and accessibility is the goal of some frameworks, and there are lots of people in the framework world who are very effective at improving such aspects through framework code.
(Yup, I find accessibility and speed, i.e. aspects that users experience, the most important criteria for judging my tools)
Having said all this, I tried a framework and I loved it! Probably not a surprise, as almost the whole world seems hooked on frameworks.
I tried a framework
In the last few months I built my very first framework-based front-end, in Vue.js. I complemented it with a router, a store and a GraphQL library, in order to have, respectively, multiple (virtual) pages, globally shared data and a smart way to load new data in my templates.
Those of you who work with front-end frameworks will likely nod along to the above description. Others may feel confused and/or start laughing. Multiple pages, global scope and loading data can be done in web-default stacks with vanilla code, in way fewer lines: multiple pages by literally creating pages and links between them, global scope by putting stuff in the window object (scoped to your app if you like), loading data with plain old XHR or fetch(). Why let users download lots of kilobytes of framework code, if the web already has the functionality?
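A minimal vanilla sketch of those last two things (the endpoint, element id and app name are made up):

// globally shared data: one object on window, scoped to the app
window.myApp = { items: [] };

// loading data with plain fetch(), then rendering it
fetch('/api/items')
  .then((response) => response.json())
  .then((items) => {
    window.myApp.items = items;
    document.querySelector('#list').textContent = items.join(', ');
  });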
Well, let’s start with things I learned to love. If you have worked with front-end frameworks, you probably want to skip ahead.
Routers
Routers are pretty useful abstractions of which types of pages exist and what data they can have. It took me a lot of uneasy feelings to replace the plain old <a>’s in my site with instances of Vue Router’s <router-link> component.
For those unfamiliar with what this means: in an <a> you use the href attribute to say where the link goes. A <router-link> points to a predefined named route, and accepts in its to attribute objects for settings that you invent, as well as query parameters.
So if you’d write an <a> to go to a page with parameters:
<a href="/search?query=css&category=tutorials">
CSS
</a>
you can instead define it as a <router-link>, a special component that comes with the Vue Router library:
<router-link :to="{
name: 'SearchResult',
query: {
query: 'css',
category: 'tutorials'
}
}"/>CSS</router-link>
(note this is a difference in the template only; the DOM and accessibility trees will only ever see something like the first)
Either is pretty readable. The router link is more abstract: if you don’t construct the URL yourself, you may make fewer mistakes, and it may be easier for others to see what’s happening. This is not a client-side framework advantage, of course; you can find similar concepts on the server side, like in Flask routing. Routing is as old as the web itself.
The major advantage isn’t readability, it is that the client-side router ties in with the state your website is in. Because it has a fuller picture, it can make useful decisions, like only reloading parts of the page. If you link to the same type of page, it may just update the difference in content. This does come with some focus and scroll management challenges, but that’s for another post.
Real components
Ask five people on a web team what a component is, and you may end up with wildly different definitions. Even in our world where most web teams made components central to their way of working, we struggle to mean the same by the word.
As a front-end developer, I’ve long thought of components as a blob of markup, with optional corresponding style and/or script, all living in the same folder. They would have a shared classname on the root element, which would be unique to instances of that component. Just by being in the same folder, they’d be a component for anyone working with the code. They exist because the team agrees not to clash names.
In declarative JS frameworks like Vue and React, components are defined as components. You define each component in the syntax the framework provides. This is arguably more rigid and meaningful than the ‘being in the same folder’ way of defining components I described above. In the case of Vue’s ‘single file components’, the JS knows a component’s name and possible properties (and their types). For associated scripts, it knows the component’s methods, life cycle hooks (a fancy word for: scripts to execute before or after component markup exists in the DOM) and event handlers. For styling, it also knows the component’s associated CSS.
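A minimal sketch of what such a single file component can look like (the component, property names and markup are made up):

<template>
  <article>
    <h2>{{ title }}</h2>
    <a :href="url">Read more about {{ title }}</a>
  </article>
</template>

<script>
export default {
  name: 'NewsItem',
  props: {
    title: { type: String, required: true }, // typed properties
    url: { type: String, required: true }
  },
  mounted() {
    // a life cycle hook: runs after this markup exists in the DOM
  }
};
</script>

<style scoped>
article { margin-bottom: 1em; } /* CSS associated with just this component */
</style>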
Reactivity
All template languages offer some form of variables:
<h1>{{ header }}</h1>
The text inside the h1 becomes whatever header resolves to. When there is reactivity, variables aren’t resolved once, but will update when they need to. For example, if a component is expanded, the value of expanded can update from true to false at a programmatically determined time, following whatever logic your component has for expanding. In online banking, the value for totalBalance could be reactive and change on some event that is triggered by money coming into the bank account. The credit card logo URL could be reactive and change from mastercard.svg to visa.svg based on what credit card number is being typed in. That sort of thing.
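A minimal sketch of that expanded example (all names are made up; the template re-renders whenever the value changes):

<template>
  <div>
    <button @click="expanded = !expanded">Menu</button>
    <nav v-if="expanded">…</nav>
  </div>
</template>

<script>
export default {
  data() {
    // reactive: updating `expanded` updates the template automatically
    return { expanded: false };
  }
};
</script>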
Easy to get started
Both Vue and React have a command line tool that creates a project with lots of stuff built in (create-react-app and create-vue-app). This scared me at first, as I was afraid it would by default put stuff into bundles I would send to my users. It turns out a lot of what create-*-app setups offer is build and testing tools. My least favourite part of making websites is configuring tooling; a ‘just works’ default is good enough for my needs.
Also, wow, there is some very good documentation for these frameworks, that explains just enough to get started, but also lets you dive much deeper if you want or need to. It’s all very welcoming and nice.
Thankfully, I also had great colleagues at work and peers at Fronteers Slack that I could direct questions to, have code reviewed by, et cetera. If you want to get started with a framework, I do recommend joining a Slack group for the framework or your area.
What I liked less
Backwards compatibility
Web standards are extremely backwards compatible. Very rarely do HTML tags get deprecated (hi <blink>!), almost never do DOM methods or ECMAScript features get taken out. This makes sites built with vanilla code future proof. You will likely have to do security updates to old sites, but it is unlikely you’ll have to rewrite your HTML because someone decided fieldsets are no longer supported.
If you’re using a framework, the features you rely on are more likely to disappear. When Vue first came out, one would loop through an object like this:
<ul id="tags">
<li v-repeat="tags">
</li>
</ul>
(example from Vue 0.11)
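Then v-for was introduced; the same loop in current Vue syntax would look roughly like this (a sketch, assuming tags is an array of strings):

<ul id="tags">
  <li v-for="tag in tags" :key="tag">{{ tag }}</li>
</ul>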
My point is, this sometimes happens. Frameworks come up with better paradigms (see also Hooks and Suspense in React). When your site wants to update to a new version, your team needs to rewrite, and know what to rewrite. All of Twitter is like “I’ve rewritten this project X to use Hooks”, but this is not trivial, even less so if it is the complicated codebase of your large corporate client.
There are lots of organisations out there using Angular 1 in projects, and I see them struggle to find developers willing to do the work; developers have since moved on to newer frameworks. The orgs are stuck with outdated paradigms that are incompatible with the now. For freelance front-end developers in The Netherlands, recruitment calls for a certain bank that went all-in on Angular may sound familiar?
I feel this is a real difference between ‘just’ using web standards and buying into a framework. There is legacy vanilla code too, of course. But if that code is, as I described in the introduction, very minimal, few-dependency code, maintaining it is consequently less involved.
Different origins
I love the origin story of the web, and the idea that it was further developed at an independent consortium and groups like the WHATWG. The origins of the big component frameworks are big corporations like Google (Angular) and Facebook (React), corporations that are more for profit than for humans, even though the projects themselves are open source and have volunteer communities full of great humans.
A false dichotomy?
As I described above, there are lots of things to like about using modern JavaScript frameworks to build websites, and a couple of things to moan about. But let’s now look at how that approach differs from ‘just’ using web standards and vanilla code, the approach that, as I said, gives us good things like accessibility and performance by default. If there is a difference at all, because maybe frameworks versus web standards is a false dichotomy?
De-facto standards
Various frameworks from the past have inspired web standards. jQuery’s selector engine Sizzle inspired querySelector/querySelectorAll. The modern component frameworks seem to converge towards standard ways of working, both in syntax patterns and in broader concepts, like routing and reactivity. These are not in specs, but they could inspire new web standards. As far as I’m aware, nobody is standardising concepts like reactivity, but there is for example this proposal for declarative routing in Service Workers, and among the new proposals in ECMAScript there are many ideas inspired by how people write framework code, like optional chaining. Still, epic standards battles to make features happen are epic.
Framework-browser collaboration
Framework engineers and browser makers collaborate, said React engineer Andrew Clark:
Our team has a regular weekly meeting with Chrome engineers. Sometimes we meet more than once! We’re collaborating on display locking, main thread scheduling, resource prioritization, and more. Great relationship, much credit to @stubbornella and @shubhie
Chrome PM Nicole Sullivan confirmed this:
My favorite part of my job is collaboration with frameworks. The web platform engineers are beyond excited about the tight feedback loops we’ve developed and are developing. We even added getting framework feedback to our intent to implement process.
I don’t really know of such a direct collaboration between framework authors and the Firefox browser, or other browsers. Such collaborations are quite different from web standards efforts, but I guess they can lead to improved performance and accessibility for large numbers of users. Less “open” improvements, but improvements.
Lots of vanilla code in framework components
If I’m comparing vanilla to frameworks, I should also say that within the framework wrapper, I’ve always ended up using vanilla knowledge. It is still important to know which HTML elements there are and what they mean, because framework-based sites yield accessibility trees just like regular sites do. It’s also essential to have a deep understanding of all things JavaScript (for instance, everything on Chris Ferdinandi’s JS roadmap), because the component frameworks offer just that: a framework to get started. Within that, it’s your choice if you want to solve your problems with your own custom solutions, or use libraries, plug-ins, et cetera. Or, of course, and this is common, do either when it makes sense.
Progressive enhancement
Progressive enhancement and client-side frameworks do seem hard to marry, because by default frameworks do their work in the browser. There are plenty of ways to make servers do the initial render, but if we are honest, this is not trivial. Regardless of where we render, we can and likely should apply a progressive enhancement way of thinking to a framework-based project. To stretch what progressive enhancement means very far: make no assumptions. Not about your users’ devices, not about their permissions, not about their content blockers, not about any of their other choices.
For example:
- if you ultimately want to plot on a map the cars nearest to a user’s location, you could display a list of car locations initially. When users have location abilities, their browser supports exposing those through the location API, and they have granted your site permission, then try and do the plotting.
- in a project I worked on, we generated identicons on the client. The process involved the Web Crypto API, to convert strings into hashed strings. In browsers that don’t support Web Crypto, we fell back to a hardcoded hashed string. Browsers that got the full thing showed avatars based on people’s usernames, so they were unique for each user; other browsers all showed non-unique avatars based on the string ‘user’ (a rough sketch follows after this list).
- you want to lay out a component of your site using CSS Grid Layout. To the 10-20% of browsers that don’t support this spec, you serve a simple fallback, so that all the content is usable, readable and doesn’t look broken.
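A rough sketch of that identicon fallback (the function name and fallback value are made up; the actual project may have differed):

async function identiconHash(username) {
  if (window.crypto && window.crypto.subtle) {
    // full experience: hash the actual username with the Web Crypto API
    const bytes = new TextEncoder().encode(username);
    const digest = await window.crypto.subtle.digest('SHA-256', bytes);
    return Array.from(new Uint8Array(digest))
      .map((byte) => byte.toString(16).padStart(2, '0'))
      .join('');
  }
  // fallback: a hardcoded hash (placeholder value), the same for every user
  return 'a1b2c3d4e5f6...';
}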
Conclusion
I love vanilla code, and things that are invented here: thoughtful developers solving a problem at hand. I’ve sometimes felt worried that working with front-end frameworks like React or Vue would be at odds with that mental model. I feel I was probably wrong there; there is a lot of grey between vanilla and frameworks, and people do all sorts of cool things on either side to get to solutions that are accessible and performant. This is great. I am glad I tried a framework and found its features were extremely helpful in creating a consistent interface for my users. My hope, though, is that I won’t forget about vanilla. It’s perfectly valid to build a website with no or few dependencies.
Originally posted as Component frameworks and web standards on Hidde's blog.
Content-based grid tracks and embracing flexibility
Something I love about CSS is that it lets us define visual design for unknown content. This is kind of magic. We can even size things based on content, with min-content, max-content and auto. This post is about how that works in CSS Grid Layout, and what usage in real projects would mean.
In Grid Layout, there are many ways to size a column or row (I’ll refer to them as ‘tracks’ from here). You can be absolute and define tracks in pixels, or even centimeters, which is pretty useful if you’re doing print work. You can use relative units too, for example relative to the root element’s font size (rem), the viewport (vw, vh, vmin, vmax), the width of a 0 (ch)… any CSS length, really. Of course, all of these sizing methods can be mixed in grid definitions. For example, you could make one column flexible, another absolute and yet another content-based. In fact, that’s often a sensible thing to do.
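For example, a minimal sketch with a made-up class name, mixing an absolute, a flexible and a content-based track:

.page {
  display: grid;
  grid-template-columns: 20em 1fr max-content; /* absolute, flexible, content-based */
}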
Why size at all?
In Grid Layout, you don’t have to size anything. If you don’t define (some) track sizes, their size will be auto, based on the content they need to fit. You could still say which tracks you want with the grid-area syntax, but leave sizing up to the browser. Or you could refrain from defining areas at all, in which case the browser will create tracks for each of your grid items, then size them.
The obvious reason to define some, most or all of your track sizes anyway, is that you have intentions about your layout. You want your content area to have a maximum line length for better readability, or you want your ad bar to be a certain size for business reasons.
Another reason to give some sizes is that it saves the browser from parsing your content in order to figure out track sizing by itself; the spec notes:
when content-based sizing is used on an item with large amounts of content, the layout engine must traverse all of this content before finding its minimum size, whereas if the author sets an explicit minimum, this is not necessary
This is not an issue with small amounts of content, which the spec concurs is trivial, but it is a potential concern if you have lots of grid items or content.
Content versus context
The go-to CSS spec on sizing is CSS Intrinsic & Extrinsic Sizing Module Level 3. It deals with two ways to size: based on context and based on content.
Context-based or extrinsic sizing of an element is sizing based on the container it is in. This is often its parent, or it could be the viewport. The extrinsic size of a block element is the size of that container, minus padding and border. If you set an element to, say, width: 50%, it is said to be extrinsically sized to be half of the container.
Content-based or intrinsic sizing, on the other hand, is sizing based on the element’s content, so it is independent from its context.
Before I continue: one place we already have widespread content-based sizing on the web is in tables. They size based on how much content is in the cells. If you place two tables on the same page, they’ll likely differ in size (like the schedule for day 1 and day 2 on the Fronteers website differ). This happens automagically, unless you start setting explicit sizes.
min-content, max-content and auto
So how do min-content, max-content and auto work? Let’s take the following line of text (from Charles Darwin’s On the origin of species, 24):
Some facts in regard to the colouring of pigeons well deserve consideration.
If it is displayed in a browser using CSS, it will sit in a box. The minimum width of that box in order to fit this content is the size that fully fits the longest word, without ‘overflowing’. In this sentence that would be ‘consideration’. To size the box to that word, CSS lets you use the min-content keyword. If you set width: min-content on the element that has this text, it will be sized something like this:
Some facts in
regard to the
colouring of
pigeons well
deserve
consideration.
Note that if your box contains more than just words, more things influence min-content, for example the largest image or fixed-size box.
The maximum width of that box is the width of this entire sentence. Think of the width it would require if white-space: nowrap was applied. To size the box to the maximum that is required for the content, CSS has the max-content keyword.
Keep in mind that min-content and max-content are all about sizing in the inline direction, the one in which words flow, as opposed to the block direction, in which blocks, like paragraphs, flow.
In CSS, we can use the min-content and max-content keywords as sizes for elements that contain content. In Grid Layout specifically, we can use the keywords to size tracks. Say you’re using min-content to size a column: it will be calculated based on the minimum required for content throughout all the rows in that column.
And then there’s auto. This keyword can mean different things throughout CSS (for a more in-depth look, see fantasai’s Defining auto at CSS Day 2015). For example, in block level elements an auto width takes up the full available width, while in inline level elements, it takes up just the space needed for the content. In grid tracks, it behaves similar-ish to inline elements: it will take up the space of the content. One exception is that if there is a grid item inside the track that has a size, that size can make the whole track grow.
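Put together in one grid definition, a minimal sketch (the class name is made up):

.schedule {
  display: grid;
  /* first column: as narrow as its widest unbreakable content allows;
     second: sized to its content, but allowed to grow;
     third: as wide as its content wants to be */
  grid-template-columns: min-content auto max-content;
}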
Embrace the web’s flexibility
As I said above, I really like content-based sizing, so it bugs me that I don’t see it used a lot in real-world projects. Websites don’t need to look the same in all browsers, right? That theory is convincing, at least in the circles of front-end developers and designers, but clients often expect more sameness across browsers than we’d like. Even within teams, this is common.
Going from websites that look the same in all browsers, what about looking the same with all content? Content-based sizing units are potentially very useful, but will they make sense on production websites? I see two potential hurdles: it could still be hard to incorporate them in our design processes, and they could create purposeful inconsistencies that some may see as mistakes.
Design process
One of the hardest problems in designing for the web is that the designer has to deal with unknowns. Websites have CMSes, so content can change. It’s likely unknown. Users come with all sorts of devices and screens, so canvas size is mostly unknown, too.
CSS does a fantastic job at giving us tools to solve that exact problem. If you want a certain number of characters in a column, you don’t need to know what the characters are in order to get what you want. Just write what the rule is, and the browser worries about how to apply it to actual content. But even if this is solved in CSS, that doesn’t mean it is solved in the tools we design websites in. I have met with designers who write CSS and design in the browser, but design in software like Sketch is also very common.
If we want to use content-based sizing in designs and use design tools that are not the browser, whoever writes the CSS should demonstrate what the web can do. The CSS person can show how flexibly built websites can work better in different languages, on different devices and for different people. Designers and developers can team up, make demos. Browsers can help too, they are improving design tools to expose what’s happening. Tools like the new flexbox inspector in Firefox Dev Tools can bring design and code closer together.
Is intentional inconsistency ok?
What does it mean if subsequent pages end up having slightly (or wildly) different grids? This may create a confusing or “broken” user experience. I think inconsistencies can be very powerful, because they lead to an ideal space distribution. The track that needs most space, gets most space (in an auto scenario). This is ideal for content, but does it yield the ideal for visual design and user experience? I don’t know, maybe?
Content-based sizing could be most effective in components that exist once on a page, so that there are no inconsistencies. Or in pages that are quite unique, like temporary landing pages and one off sites for conferences, exhibitions and the like.
Wrapping up
With the min-content, max-content and auto keywords, we can size grid tracks based on their content. I think this is very cool. If we manage to embrace as much of the web’s flexibility as we can, we can get the most out of these tools, and let CSS help us with designing for the unknown.
Originally posted as Content-based grid tracks and embracing flexibility on Hidde's blog.