Reading List

The most recent articles from a list of feeds I subscribe to.

How to make inline error messages accessible

On one of my projects I am helping a governmental organisation take their application forms to the web. They are mostly very complex forms (for reasons). We do our best to help people fill out the forms correctly and to point out incorrect input where we can. In this post I will go into some ways to do this accessibly.

Commonly, when an error occurs, an error message is inserted into the page. If you want to do this accessibly, some things to look at are identifying errors as errors, notifying the (screenreader) user of new content and describing what happened in clear language. Below, I will go into how you can improve things in those three areas.

As per WCAG 3.3.1 Error Identification, this is what we need to do:

If an input error is automatically detected, the item that is in error is identified and the error is described to the user in text.

Yes, we can script it

Before we go into identifying validity and error messages, a little note about using JavaScript.

In the old days, a form submit would just trigger the same page to reload, now with error messages where applicable. In 2017, this is still a good practice, but it can be hugely enhanced by preventing the submit and displaying errors client side. And why wait until the submit? You could insert an error message as soon as a user tabs out of a field.

Shouldn’t accessible forms avoid JavaScript? No, on the contrary. They can use JavaScript to inform users about errors as they occur. Combined with a server side approach, this can give us the best of both worlds.

Needless to say: using good old HTML elements is key to making your forms successful. In this post I go into some ARIA techniques, but always keep in mind that the first rule of ARIA is to not use ARIA. Paraphrasing, this rule means that if you can do it with HTML, don’t use ARIA. The exception is for things that are not available in HTML. Whether error messages getting injected into the page fall under that umbrella is a case of ‘it depends’. There is the native Constraint Validation API, but it is problematic in many ways (see PPK’s 3 part article on form validation), so in this article I am assuming we are rolling our own validation with JavaScript.

Indicating a field has invalid input

When you detect a field is invalid, you can apply the aria-invalid attribute to it. Then, make sure your scripts are watching the relevant field, so that you can remove the attribute once it no longer applies.
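As a minimal sketch of this toggling (the `isValidEmail` check and `markValidity` helper are illustrative names, not a fixed API; substitute your own validation):

```javascript
// Sketch: toggle aria-invalid when the user leaves a field.
// isValidEmail is a hypothetical check; swap in your own validation.
function isValidEmail(value) {
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value);
}

function markValidity(field, isValid) {
  if (isValid) {
    // Remove the attribute once it no longer applies
    field.removeAttribute('aria-invalid');
  } else {
    field.setAttribute('aria-invalid', 'true');
  }
}

// In the page, watch the relevant field:
// emailField.addEventListener('blur', function () {
//   markValidity(emailField, isValidEmail(emailField.value));
// });
```

Removing the attribute entirely (rather than setting `aria-invalid="false"`) keeps the styling hook below simple.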

If you were using native HTML form validation, you could style invalid input using the :invalid pseudo-class. If your JavaScript determines what’s valid or not and the aria-invalid attribute is added and removed appropriately, the attribute provides a convenient styling hook for indicating invalid input:

[aria-invalid] { border: 1px solid red; border-right-width: 5px; }

(Codepen)

Adding and removing the aria-invalid attribute also helps users of screen readers recognise they are on an invalid field. Fields with the attribute are read out as being invalid in JAWS, NVDA and VoiceOver.

Conveying that an error message appeared

When your script has detected input of a field is not valid, it inserts an error message into the page. Make sure it is easy to see that this is an error message, for example by using a combination of colors, an icon and/or borders. It also helps if you indicate that it is an error by prefixing the actual message with ‘Error: ’.

To make sure our error messages are read out to users with screen readers, we need to ensure the accessibility tree is notified of the newly inserted content.

There are two roughly equivalent ways, both making use of ARIA: live regions (aria-live) and role=alert (avoid using both at the same time). They turn an HTML element into a live region, which means that a screenreader will read out changes to that element. A change could be that something is appended to the element. Note that for this to work, the element itself has to be present in the DOM on page load. Live regions and role=alert work in VoiceOver and most versions of JAWS and NVDA.

In the implementation I describe below, this would be the flow:

  • User makes mistake in field 1, tabs to field 2
  • Script detects the mistake; the screen reader reads out that field 1 is incorrect (or, visually, a big red error message appears)
  • Meanwhile, user is in field 2 and can decide to go back and fix, or continue and fix later

Live region

To turn an HTML element into a live region, add aria-live to it. There are different politeness settings, depending on whether you want updates to be read out immediately or not. In our case we do, so we will use aria-live="assertive":

<div aria-live="assertive">
<!-- insert error messages here -->
</div>

If we are watching for fields to become incorrect, it makes sense to apply the same functionality in reverse. If the field is changed and input is now valid, we can remove the errors. We need our live region to be aware of both addition and removal of error messages. With aria-relevant we can set it up to do exactly this:

<div aria-live="assertive" aria-relevant="additions removals">
<!-- insert error messages here -->
</div>

(Codepen with live region examples)

If we want to control whether the whole region should be read out, we can use aria-atomic. If it is set to true, the whole live region will be read out when anything changes; it defaults to false, in which case only the changed content is read out. Which setting fits best depends on your situation. If your error messages are displayed in a question-specific live region, reading out the whole region might make sense. If error messages for the whole form live in one live region, it may be a bit too much.
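Putting the pieces together, a sketch of injecting and clearing messages in such a live region could look as follows (the names formatError, showError and clearErrors are my own, and the markup is illustrative):

```javascript
// Sketch: insert and clear error messages in an existing live region.
// Assumes a container like
// <div aria-live="assertive" aria-relevant="additions removals">
// is present in the DOM on page load.
function formatError(message) {
  // Prefix the actual message so it is recognised as an error
  return 'Error: ' + message;
}

function showError(liveRegion, message) {
  var item = document.createElement('p');
  item.className = 'error';
  item.textContent = formatError(message);
  liveRegion.appendChild(item); // an addition: read out by screen readers
}

function clearErrors(liveRegion) {
  // A removal: read out when aria-relevant includes "removals"
  while (liveRegion.firstChild) {
    liveRegion.removeChild(liveRegion.firstChild);
  }
}
```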

role=alert

Instead of manually setting up a live region, you can also give the error area a role=alert:

<div role="alert">
<!-- insert error messages here -->
</div>

This turns it into an assertive live region automatically, with aria-atomic set to true.

Focus management

When inserting a new error message

There is no need to do anything with focus after you have inserted an error message. When you insert it as the user tabs to the next field, they will likely have focused the next field. Intercepting TAB to steal their focus and bring it back to the error message would result in an unexpected experience for the user. I would consider this an anti-pattern.

When preventing submit, because there are still errors

If you have prevented submit, because there are still errors on the page, it could be useful to send focus to a list of errors that you have prepared at the start of the form. Bonus points if each item on that list is a link to the invalid item, to make it easier to correct.

Note that if you use this approach, it may conflict with your assertive live regions, as they will get read out before your list of errors. It may be better in this case to choose between the two approaches.
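To sketch the summary-of-errors approach (buildSummaryLinks and the field data below are made up for illustration), the list items can be generated from the invalid fields, each linking to the field it describes:

```javascript
// Sketch: build links for an error summary that receives focus after a
// failed submit. Assumes a focusable container, e.g.
// <div id="error-summary" tabindex="-1">, placed at the start of the form.
function buildSummaryLinks(invalidFields) {
  return invalidFields.map(function (field) {
    return {
      href: '#' + field.id, // jump link to the invalid field
      text: field.label + ': ' + field.message
    };
  });
}

// On submit, something like:
// if (invalidFields.length > 0) {
//   event.preventDefault();
//   renderLinks(summaryEl, buildSummaryLinks(invalidFields)); // hypothetical
//   summaryEl.focus(); // send focus to the list of errors
// }
```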

Language

The above is quite a technical approach that optimises the situation for screenreader users. Another accessibility feature that can be optimised, and that will improve things for all of your users, is the use of language.

Whether your form is complex or simple, make sure your error messages are easy to understand. Use clear and concise language. Be helpful in explaining what is wrong and how it can be fixed. This aids the user to complete the form as smoothly as possible.

In summary

In this post, I have discussed three ideas to improve the accessibility of dynamically added error messages:

  • identify a field as being currently invalid with the aria-invalid attribute
  • notify the user when a new error message has appeared, by inserting them into a live region / role=alert
  • always make sure you explain what went wrong and how to fix it in clear and concise language

Any feedback is welcome!


Originally posted as How to make inline error messages accessible on Hidde's blog.

Reply via email

My last day at Fronteers

So… tomorrow is the end of me volunteering at Fronteers. As of then, I am no longer part of Fronteers’ workshop team. With that, I am officially no longer volunteering at Fronteers. Time for a round-up!

Front-what?

Fronteers is the professional association for front-end developers in The Netherlands. It has about 500 paying members. Some of those volunteer to run conferences, workshops, meetups, a job board and a Slack community. It was founded in 2007 by PPK, Arjan and Tom and has grown and changed in many ways since.

In this post I will look back at my volunteering for Fronteers, and share some things that I learned.

The cool thing about Fronteers is that it is an organisation that is bigger than its individual volunteers. Although it often heavily depends on the spare time of specific people, stuff can go on. Even when people leave and new people come in. Fronteers history shows this is not always easy and not every detail survives. Often, new, awesome details come into existence, too. Progress happens.

Round-up

For those who find it interesting, I’d like to share a bit of my history at Fronteers. With my apologies for what is sort of blowing my own trumpet – quitting after all those years is quite a big deal to me.

  • I volunteered at six conferences (2008-2013), of which I was in the organising team of three, one as the chair. I was able to add a few new things to the mix, like subject-first speaker selection and a practical case studies session.
  • I organised meetups about the semantic web (with SETUP), healthcare and online news (with Lable), RequireJS and MVC (with VPRO), Progressive Web Apps (with Dan and Bran), accessibility (with Thomas and Michiel) and music (with Thomas and Michiel)
  • I was involved with about 12 workshops (some in 2012, some in 2016/2017), including one about CSS with one of the inventors of CSS
  • I had two sets of accounts prepared as treasurer and transitioned us away from the accountants who failed to deliver on many of their promises (it was quite the chase)
  • I worked on getting 61 of our conference videos captioned (with Krijn and Janita)
  • I chaired the marketing team from 2011 to 2013 (and was team member from 2008), which produced legendary pens, notebooks, promotional slides, as well as BEERS with the Fronteers logo on

More recently, in August last year I revived the workshops team. With the support of Rachid and Sharon back then, joined by Tim later, we managed to put on a one-day workshop every month from September (with one cancelled). This is still happening, with the April one sold out and an exciting one for May in the pipeline (soon to be announced). I found it a lot of fun to be involved with this. It’s all volunteer work, but you get to spend the association’s money on good teachers to provide affordable workshops for all. What’s not to like?

I learned things

I’ve really enjoyed doing all of these things! My feelings about my recent stint on the workshops team reflect a general theme in most of the things I’ve been able to do at Fronteers. It is an organisation with international connections (and goodwill), money to pay for activities that aim to improve front-end development, and an endless stream of ideas waiting to be made reality.

Working for Fronteers I learned a lot, although hardly about front-end development.

I learned, for example, that you can try getting things done by yourself, but working together with people who complement you makes it a lot easier.

I discovered that work in an association like Fronteers involves a lot of politics. It takes time to convince people of your ideas, and to learn who needs to be convinced most (this can be the opposite of what you expected). This takes patience.

On the subject of politics: at times, there would be different ideas about how to go ahead. Had we been working together at a large business, strict hierarchy would have dictated the decision-making process. With a group of volunteers who all put lots of love (and time) into their work, debates can be more heated. At these times it is extremely important to look at the bigger picture, because the heat usually stays within the team. Outsiders, or attendees, will likely just see an awesome conference, meetup or workshop.

I found it’s so hard to get it right (we often did not), and that sometimes the best ideas come in hindsight.

I learned to live with not realising ideas, because of time constraints. I tried setting up a system to manage contacts between Fronteers and the outside world, organise more meetups outside of Amsterdam, redesign all of the website with Krijn (we even had one of the best Dutch agencies on board to help), present the financial budget in a way that is less boring and organise meetups specifically to bring volunteers together to work on organising shared activities. Time was a constraint.

I also learned that me being personally delighted about a subject or speaker would not guarantee lots of attendees for an event. Especially in the week before Sinterklaas.

Or that it is important in volunteering to learn to say ‘no’ before it is too late. This helps with focus, and with being able to meet promises and expectations.

I’ve gotten to know the fantastic feeling that comes with putting on something that people enjoy attending, whether it be workshops, meet-ups or conferences.

Thanks and goodbye

I’m very grateful to Krijn and Arjan for inspiring me to start volunteering at Fronteers, and to PPK, Tom, Sander and Jaco for trusting me to organise things within the professional association they chaired. They were always encouraging as well as very forgiving and patient with regards to my mistakes.

I should also thank all the other volunteers I had the pleasure of working with, many of whom have become friends. You know who you are.

I’m going to be focusing on other things now, and become a consumer of Fronteers activities. It has truly been a blast!


Originally posted as My last day at Fronteers on Hidde's blog.


Perfwizardry

When Harry Roberts tweeted he would be free to give a workshop in The Netherlands this week, Joël approached Xebia and Fronteers and made it happen. Almost 20 of us gathered in the Wibautstraat in Amsterdam and spent a whole day getting all nerdy about web performance.

Harry told us all about how to sell performance, how (and what) to measure and which tools to use. And of course, how to fix the problems you find. He also went into how all of this is impacted by HTTP/2 and Service Workers (spoiler: turn them off when performance testing). The workshop came with plenty of theory, which I liked, as it helped me understand how things work in practice. We ended by scrutinising a number of actual websites using the things we had learned.

Please find some of my notes below.

Selling performance

  • When trying to sell performance to others in your company, tune the message towards the audience. With performance, it is easy to get all excited about all the technical stuff, especially as a developer. Concurrent downloads in HTTP/2 may interest your audience less than the forecast of paying less for bandwidth.
  • Use stats. If your company does not keep them, you can also refer to other people’s stats; WPOstats is great.
  • Your users may have capped data plans, they may be in an area with poor network conditions. Think of the next billion internet users, they may not have fast iPhones and great WiFi. See also What does my website cost and Web World Wide.

What to measure

  • Testing on your own device with your own network helps and it is easy, but it is nowhere near as valuable as Real User Monitoring (RUM). There are services that help with this, such as SpeedCurve and Pingdom. Google Analytics also provides some rudimentary RUM tools.
  • Beware of metrics such as page weight, number of requests and fully loaded. It depends on your situation if and how those apply to your performance strategy.
  • Perceived performance is better to focus on, although it is harder to measure. Many automated tools require thorough evaluation. Human judgment is essential.
  • Speed Index is a good metric to focus on. It is a ‘user-centric measurement’: the number is based on visual completeness of a page.
  • You can use Speed Index’s median as a metric in Web Page Test by adding ?medianMetric=SpeedIndex at the end of your Web Page Test url.

The network and requests

  • TCP/IP comes with head-of-line blocking: if one of the packets fails to arrive, it is retransmitted, and the packets behind it are held up in the meantime.
  • The network is hostile, assume it will work against you.
  • Beware of the difference between latency and bandwidth. Latency is how fast a certain path can be travelled, bandwidth is how much we can carry on our journey. Compare it with a motorway: latency is how fast we can drive, bandwidth is how many lanes are available for us to use.
  • The Connection: Keep-Alive header will leave a connection open for other transfers to happen, preventing new connections (with new DNS lookups, handshakes, etc.) from having to start all the time.
  • High Performance Browser Networking by Ilya Grigorik is a great book and it is available online for free. It is very technical, the mobile section is especially great.
  • Avoid redirects, they waste your user’s time. This is especially true on mobile, where requests are even more fragile.
  • Beware of whether your URLs have a trailing slash. If you are not precise about where you link to, you can cause redirects to happen. For example, if your ‘About’ page is on yoursite.com/about/, you link to yoursite.com/about and you’ve set up a redirect, you can save your user from this by just linking to the right path in the first place.
  • Eliminate 404s, as they are not cached (this is a feature).
  • Make use of cache, do not rely on it.
  • Use compression, for example gzip.
  • Concatenate your files to work around HTTP/1.1’s limit on concurrent downloads per host (stop doing this when using HTTP/2).

Resource hints to increase performance

In what was likely the most exciting section of the workshop, Harry went into resource hints: DNS Prefetch, Preconnect, Prefetch and Prerender. Web performance hero Steve Souders talked about these concepts in his Fronteers 2013 talk “Pre-browsing”. Browser support for resource hints has improved massively since.

  • If you use both dns-prefetch and preconnect, put the preconnect hint first; the browser will then skip over the dns-prefetch hint, as it will already have resolved the DNS.
  • Long request chains are expensive: this is what happens when you include a CSS file, which imports another, which then requests a font, etc. The browser will only know about the font once it has parsed the second CSS file. With preload you can tell the browser to preload assets, so that it already has them when the second CSS file requests them.

Anti patterns

  • Avoid Base64 encoding as it has little repetition and therefore gzips badly.
  • Don’t use domain sharding (the practice of using different domains to allow for more concurrent downloads) for any assets that are critical to your site, such as your (critical) CSS.
  • Many of today’s best practices are HTTP/1.1 oriented. When you start serving over HTTP/2, beware some may become anti patterns.

The future

  • If you serve over HTTP/2, you can benefit from multiplexing and concurrency (download all assets at the same time), improved caching, much smaller headers and Server Push, which lets the server send files the user did not yet request
  • Service Workers can speed up things tremendously as you can use them to have more granular control over what you serve from where.

Although I was familiar with most of the concepts covered, I did pick up lots of things that I did not know about. There were also practical examples, and Harry explained everything with lots of patience and at just the right speed. If Harry is coming to your town with his performance workshop, I would recommend coming along.


Originally posted as Perfwizardry on Hidde's blog.


The importance of web standards and design for accessibility

Yesterday I attended the Inclusive Design and Accessibility meetup (“idea11y”) at De Voorhoede in Amsterdam, which was about development tools, graphic design and, the surprise act, the accessibility of React apps.

Florian Beijers, who currently interns at Microsoft, talked about his experience using text editors. He showed us popular editors like Sublime Text and Atom, and how they are impossible to use for him as a blind user. Some just read out BLANK BLANK BLANK, due to bugs in the underlying framework (Electron, in Atom’s case). It was easy for us all to conclude that the products had not looked into accessibility much. During the talk and Q&A two questions arose: why does this happen, and how hard is it to make products accessible? The why, Florian said, seems to be a combination of lack of knowledge and lack of education, which then also causes accessibility bugs to be deprioritised. If you don’t know that your designs or your code places a burden on some of your users, why make improvements? Then the how. According to Florian, adhering to standards can help a great deal in making things accessible. An example is labeling: if you do not use the standard way of labeling controls in your application, your users’ accessibility tools will not understand them.

Design consultant Agnieszka Czajkowska talked about accessibility from a design point of view. She emphasised that good design can help a great deal in removing barriers for some of your users. To come to good design, she said, it is important to include users early on, do accessibility by default and avoid treating it as an optional feature. This is something I personally see a lot in projects: specific user stories or backlog items for accessibility (‘as a user, I want all the things to be accessible’). It doesn’t work like that: every task should include doing it accessibly. In the Q&A, the discussion moved towards flexibility of interfaces. Some people need more contrast or larger fonts. If an interface is flexible enough, they can use their browser settings to do this. Agnieszka said this comes down to giving users agency: make sure your interface allows for rather than hinders user preferences. In her talk, she showed a practical example of this: on a site she used, text overlapped with other text when she set a larger font size. With responsive design, it is perfectly possible to avoid this.

De Voorhoede’s Jasper Moelker stepped in at the last minute to do a lightning talk that he had done before for the ReactNL conference. In the talk, he used examples from the React docs and showed how they performed in three browsers: Chrome (a modern browser), Opera mini (a proxy browser) and Lynx (a text browser). Sadly, most of the examples worked great in Chrome, hardly in Opera mini and not at all in Lynx. He then showed how to rewrite the examples by using semantic HTML. Basic markup 101 like pairing labels with inputs and wrapping form controls in form elements helped a great deal and made some of the examples work in all three browsers. Takeaway: semantic markup is extremely important. It is a great strategy to ensure your application is understood by the largest number of browsers and devices possible. Again, as Florian also concluded, following existing standards helps a great deal. Another strategy Jasper recommended is to choose ‘mount nodes’ wisely: some turn their whole body into one big React app, which does nothing without React. Instead, you could turn specific parts of your page into mini React apps, and have the rest of the page contain sensible defaults. The latter likely deals more gracefully with failures.

This was my first idea11y meetup, and I had a great evening. Many thanks to the organisers for putting on this event and to De Voorhoede for their hospitality. I am looking forward to attending again!


Originally posted as The importance of web standards and design for accessibility on Hidde's blog.


Using JavaScript to trap focus in an element

For information on how to accessibly implement the components I’m working on, I often refer to the WAI-ARIA Authoring Practices. One thing this document sometimes recommends is to trap focus in an element, for example in a modal dialog while it is open. In this post I will show how to implement this.

Some history

In the early days of the web, web pages used to be very simple: there was lots of text, and there were links to other pages. Easy to navigate with a mouse, easy to navigate with a keyboard.

The modern web is much more interactive. We now have ‘rich internet applications’. Elements appear and disappear on click, scroll or even form validation. Overlays, carousels, AJAX… most of these have ‘custom’ behaviour. Therefore we cannot always rely on the browser’s built-in interactions to ensure a good user experience. We go beyond default browser behaviour, so the duty to fix any gaps in user experience is on us.

One common method of ‘fixing’ the user’s experience, is by carefully shifting focus around with JavaScript. The browser does this for us in common situations, for example when tabbing between links, when clicking form labels or when following anchor links. In less browser-predictable situations, we will have to do it ourselves.

When to trap focus

Trapping focus is a behaviour we usually want when there is modality in a page. Components that could have been pages of their own, such as overlays and dialog boxes. When such components are active, the rest of the page is usually blurred and the user is only allowed to interact with our component.

Not all users can see the visual website, so we will also need to make it work non-visually. The idea is that if we prevent clicks for part of the page, we should also prevent focus there.

Some examples of when to trap focus:

  • user opens a modal in which they can pick a seat on their flight, with a semitransparent layer underneath it
  • user tries to submit a form that could not be validated, and is shown an error message; they can only choose ‘OK’ and cannot interact with the rest of the page
  • user opens a huge navigation menu, the background behind the navigation is blurred (“Categorie kiezen” at Hema.nl)

In these cases we would like to trap focus in the modal, alert or navigation menu, until they are closed (at which point we want to undo the trapping and return focus to the element that instantiated the modal).

Requirements

We need these two things to be the case during our trap:

  • When a user presses TAB, the next focusable element receives focus. If this element is outside our component, focus should be set to the first focusable element in the component.
  • When a user presses SHIFT TAB, the previous focusable element receives focus. If this element is outside our component, focus should be set to the last focusable element in the component.

Implementation

In order to implement the above behaviour on a given element, we need to get a list of the focusable elements within it, and save the first and last one in variables.

In the following, I assume the element we trap focus in is stored in a variable called element.

Get focusable elements

In JavaScript we can figure out if elements are focusable, for example by checking if they either are interactive elements or have a tabindex attribute.

This gives a list of common elements that are focusable:

var focusableEls = element.querySelectorAll('a[href]:not([disabled]), button:not([disabled]), textarea:not([disabled]), input[type="text"]:not([disabled]), input[type="radio"]:not([disabled]), input[type="checkbox"]:not([disabled]), select:not([disabled])');

This is an example list of common elements; there are many more focusable elements. Note that it is useful to exclude disabled elements here.
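One way to keep such a selector maintainable is to build it from a list. As a sketch, this (still non-exhaustive) variant also picks up elements made focusable with tabindex, while excluding tabindex="-1"; the FOCUSABLE_SELECTOR name is my own:

```javascript
// A broader, still non-exhaustive selector for focusable elements.
var FOCUSABLE_SELECTOR = [
  'a[href]:not([disabled])',
  'button:not([disabled])',
  'textarea:not([disabled])',
  'input:not([disabled]):not([type="hidden"])',
  'select:not([disabled])',
  '[tabindex]:not([tabindex="-1"])'
].join(', ');

// Usage, inside the element we are trapping focus in:
// var focusableEls = element.querySelectorAll(FOCUSABLE_SELECTOR);
```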

Save first and last focusable element

This is a way to get the first and last focusable elements within element:

var firstFocusableEl = focusableEls[0];  
var lastFocusableEl = focusableEls[focusableEls.length - 1];

We can later compare these to document.activeElement, which contains the element in our page that currently has focus.

Listen to keydown

Next, we can listen to keydown events happening within the element, check whether they were TAB or SHIFT TAB, and then apply logic if the first or last focusable element had focus.

var KEYCODE_TAB = 9;

element.addEventListener('keydown', function(e) {
  if (e.key === 'Tab' || e.keyCode === KEYCODE_TAB) {
    if ( e.shiftKey ) /* shift + tab */ {
      if (document.activeElement === firstFocusableEl) {
        lastFocusableEl.focus();
        e.preventDefault();
      }
    } else /* tab */ {
      if (document.activeElement === lastFocusableEl) {
        firstFocusableEl.focus();
        e.preventDefault();
      }
    }
  }
});

Alternatively, you can add the event listener to the first and last items. I like the above approach: with one listener, there is only one to undo later.

Putting it all together

With some minor changes, this is my final trapFocus() function:

function trapFocus(element) {
  var focusableEls = element.querySelectorAll('a[href]:not([disabled]), button:not([disabled]), textarea:not([disabled]), input[type="text"]:not([disabled]), input[type="radio"]:not([disabled]), input[type="checkbox"]:not([disabled]), select:not([disabled])');
  var firstFocusableEl = focusableEls[0];  
  var lastFocusableEl = focusableEls[focusableEls.length - 1];
  var KEYCODE_TAB = 9;

  element.addEventListener('keydown', function(e) {
    var isTabPressed = (e.key === 'Tab' || e.keyCode === KEYCODE_TAB);

    if (!isTabPressed) { 
      return; 
    }

    if ( e.shiftKey ) /* shift + tab */ {
      if (document.activeElement === firstFocusableEl) {
        lastFocusableEl.focus();
        e.preventDefault();
      }
    } else /* tab */ {
      if (document.activeElement === lastFocusableEl) {
        firstFocusableEl.focus();
        e.preventDefault();
      }
    }
  });
}

In this function, we have moved the check for tab to its own variable (thanks Job), so that we can stop function execution right there.
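Since a single listener is easy to undo, one possible extension (createFocusTrap, release and the element names are my own, not from the original function) is to return a function that releases the trap when the dialog closes:

```javascript
// Sketch: wrap the listener registration so the trap can be released
// later, and focus returned to the element that opened the dialog.
function createFocusTrap(element, onKeydown) {
  element.addEventListener('keydown', onKeydown);
  // Undo the trap by removing the single listener
  return function release() {
    element.removeEventListener('keydown', onKeydown);
  };
}

// Usage:
// var release = createFocusTrap(dialogEl, handleTab);
// ...when the dialog closes:
// release();
// openerEl.focus(); // return focus to the element that opened the dialog
```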

Further reading

See also Trapping focus inside the dialog by allyjs, Dialog (non-modal) in WAI-ARIA Authoring Practices and Can a modal dialog be made to work properly for screen-reader users on the web? by Everett Zufelt.


Originally posted as Using JavaScript to trap focus in an element on Hidde's blog.
