Reading List

The most recent articles from a list of feeds I subscribe to.

On authoring tools in EN 301 549

Today I was at the IAAP-EU event in Paris, where we spent a morning workshopping and clarifying parts of EN 301 549, the procurement standard that is used in the Web Accessibility Directive.

I managed to get a spot in the group that focused on 11.8, the part of EN 301 549 that focuses on authoring tools. In this post, I'll share some insights from that session.

This post was written, as always, in a personal capacity. And sorry, I don't have an HTML link for EN 301 549 (there isn't one currently, but there is a PDF).

What actually are authoring tools?

When my job was to promote the use of ATAG, I used to say authoring tools are “tools that create web content”. In EN 301 549, they are defined more broadly, beyond web:

software that can be used to create or modify content

(from EN 301 549, chapter 3: Definition of terms, symbols and abbreviations)

In other words, this definition includes web content as well as non-web content. The EN defines “non-web content” as “not a web page, not embedded in any web pages or used in the rendering or functioning of the page”. Examples of such content include PDFs, Word documents and EPUB files. There's also a W3C document specifically about accessibility of “non-web content”, WCAG2ICT (which is informative, not normative).

EN 301 549's authoring tool definition is followed by three notes that demonstrate just how broad this category of tools is:

  • authoring tools can have multiple users collaborating on content (this makes me think of tools like Sanity Studio or Google Docs where lots of people can edit content at the same time)
  • authoring tools could be a collection of multiple applications. For instance, some content goes through multiple applications before end-users access it
  • authoring tools could produce content that's used or modified later

In our group we quickly realised that there are indeed a lot of different authoring tools. The most obvious ones are Content Management Systems (CMSes). Others that people mentioned were social media, online forums, video editing tools, WYSIWYG editors and email clients. ATAG at a glance also mentions Learning Management Systems, blogs and wikis. It's a broad category: there are a lot of tools that can make (web) content.

The ATAG reference

ATAG, the Authoring Tool Accessibility Guidelines, is the standard that provides recommendations for both making authoring tools themselves accessible (part A), as well as the content they produce (part B). See my earlier post ATAG: the standard for content creation for an overview.

Conforming with EN 301 549 requires that all of our web pages meet all of WCAG (up to Level AA, see EN 301 549, clause 9.6). But it doesn't require ATAG. ATAG is merely mentioned as something that is worth reading for “those […] who want to go beyond [EN 301 549]” (in 11.8.0). In other words, there is no normative requirement to read it, let alone to apply it.

Still, some CMSes apply it. For instance, Drupal has supported ATAG (parts A and B) since version 8. Joomla, Wagtail and Craft CMS have also done a lot of work towards improving accessibility; see the W3C's List of authoring tools that support accessibility.

However, that doesn't mean ATAG isn't an incredibly useful standard for people who make and use authoring tools. In fact, it is. In 11.8.2 to 11.8.5, some ATAG requirements are explicitly added. These clauses are requirements, because they use "shall", which in ETSI standards implies a "mandatory requirement" (according to their drafting rules).

Note: the fact that EN 301 549 requires these things doesn't mean the law in European countries does. These laws often refer to specific parts of the EN, or refer to EN 301 549 specifically in relation to web content.

Note 2: recital 48 of the Web Accessibility Directive is interesting. It includes various points I'd love to see member states adopt:

  • “EU member states should promote the use of authoring tools that allow better implementation of the accessibility requirements set out in this Directive”
  • recommendation to “[publish] a list of compatible authoring tools” (as suggestions, so not requiring them specifically)
  • recommendation to “fund their development”

11.8.2: “enable” and “guide”

Out of the authoring tool requirements, we talked most about 11.8.2. It says:

Authoring tools shall enable and guide the production of content that conforms to clauses 9 (Web content) or 10 (Non-Web content) as applicable.

The key words to me are enable and guide. My personal interpretation of what that means, and maybe partially what I want it to mean:

  • enable: that tools have, for all types of content they can produce, functionality to create any necessary accessibility aspects for that type of content. For instance, if they let you add an image, they need to let you add a text alternative (see the sketch after this list). There's a lot of grey area, because some very complex images might require linked descriptions that don't fit as alternative text. And what about types of content that the tool's creator says users aren't supposed to create? LinkedIn might say it only lets users create plain text with links, not headings. Is the fact that users will try and add faux bold text and whitespace instead of headings LinkedIn's fault or the user's?
  • guide: that tools tell authors about accessibility issues and help them get it right. I would love for more authoring tools to do this (see also my pledge in Your CMS is an accessibility assistant). Let authoring tools guide authors to more accessible content; this could have a large multiplier effect, with fewer barriers across the web as a result.
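
To make “enable” concrete, here's a minimal sketch of what it could look like in a CMS (my own illustration, not an example from the EN): an image upload form that asks for a text alternative alongside the image.

<!-- hypothetical CMS form: adding an image also “enables” adding a text alternative -->
<form>
  <label for="image">Image</label>
  <input type="file" id="image" name="image" accept="image/*">

  <label for="alt">Text alternative</label>
  <input type="text" id="alt" name="alt" aria-describedby="alt-hint">
  <p id="alt-hint">Describe what the image conveys. Leave empty only if the image is decorative.</p>

  <button type="submit">Add image</button>
</form>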

What I like about the “guide” part especially: it addresses problems where they surface first. If the authoring tool guides them, authors can fix accessibility problems before those problems ship to production.

Other requirements

We didn't get to the other clauses, but they are interesting too:

  • preservation of accessibility information in transformation: a real example I dealt with: if you turn HTML into PDFs with PrinceXML, it's tricky to get it to take the text alternatives from your images and embed them correctly into the PDF.
  • repair assistance: there are CMSes already that tell authors when they're about to choose a new colour that would cause contrast issues (like WordPress' editor). Again, this lets authors fix problems before they exist in the produced content. Drupal has a list of modules that may improve accessibility.
  • templates: when templates are available, accessible ones should be available. Again, a focus on making accessible templates could have a huge multiplier effect, as they could be reused in many different places. WordPress has a list of accessibility themes.

Summing up

It was fun to dive into one of the requirements specifically, and my hope is for two things. First, I'd find more extended guidance on these clauses useful. They are fairly minimal, and more concrete examples would help. Second, the testability could improve. What makes a template an accessible template (one that meets WCAG?), what sort of assistance is sufficient, and what counts as “guiding the author”? And then my last open question would be: when does an authoring tool fall inside or outside of the scope of the organisation trying to comply with EN 301 549? Is this when they use an authoring tool, or only when they create one? To be continued!


Originally posted as On authoring tools in EN 301 549 on Hidde's blog.


On popover accessibility: what the browser does and doesn’t do

One of the premises of the new popover attribute is that it comes with general accessibility considerations “built in”. What does “built in accessibility” actually mean for browsers that support popover?

NOTE: except for this note, this whole post was co-written with Scott O’Hara (thanks Scott!). See also Scott's post, popover accessibility.

Whether you're a developer, designer or accessibility specialist, hearing “accessibility is built in” probably makes you want to know what exactly is built-in. For popover, this actually changed quite a bit over time, after discussions at Open UI and with WHATWG. At first, the plan was to introduce a popup element with built-in roles. Later, an attribute ended up making more sense (more on that in the post). For that attribute, and thanks to the great effort of Scott and others, some “accessibility guardrails” have now emerged. And they shipped in most browsers. I hope this post helps you understand better what accessibility is “built-in” when you use popover, and what is not.

In this post

  1. Accessibility semantics
  2. What browsers do (aria-expanded, aria-details, group, keyboard accessibility)
  3. What browsers don't do
  4. Conclusion

Accessibility semantics

The “built-in” accessibility of popover is in the addition of guardrails: browsers try to improve accessibility where they can. These guardrails exist mostly in the form of browsers augmenting accessibility semantics. Before we get into what those guardrails are, let's clarify what that term means.

Many features of HTML have some amount of accessibility semantics associated with them - e.g., roles, states and properties. This is information that a web page exposes, which browsers then pass on to platform accessibility APIs. They do this so that assistive technologies can build UIs around them (see: How accessibility trees inform assistive tech). These semantics are sometimes baked into native HTML elements. For instance, headings and lists have implicit roles (heading and list, respectively). Other elements, like the checkbox input type, have an implicit role as well as additional states and properties. Developers can use HTML elements with such “built-in” semantics. But they can also set, overwrite and augment accessibility semantics more directly in their HTML structure, using WAI-ARIA.
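
As a quick illustration (a simplified sketch of our own): the first two elements below come with implicit semantics, while the third gets its semantics from WAI-ARIA attributes the developer adds.

<h2>Settings</h2> <!-- implicit role: heading, level 2 -->

<input type=checkbox checked> <!-- implicit role: checkbox, plus a checked state -->

<!-- explicit semantics via WAI-ARIA (keyboard behaviour would still need script) -->
<div role=checkbox aria-checked=true tabindex=0>Email me updates</div>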

The thing with the popover attribute is that it doesn’t have a built-in role. After all, it’s not an element. Its purpose is to only add “popover behaviour”, as discussed in Popover semantics. In that sense, popover is a bit like tabindex or contenteditable. These attributes also add behaviour: tabability and editability behaviours, respectively.
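
For comparison, a small sketch (our own): each of the attributes below adds behaviour to an otherwise generic element, not a specific role.

<div tabindex=0>Focusable with the keyboard</div>
<div contenteditable>Editable by the user</div>
<div popover id=hint>Hidden until shown as a popover</div>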

A major reason for this choice is that there are a number of components that exhibit popover behaviours. Examples include menus, “toast” messages, sub-navigation lists of links and tooltips. If you use popover on a specific element, it will get that element's role. Or you can use it with a generic element, and add a role that best matches what you are building.
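
For example (a sketch of our own, with hypothetical ids):

<!-- on an element with a role of its own: this popover is a list -->
<ul popover id=subnav>...</ul>

<!-- on a generic element: the author adds the role that fits -->
<div popover role=menu id=actions>...</div>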

So, while the default role is ‘generally’ not handled by the attribute (more on that later), there are other semantics (properties and states) that the attribute will expose. Browsers can take care of those with some degree of confidence.

What browsers do

There are two semantics that the browser should take care of when you use popover, and its associated popovertarget attribute. Additionally, there is some keyboard focus behaviour that may also be handled automatically, depending on the type of popover you are using.

The aria-expanded state

First, aria-expanded. This state is exposed on the element that invokes the popover, currently limited to buttons (for a variety of reasons that would require a whole other article to talk about - so this is all you get right now). When a popover is invoked by / associated with a button with the popovertarget attribute, the browser will automatically convey whether the popover is in the expanded (rendered) state, or if it is in the collapsed (hidden) state. This is implemented in Edge, Chrome, Firefox and Safari.

For the following example, the ‘heyo’ button will automatically convey whether its associated popover list is in the expanded or collapsed state, based on whether the popover list is invoked as a popover.

<button popovertarget=p>
  Heyo
</button>
<ul
  aria-label="Heyo subpages"
  id=p
  popover
></ul>

Note: the state won’t be applied if script, rather than the declarative attribute, does the opening on click of a button (or any other element). Needless to say, it also doesn’t work if there isn’t an invoking button at all, for instance when script invokes the popover (in that case, there isn’t any expanding going on). Additionally, if you force your popover open using CSS (display: block), it will not be rendered as a popover, and the button will still communicate that the “popover” is in the collapsed state. Also, if you’re doing that (forcing your popover open with CSS), maybe you have some things you need to reconsider with your UI.
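
For instance, in this sketch (our own, with hypothetical ids), the popover is opened via script instead of popovertarget, so the browser won’t manage aria-expanded on the button:

<button id=opener>Heyo</button>
<ul id=p popover aria-label="Heyo subpages">...</ul>
<script>
  // no popovertarget association: the browser won't set aria-expanded here
  document.getElementById('opener').addEventListener('click', () => {
    document.getElementById('p').showPopover();
  });
</script>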

The aria-details relationship

When the popover doesn’t immediately follow its invoking button in the accessibility tree, browsers are supposed to create an aria-details relationship on the popover’s invoking button with its associated popover. At the time of writing, this is implemented in Chrome, Edge and Firefox.

For instance, in the following markup snippet an implicit aria-details relationship will be made with the button that invokes the popover, because the button and the popover are not immediate siblings in the accessibility tree.

<button popovertarget=foo>something</button>

<p>...</p>

<div role=whatever popover id=foo>...</div>

Similarly, an aria-details relationship will be made for the next markup snippet too. Even though the popover and its invoking button are siblings, the popover is a previous sibling of the invoking element, so it might not be understood which element is the popover, because it doesn’t immediately follow the element that invoked it.

<div role=whatever popover id=foo>...</div>

<button popovertarget=foo>something</button>

In contrast, the next two examples have no aria-details association because that would be unnecessary. For the first, the popover is the immediate next sibling in the accessibility tree (note divs are generic and often ignored when they do not provide information important to accessibility). For the second, the button is a descendant of the popover, so browsers do not need to tell users that the button they are interacting with is about the context they are within. That’d be silly.

<!--
  example 1:
  popover immediate sibling in acc tree
-->
<button popovertarget=m>something</button>
<div class=presentational-styles-only>
  <div role=menu popover id=m>...</div>
</div>

<!--
  example 2:
  button descendant of popover
-->
<dialog popover id=d>
  <button popovertarget=d>close</button>
</dialog>

For more information on how aria-details works, check out the details in the ARIA spec.

Note: aria-details is often confused with aria-describedby. That makes sense: “details” and “descriptions” are very similar. However, these two properties expose different information. aria-describedby takes the associated element’s content, flattens it into a single text string, and exposes that as the ‘description’ or ‘hint’ of the element on which the attribute is specified. In contrast, aria-details only informs a user that there is additional information about the current element. That might not seem useful, until you know that screen readers largely provide quick-keys to navigate to and from that associated element which provides the details.
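
A contrived sketch of the difference (our own markup, with hypothetical ids):

<!-- aria-describedby: the hint is flattened into a plain text description -->
<input type=password aria-describedby=pw-hint>
<p id=pw-hint>Use at least 12 characters.</p>

<!-- aria-details: the user is told details exist, and can navigate to them -->
<button aria-details=sales-data>Sales chart</button>
<table id=sales-data>...</table>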

At the time of writing, navigating into the referenced content using quick-keys is supported by JAWS and NVDA (release notes), but not VoiceOver.

Here’s a quick demo of that with JAWS 2023 in Edge 124: JAWS lets us jump to the details content if we press Alt + Ins + D.

In NVDA (2023.3.4), tested with Edge 124, it works slightly differently: when you press the shortcut (NVDA + D), you don't jump to the details content, but it is read out after the shortcut is pressed.

(see demo on CodePen; note: announcements and shortcuts depend on the user's settings, versions, etc)

In the following demo, you can see how the aria-details relationship works between popovers and their invokers (in JAWS 2023 with Edge 124):

(video contains screenshot of code, see demo on CodePen)

In summary: the aria-details association is not announced by JAWS when we focus the invoking button for the first time. This is because the corresponding popover is hidden, so the association isn't made yet. After we open the popover, JAWS announces the button's “has details” association while it is open; to hear it, we navigate away and back. This is also how it works in NVDA, which, in addition, requires you to switch to forms mode to actually hear the relationship announced.

Warning: even if the aria-details association is implemented, it may not be completely ironed out in how the UX behaves for people. For instance, there isn't currently a way for users to find out about the details relationship at the moment it is established, such as when the popover opens. It requires the user to move away from the button and return to it, at which point the relationship is announced. Maybe it would be helpful if the browser fired some kind of event, to let AT like JAWS know that an element representing details has appeared.

We mention this not to deter you from using popover or to indicate that anyone is doing something “wrong” here. Rather, this is a relatively new feature that people still need to figure out some of the UX kinks around. Feedback is welcome, and to help ensure the best UX is provided, please reach out to the necessary browsers / AT with your thoughts.

The group role

As mentioned above, popover can be used on any element, including elements that don’t have a built-in role, like div. But even without a role, it’s likely that the contents of a popover form some kind of meaningful whole. This is why in Chrome, Edge and Firefox, a role of group is automatically added to popovers if they would otherwise have no role, or a role of generic (for instance, divs and spans).

The group role is added, so that assistive technology can have the option to expose the boundaries of the popover that is being displayed. This can be important to users, because a popover is a behavior and visual treatment of its content. How is one to know where such content begins or ends if it doesn’t have boundaries to expose?

It’s important to know that an unnamed group is often ignored by screen readers. This is because otherwise the Internet would be riddled with unhelpful “group” announcements. (See also why Webkit made the decision to remove list semantics from lists that have been styled to not look like lists; these topics are related.) Here though, it again comes down to what assistive technology wants to do. By exposing the group role for the popover, the element can now be named by authors, which will force the group role to be exposed in most cases. Then, if AT decides it wants to do something special for popover groups, it now has the opportunity to do so.
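
A sketch of that last point (our own markup): giving the popover an accessible name makes the group role meaningful.

<!-- unnamed group: likely ignored by screen readers -->
<div popover id=one>...</div>

<!-- named group: its boundaries can be exposed -->
<div popover id=two aria-label="Sharing options">...</div>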

Keyboard accessibility

One more aspect of “built-in accessibility” that browsers provide for your popover: they take care of some keyboard behaviors.

Moving focus back to invoking element

Edge/Chrome, Firefox and Safari will all return focus to the invoking element when you close the popover (but only if focus is inside it). This is useful, because if focus was on an element inside the popover, the default would be to return focus to the start of the document. Various groups of users would get lost, increasingly so on pages with a lot of content. Moving focus back to the invoking element helps ensure people can quickly return to what they were doing, rather than spending time having to re-navigate to where they think they were last.

Popover content in tab order

Desktop browsers also do something else: they add the popover content into the tab order just after the invoking button. Even if that’s not where the popover is in the DOM.

Imagine this DOM structure:

<button popovertarget=p>Open popover</button>

<p>content… content… <a href="#">link 1</a></p>

<p>content… content… <a href="#">link 2</a></p>

<div popover id="p"><a href="#">link 3</a></div>

When the popover opens, and you press Tab, you might think you’d jump to “link 1”, the next interactive element in the DOM. Except, in desktop browsers, you will jump to “link 3” instead. The browser basically moves the popover content’s position in tab order to just after its invoking button. This takes it out of its expected position in the tab order. That improves the user experience, because it is likely that upon opening the popover, users will want to interact with its contents.

Keep in mind: browsers adjust the Tab order for instances like this, but they don't adjust the placement of the content in the accessibility tree. This is why the aria-details association was implemented. This special Tab order behavior helps ensure logical focus order for keyboard accessibility. However, we should still strive to make sure our popovers come after the invoking element in the DOM.

But since there will be times where the exact location of the popover in the DOM may be out of one’s control, this behavior is still quite welcome. For instance, if the popover happens to be far away in the DOM, having to go through the rest of the document before reaching the popover would be a nuisance. It would be highly unexpected and unwanted to have to navigate through all other focusable elements in the DOM, prior to the popover one just opened. WCAG 2.4.3 Focus Order requires focusable elements to receive focus in an order that “preserves meaning and operability”. This special browser Tab restructuring helps ensure that requirement can be met.

What browsers don’t do

We can keep this one short: the browser will not do anything apart from the behaviours listed above. Browsers will not add behaviors based on which elements you use or which role you add to them. The popover attribute is merely a first step for us to build new components.

Conclusion

The popover attribute provides a starting point for building popover-like interactions on the web. Browsers don't magically make your components accessible; that's not a thing. But there are some specific keyboard behaviours included with popover, as well as these semantics:

  • In specific cases, browsers set the aria-expanded state and/or set the aria-details relationship on the invoker.
  • Browsers apply the group role to the popover content if it doesn’t have a role of its own, so that assistive technologies can expose their boundaries.

Browser support note: at the time of writing, Safari only sets aria-expanded, not aria-details. It also doesn't add a group role fallback.


Originally posted as On popover accessibility: what the browser does and doesn’t do on Hidde's blog.


Breadcrumbs, buttons and buy-in: Patterns Day 3

Yesterday I spent all day in a cinema full of design system nerds. Why? To attend Patterns Day 3. Eight speakers shared their knowledge: some zoomed out to see the bigger picture, others zoomed in on the nitty-gritty.

It was nice to be at another Patterns Day, after I attended the first and missed the second. Thanks Clearleft and especially Jeremy Keith for putting it together. In this post, I'll share my takeaways from the talks, in four themes: the design system practice, the technical nitty-gritty, accessibility, and communication.

(Photo: the day's venue, the Duke of York's cinema)

The design system practice

Design system veteran Jina Anne, inventor of design tokens, opened the day with a reflection on over a decade of design systems (“how many times can I design a button in my career?”) and a look at the future. She proposed we find a balance between standardisation and what she called “intelligent personalisation”.

On the other hand, we aren't really done yet. There are so many complex UX patterns that can be solved more elegantly, as Vitaly Friedman showed. His hobbies include browsing post office, governmental and e-commerce websites. He looks at the UIs they invent (so that we don't have to). Vitaly showed us more breadcrumbs than was necessary, including some that, interestingly, have feature-parity with meganavs.

Yolijn van der Kolk, product manager (and my colleague) at NL Design System, presented a unique way of approaching the design system practice: the “relay model”. It assigns four statuses to components and guidelines, which change over time on their road to standardisation. It allows innovation and collaboration from teams with wildly different needs in wildly different organisations. The statuses go from sharing a need (“Help Wanted”), to materialising it with a common architecture and guidelines (“Community”), to proposing it for real-life feedback (“Candidate”), to ultimately standardising an uncontroversial and well-tested version of it (“Hall of Fame”).

(Photo: Jeremy Keith hosted the event)

The technical nitty gritty

Two talks focused on design problems that can be solved with clever technical solutions: theming (through design tokens) and typography (through modular scales with modern CSS).

Débora Ornellas, who worked at Lego (haven't we all used the analogy?), shared a number of great recommendations around using design tokens: use readily available open source products instead of inventing your own, publish tokens as packages and version them, and avoid migration fatigue by reducing breaking changes.

Richard Rutter of Clearleft introduced us to typographic scales, which, he explained, are like musical scales. I liked how, after he talked about jazz for a bit, the venue played jazz in the break after his talk. He showed that contrast (eg in size) between elements is essential for good typography: it helps readers understand information hierarchy. How much contrast is needed depends on various factors, like screen size, but to avoid having to maintain many different scales, he proposed a typographic scale that avoids breakpoints, by using modern CSS features like clamp() (or a CSS locks-based alternative if you don't want to risk failing WCAG 1.4.4). I suggest checking out Utopia to see both strategies in action.
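
As a rough illustration of the breakpoint-free approach (the values are my own, not from Richard's talk; mixing rem and vw in the middle value helps keep text resizable, which matters for WCAG 1.4.4):

<style>
  /* a fluid type scale: each size interpolates between a minimum
     and a maximum, so no breakpoints are needed */
  html { font-size: clamp(1rem, 0.9rem + 0.5vw, 1.25rem); }
  h2   { font-size: clamp(1.5rem, 1.2rem + 1.5vw, 2.25rem); }
  h1   { font-size: clamp(2rem, 1.5rem + 2.5vw, 3.5rem); }
</style>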

Accessibility, magic and the design system

The power of design systems is that they can make it easier to repeat good things, such as well-engineered components. They can repeat accessibility too, but there's a lot of nuance there: it won't work magically.

Geri Reid (seriously, more conference organisers should invite Geri!) warned us about the notion that a design system will “fix” the accessibility of whichever system consumes it. Sounds like magic, and too good to be true? Yup, because what will inevitably happen, Geri explained, is that teams start using the “accessible” components to make inaccessible things. Yup, I have definitely seen this over and over.

To mitigate this risk of “wrong usage”, she explained, we need design systems to not just deliver components, but set standards and do education, too. At which point the design system can actually help build more accessible sites. For instance, if components contain ARIA, it's essential that the consumers of those components know how to configure that ARIA. In other words, design systems need very good documentation. Which brings me to the last theme: common understanding and why it matters.

Mitigating misunderstandings with better communication

The theme that stood out to me on the day: design system teams commonly have to deal with misunderstandings. Good communication is important. What a cliché, you might say; that's true for anyone in any job. Yes, but it's especially true for our field: design systems force collaboration between such a broad range of people. That includes similar-discipline collaboration, like between designer and developer. Débora explained what can happen if they don't work together closely, or if breaking changes aren't communicated in time.

But it's also about wider collaboration: a design system team also needs to make sense to other departments that have specific requirements and norms. Including those that don't really grasp all the technical details of front-end componentisation, like marketing or (non-web) brand teams, or the people who can help sponsor or promote the project. Samantha Fanning from UCL focused on this in her talk on “design system buy-in”, for which she had a lot of useful tips. She recommended involving other departments early to do “co-design”, rather than presenting (and surprising) them with a finished product. She also shared how it helped her to add design system work as extra scope onto existing projects, rather than setting up a design-system-specific project.

In her talk, Mary Goldservant of the Financial Times also touched on the importance of communication. She shared how they gathered feedback from stakeholders and managed expectations, while working on a large update to Origami, the Financial Times' brilliantly named design system, which includes lots of changes, like multi-brand and multi-platform support.

Wrapping up

It was nice to hang out with so many like-minded folks and learn from them. A lot of the challenges, tools and ideas resonated. Once again, I've realised our problems aren't unique and many of us are in similar struggles, just in slightly different ways.


Originally posted as Breadcrumbs, buttons and buy-in: Patterns Day 3 on Hidde's blog.


“AI” and accessible front-end components: is the nuance generatable?

Companies are rushing to add generative AI capabilities to their products. Some promise to produce front-end components for you. Is that even possible, given the nature of accessibility and the nature of generative AI? And is it desirable?

The short answer is no, to both questions. The risk: that our rush to technological solutions comes at the expense of users.

To find out why, let's consider: how is the process of building an accessible component different between humans and machines? And what are the ethics of our tendency to reach for technological solutions?

The human approach

Let's look at the differences in process first. A human who writes accessible front-end code writes (mostly) HTML elements and attributes based on:

  • their understanding of specs and how they work together (including HTML and WAI-ARIA)
  • what they intend to convey
  • what they know about how assistive technologies interpret the code they write
  • knowledge of browser and assistive technology support
  • looking up the syntax and applying it correctly

(Leaving aside all the useful templating languages and orchestration libraries)

So they translate what they or their designer counterparts want to exist into something that works according to those intentions in a browser. Intentions are key here. Conveying author intentions accurately and understanding user needs are essential to accessibility.

They are likely also involved in writing CSS for things like colours, typography and spacing, which can all affect whether websites have barriers for users. And they add JS for interactive stuff, managing state(s) and more.

The machine approach

A tool that generates code using language models basically predicts lines of code based on statistical likelihood, a bit like an autocomplete. If the output happens to be high quality, that's, in principle, coincidental. A system's success rate can be (and usually is) increased by training models specifically with very good examples. In some cases, systems get very close to high quality, because they have enormous amounts of training data. For accessibility, this data is hard to come by: most of the web has accessibility problems, and what we can see in the automated tests of the WebAIM Million is just the tip of the proverbial iceberg.

While humans map intentions to interactive content and apply their understanding in the process, LLMs don't have intentions or understanding. They just output the blobs of text that best match some input. I think this is fascinating, impressive and often akin to magic. And the output can look (and sometimes be) production ready and high quality. But it's unsurprising that the output can also contain problems. And as reasonable web developers, we've got to look at the problems we create.

To make this more concrete, let's look at v0, Vercel's LLM-based code generator product that the Vercel CEO announced as:

v0.dev produces the kind of production-grade code that we'd want to ship in our own @vercel products.

(From: tweet by Guillermo Rauch, 12:15 AM · Sep 15, 2023)

I mention this specifically, because I think claims like “production-ready” are an overestimation of the technology and an undervaluation of the need for humans. Which has real-world effects on people.

When I read “production-grade”, I read “accessible”. I had a brief look at the first six components in the “featured” section of v0, and found WCAG violations and accessibility barriers in each.

Examples of barriers in each:
  • in math learning app example: buttons marked up as links, progress indication that was only visual with no text alternative, heading marked up as div
  • in kanban board example: list of items not marked up as list, column headers with low contrast, overlapping text on zoom
  • in accessibility helper example: overrides existing shortcuts, icons not marked as decorative
  • in terminal UI: buttons not marked up as buttons
  • in pricing table: icons not marked up as decorative, button with insufficient contrast
  • in music player example: various buttons not marked up as buttons, some buttons not available with just keyboard, buttons without accessible names

This isn't a full conformance audit; I just listed the first few things that stood out. I don't mean this as an attack; I just want to show exactly how common accessibility issues in LLM output are.
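
To illustrate the first kind of issue with markup (hypothetical code of my own, not v0's actual output): a control that behaves like a button, but is announced as a link, next to the more robust alternative.

<!-- barrier: acts like a button, announced as a link (checkAnswer is a made-up handler) -->
<a href="#" onclick="checkAnswer()">Check answer</a>

<!-- robust: a real button -->
<button type="button" onclick="checkAnswer()">Check answer</button>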

You might say it's not all terrible, and that's true. I also found lots of markup that makes things accessible, for instance headings that are useful to navigate in various tools, good contrasts and useful + valid ARIA. But that same level of accessibility often exists on websites that didn't involve LLMs. Lots of websites have fairly useful headings, good contrast on many elements and valid ARIA. It's in the bits where those things aren't in place that web UIs create barriers for people with disabilities. It's the nuance that matters.

The self-confidence issue

In Can generative AI help write accessible code?, Léonie Watson looks at the output of three other generative AI tools (ChatGPT, Bard and Fix My Code). Like me, she found things that weren't terrible, things that were actually helpful and things that constituted accessibility issues. But Léonie points out a different problem: these tools tend to present themselves as authoritative. Regardless of whether they are. She explains:

Other than the generic statements about the need to check its responses, none of these generative AI tools gives any hint that their answers may not be correct or provides any recommended resources for checking.

In contrast, most good blog posts and resources about accessible coding have a lot of nuance in them. They usually can't recommend one authoritative solution that is guaranteed to work at all times (what definition of “work” would they use?). And that reflects making accessible interfaces in general. It involves rabbit holes. There are generally multiple ways and multiple least bad outcomes to balance between.

Ok, but can LLMs at least be partially useful?

Maybe the problem of authoritativeness could be solved. We could tune these tools to output responses that don't present as mansplainy know-alls. But that still leaves us with other problems: inaccessible suggestions, lack of intention and understanding, and lack of innovation.

Falsehoods and hallucinations

LLMs give inaccessible suggestions, as demonstrated in the examples I shared above and in the examples in Léonie's post. If these falsehoods are a consequence of training data, that could, in theory, be improved with different training data (emphasis on “in theory”). But it's also due to “hallucinations”, a problem inherent to the tech that research shows is inevitable. They make wrong stuff up. Output may be nonsensical. At the expense of users. That can't possibly be an improvement to the status quo: even without “AI” there are plenty of accessibility tips on the web with specific bugs or issues; automating the addition of falsehoods and hallucinations to the mix seems absurd.

Lack of intentions

LLM tools don't have intentions, and intentions are necessary for (most) “accessible coding”. In his post Why doesn't AI work for producing accessible code, Alastair Campbell explains that accessibility is not an average. That makes it incompatible with statistical methods for making suggestions.

Lack of innovation

While there are lots of open source component libraries, many UI patterns and their implementations haven't been invented yet. Their assumptions dearly need testing. Relying on LLMs for suggestions means relying on (remixed) existing knowledge, so it's unsuitable for making new patterns accessible.

These three reasons make me wonder: are LLMs useful at all in assisting us in building accessible front-end components? If there is a use, it's probably in helping developers discover resources that do contain nuance, not in code suggestions. Maybe there are also uses outside of component code, but that's for another post (see also Aaron Gustafson's Opportunities for AI in Accessibility).

The focus

Probably something for a post on its own, but I feel I should mention here: a focus on trying to find a “fix” or “solution” for accessibility constitutes a misunderstanding of what accessibility is about. When we make websites, the onus is on us to make them accessible. If we want to try and outsource that work to a tool (that we can't trust), we put the onus on disabled users (see also: disability dongles).

As Adrian Roselli wrote in AI will not fix accessibility, accessibility is about outcomes, not outputs:

Accessibility is about people. (…) When we target output versus outcomes, we are failing our friends, our family, our community, and our future selves. We are excluding fellow humans while we try to absolve ourselves of responsibility. (…)

Eric Bailey posted:

Thinking AI will "solve" accessibility is a bad frame stemming from a technoableist mindset.

The industry seems to me hoping for a magic, binary solution (…) Personally, I'd look to the social model of disability for guidance here: what exactly are we looking to "fix" and why?

In summary

Is the nuance that accessibility usually needs generatable? I think not. Not reliably, anyway. If you take away one thing from this post, I hope it's this warning: LLM-based tools can't be the magic bullet for writing accessible component code that they promise to be. Because nuance, understanding and conveying intent are inherent to accessibility, LLMs cannot be of great help with the accessibility of component code. In addition, they hallucinate inevitably and tend to pose as authoritative while outputting (occasional, but real) falsehoods. The latter can be dangerous and is likely to come at the expense of users.

My suggestion to developers who want help building accessible components? Use a design system that's well tested with people, that is well documented, and that (at least) attempts to capture the nuance. Or get involved in building one. Not everyone wants to do this nuanced and precise work, and not every organisation has the budget. That's fine, but let's not suggest it can be automated away, magically. Let's value the human effort that can make web products actually great.


Originally posted as “AI” and accessible front-end components: is the nuance generatable? on Hidde's blog.


Sharing links

The amount of content on the web is so large that it's tricky to find the stuff worth reading. One of my strategies is to follow people I trust and read what they share. For anyone with interests similar to mine, I've opened a Links section on this website, too (with its own RSS feed).

My plan is to publish no more than a couple of links per day (if any). They will mostly be related to technology and/or ethics. I have taken inspiration from many others, like Jeremy's Links section. Mu-An inspired me to use Shortcuts as a tool to create links and notes.

Why?

I want to publish links on this site mostly for selfish reasons. I've posted links on social media for a long time, but in the black box of algorithms, it's hard to recover them after time passes. I want to at least try and have some sort of system for organising and archiving my interests (tags… I'm adding tags).

I also want to try and experiment with shorter, quicker posts.

How: low threshold publishing with a Shortcut

I read mostly on the go, on public transport or while waiting for an appointment. This means I'm usually not logged into a CMS or near a computer where I can do version control. This site doesn't use a CMS, but I have (Markdown) files in version control that I populate a static site from. To appear on my site, shared links would ultimately need to exist as Markdown files in a specific folder.

This is what I wanted for my link sharing system:

  • Very minimal effort
  • Should work on all devices
  • Should draft a note with both currently selected text and a link to the page, named after that page
  • Should also include the current date in the draft and let me title the note
  • Should place my draft somewhere that I can move to my site quickly

What I ended up with is an Apple Shortcut that takes the current text selection, page name and page URL of a given page in Safari, and creates a blob of text with the current date, selection and link prefilled. When I run it while in Safari, a popup opens with something like this:

--- 
tags: []
date: 2024-01-10
---

> // Selected text

(From: [Name of the page](link to the page))

I can then write some context around the link, optionally add a comma-separated list of tags and then save the file. The filename becomes YYYY-MM-DD-.md, where I can write a title for the post after the date. My site generator grabs that title from the file name.

At the time of writing, I haven't figured out how to then get this file into git, so I save it in a specific folder, which requires me to manually drag it into my site whenever I do reach a computer. That works fine for now; I don't write that many anyway.

Summing up

I'm looking forward to continuing this for a while, and I hope the low threshold publishing will make it so easy that I actually will. Check out /links to find out what I've posted so far.


Originally posted as Sharing links on Hidde's blog.
