Reading List

The most recent articles from a list of feeds I subscribe to.

Ableist interactions

This week, a product launched and claimed to generate “production ready” code. But it also generates code with accessibility problems, which contradicts “production ready”. When someone called this out publicly, a community showed its worst side. What can we learn?

I'll state again: I wrote this to share learnings around community responses to a concern about accessibility issues, because these kinds of replies are common and it's useful to have context. I don't want to add fuel to the issue, which is why I left out links to individual tweets and people.

I do want to call out Vercel, a business with a large voice in the developer community, which I do at the end of the post.

The “production ready” claim

I’ll start by elaborating on my first point. “Has accessibility issues” contradicts “production ready”, for three reasons:

  1. equal access to information is a human right
  2. most organisations have legal requirements to make accessible products
  3. it can cost companies money if people can’t use their products (you wouldn’t stop every fifth customer from entering your shop).

I will note it was an “alpha” launch, but the words “production ready” were used without nuance (neither in marketing nor in the actual tool; a warning banner could go a long way). Fair enough, maybe they want to look at accessibility later (a personal pet peeve though: I recommend shifting left instead, as doing accessibility earlier is easier).

The company could have made different choices. If it is known that accessibility is problematic, maybe the product could have come with a checklist that helps people avoid some of the most common issues? Or some kind of warning or banner that explains the nuances of the “production ready” line? These are choices to be made, balancing what makes the product look less good against what harms end users.

Ableism

Many of the responses were ableist: they discriminate or contain social prejudice against people with physical or mental disabilities. A key point to make here is: don't feel offended if you or your comment is called ableist. Instead, listen and learn (seriously, it's an opportunity). The system is ableist, and on top of that system individuals make comments that can be called ableist. We (as a people) need to identify and break down that system, but people can also learn individually: everyone has a degree of ableism (like they have some degree of sexism and racism). I know I do. I've been learning about accessibility for about 15 years and still learn new things all the time (same for sexism, racism, etc; these are all things that need regular introspection, and they are related, see also intersectionality).

Learning from the responses

Below, I'll list some responses I found problematic, and try and explain why. I'm hoping this is helpful for people who want to understand better why accessibility is critical, and why accessibility specialists point this out.

  • “can’t expect them to make everything perfect, especially in the alpha release” - I think it's fair to have expectations of a company that is a large voice in the web development community
  • “Here's a better framing”, “Why the agressive tone?”, “Why are these people so insufferable? (…)” - this shifts the question about accessibility to one about how the person asking for equality phrases their feedback (this is tone policing, a common derailment tactic)
  • “🤮 being an insufferable dick to well-meaning, well-intentioned people is not going to work for your cause, no matter how good of a cause it is” - this seems to suggest that inaccessibility is ok as long as the intentions are good (that is ableist; equal access cannot be bought off by good intentions only. It is actual equality that is required)
  • “Paying a six figure engineer to add features only 1% of your user base needs only makes profit sense after, idk, 100K active users? [screenshot of ChatGPT to prove the number]” - it’s not only about profit sense, it’s also about ethical sense and legal sense. If you want to focus on profit only: about 20% of people have a disability (says the WHO). Also, almost all people will develop some form of disability throughout their life as they age.
  • ‘You are complaining about a WYSIWYG editor not being accessible to the blind---do you make similar complains about sunsets and VR headsets?’ and ‘I don't think anyone using this site needs accessibility’- this is a fundamental misunderstanding of how people with disabilities use the web. Yes, blind people use WYSIWYG editors (and so do people with other disabilities, which is why creators of these tools care, see the accessibility initiatives for tools like TinyMCE). See also Apple's videos on Sady Paulson, who uses Switch Control to edit videos or on how people use tools like Door Detection and Voice Control.
  • ‘Then what is the argument for accessibility, if not screen readers or search engine crawlers?’ - again, there are many more ways people with disabilities use the web, and beyond permanent disabilities (as mentioned, about 20% of people), there are people with temporary impairments (like a broken arm) and situational impairments

Some responses were particularly hostile and personal. “I'm shocked that you're unemployed ..🤯🤯😅”, “Okay, Karen”, “(…) She wants attention”, “No matter how much you shame Vercel, they don't want you. They never will”, “Go accessibility pimp else where (sic) and pretend that others give a shit”, “[you are] being an insufferable dick”. These are all unacceptable personal attacks.

If you work at Vercel (this was about the v0 product), please consider speaking up (silence speaks too) and/or talking with your community about how accessibility is viewed and how people in the community interact. The quotes in this post are all real quotes, from people defending Vercel. To his credit, the CEO set the right example with his response (“Thanks for the feedback”).

Wrapping up

So, in summary: the “production ready” claim and lack of nuance about what that means is problematic. Pointing it out got responses I'd call ableist, plus a few responses that were plain hostile. All of this reflects badly on the community.

It's not new that accessibility advocates get hostile responses to reasonable requests (or when doing their job). But it's been a while since I've seen so many of those responses, so I wanted to take the opportunity to write down some common misunderstandings.


Originally posted as Ableist interactions on Hidde's blog.

Reply via email

Co-organising Design Systems Week 2023

For the Dutch government, I'm co-organising the third edition of a virtual week-long design systems event, as part of my role in the NL Design System core team. Will it be interesting? Yes!

At NL Design System, we work with a lot of government teams to ultimately try and make a “greatest hits” of their components. Heavily simplified: we want to find the best front-end components/guidelines/examples in use across government, test them (for accessibility and usability) and then publish them for wide reuse. That's a long but (hopefully) very fruitful journey that can result in widely agreed-upon solutions and the avoidance of some common design system pitfalls.

That's not really how design systems used to work. Design systems have come a long way, from pattern libraries for developers who need to copy/paste HTML to much better thought-out systems with communication and support protocols, advanced theming, versioning and solid accessibility guidelines. Over time, the promises we make have probably also evolved.

Promises vs reality

My favourite promise of design systems is the opportunity to try and do high quality front-end work and then spread the result across lots of projects. So, like, you could get the component right and build it accessibly, with a usable API, excellent guidance, and so forth. Good things you could then spread around. Other promises of design systems include cost savings through efficiency and improved user experience through consistency. But realistically, promises remain promises until they are realised (as those who work on design system teams will know well).

That's not to say design system promises are too good to be true. They do often come true. Just look at what some teams out there are doing! But there's a lot to say about approaches, benefits and potential pitfalls for design systems teams. How does everyone do it? Because while the work could be made to sound easy, it often isn't. This is partly why we're organising Design Systems Week (the third edition this year): we want to hear from others about their successes, learnings and challenges. Or… peek inside other teams, basically. And when I say “we”… I should say the team already ran two editions, I'm just helping out with the third.

Design Systems Week

So, Design Systems Week 2023 is coming, in the first week of October! The program is starting to shape up nicely. We'll have speakers from across the Dutch government, such as the Chamber of Commerce and various city governments. New in this year's edition is that we also wanted to hear from people from outside the Netherlands, in government and private sector.

So far, we've announced (among others):

And there are a few more coming that I can't wait for the team to announce.

We know people are busy and don't necessarily have time to watch virtual events all day, so we've designed the sessions to be 20-25 minute “snacks” that you can catch between meetings (live via Teams for government attendees, or via the published recordings afterwards).

I'm really looking forward to this. If you want to join us, you can sign up for individual sessions or check out the main event page.


Originally posted as Co-organising Design Systems Week 2023 on Hidde's blog.


It's pretty rude of OpenAI to make their use of your content opt-out

OpenAI, the company that makes ChatGPT, now offers a way for websites to opt out of its crawler. By default, it will just use web content as it sees fit. How rude!

The opt-out works by adding a Disallow directive for the GPTBot User Agent in your robots.txt. The GPTBot docs say:

Allowing GPTBot to access your site can help AI models become more accurate and improve their general capabilities and safety.
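Concretely, the opt-out is a standard robots.txt rule targeting the `GPTBot` user agent token that OpenAI documents (a minimal sketch; the comment and path are illustrative, the syntax is the usual Robots Exclusion Protocol):

```
# Block OpenAI's GPTBot crawler from the entire site
User-agent: GPTBot
Disallow: /
```

Replacing `/` with a specific path would block only that part of a site instead.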

I get the goal of optimising AI models for accuracy and capabilities, but I don't see why it would be ok for these “AI” companies to just take whatever content they want. Maybe your local bakery's goal is to sell tastier croissants. Reasonable goal. Now, can they steal croissants from other companies that make tasty croissants, unless those companies opt out? I guess few people would answer ‘yes’?

Google previously got into legal trouble for their somewhat dubious practice of displaying headlines and snippets from newspapers' articles. It seems reasonable to reuse content when referring to it, at least headlines; most websites do that. Google does it with sources displayed and links to the original. ChatGPT has neither, which makes their stealing (or reusing) especially problematic.

Taking other people's writing should be opt-in and probably paid for (even if makers of AI don't think so). The fact that this needs to be said and isn't, say, the status quo, tells me that companies like OpenAI don't see much value in writing or writers. To deploy this software in the way they have shows a fundamental misunderstanding of the value of the arts. As someone who loves reading and writing, that concerns me. OpenAI have enormous funds that they choose to spend on some things and not others.

It is in the very nature of LLMs that very large amounts of content are needed to train them. Opt-in makes that difficult, because it would mean not having a lot of the training content required for the product's functioning. Payment makes that expensive, because lots of content means lots of money. But hey, such difficulties and costs aren't the problem of content writers. OpenAI's use of opt-out instead of opt-in unjustifiably makes it their problem.

For that reason alone, I think the only fair LLMs would be the ones trained on ‘own’ content, like a documentation site that offers a chatbot-route into its content in addition to the main affair (an approach that is still risky for numerous other reasons).


Originally posted as It's pretty rude of OpenAI to make their use of your content opt-out on Hidde's blog.


“AI” content and user centered design

Large language models (LLMs), like ChatGPT and Bard, can be used to generate sentences based on statistical likeliness. While the results of these tools can look very impressive (they're designed to), I can't think of cases where the use of LLM-generated content actually improves an end user's experience. LLM output is often (even if not always) nonsensical, false, unclear and boring. Hence, when organisations force LLM output on users instead of paying people to create their content, they don't center users.

User centered design means we make the user our main concern when we design. When I recently told a friend about this concept, explaining my new job is at a government department focused on centering users, they laughed in surprise. “This is a thing?”, they asked. “What else would you make the main concern when you design?” It made little sense to them that users had to be specifically centered.

If you work in tech, you've probably seen projects center things other than users. Business needs, the profit margin, search engines, that one designer's personal preference, the desire to look as cool as a tech brand you love… and so on. Sadly, projects center these instead of users all the time. Most arguments I've heard for using LLMs in the content production process quoted at least one of these non-user-centric reasons.

Organisations are starting to use or at least experiment with LLMs to create content for web projects. The hype is real and I worry that, by increasing nonsense, falsehoods and boredom, LLM-generated content is going to worsen user experiences across the board. Why force this content on users? And what about the impact of LLM-generated content beyond individual websites and user experiences: it's also going to pollute the web as a whole and make search worse (as well as itself).

None of this is new; we've had robot-like interactions way before LLMs. When the tax office sends a letter that means you need to pay or receive money, that information is often buried in civil servant speak. When Silicon Valley startup founders announce they were bought, they will mention their “incredible journey”. When lawyers describe employment, when customer service phone lines pronounce “your call is important to us” (a great read, BTW)… this is all to say that, even without LLMs, we're used to people who sound more robotic and less human. They speak a lingo.

Lingo gets in the way of clarity. Not just because it feels impersonal and boring; it is also made-up, however brilliantly our prompts are ‘engineered’. Yes, even if it's sourced—or stolen, in many cases—from original content. That makes it like the lingo humans produce, but much worse. Sure, LLM-generated content could give users clarity, except in a way that's only helpful if the user already knows a lot about the thing that is clarified (so that they can spot falsehoods). This is the crux and why the practical applicability of LLMs isn't nearly as wide as their makers claim.

I can see how a doctor's practice / government department / bank / school could save money and time by putting a chatbot between themselves and the people. There are benefits to one-click content creation for organisations. But I don't see how end users could benefit, at all. Who would prefer reading convincing-but-potentially-false chatbot advice to a conversation with their doctor (or force the bot on others)? Zooming out from specific use cases to the wider ecosystem… aren't even those who shrug at ideals like centering humans worried that LLM-generated content wipes out the very “value” capitalists want to extract from the web (by enshittification)? I certainly hope so.

Addendum: I didn't know when writing this post that OpenAI's CEO Sam Altman had literally written that he looked forward to “AI medical advisors for people who can't afford care”. From his thread on 19 February 2023:

the adaptation to a world deeply integrated with AI tools is probably going to happen pretty quickly; the benefits (and fun!) have too much upside.

these tools will help us be more productive (can't wait to spend less time doing email!), healthier (AI medical advisors for people who can’t afford care), smarter (students using ChatGPT to learn), and more entertained (AI memes lolol).

(…)

we think showing these tools to the world early, while still somewhat broken, is critical if we are going to have sufficient input and repeated efforts to get it right. the level of individual empowerment coming is wonderful, but not without serious challenges.

He talks about “individual empowerment [that] is wonderful”; I think it's incredibly dystopian.


Originally posted as “AI” content and user centered design on Hidde's blog.


Joining CSSWG

This week I joined the CSS Working Group (CSSWG) as an Invited Expert. I'm super grateful for this chance to try and make myself useful in a group whose outputs shaped so much of my professional interests (they make CSS!).

I'm somewhat nervous about this, but also not completely new to CSS or web standards. My background with CSS is that I've been a long-time fanboy of the language, and a keen follower of new developments through events (nine-time CSS Day attendee, twice as a speaker). The CSSWG folks I've met so far are very friendly, no exceptions. My background with standards is that I've participated in the Open UI Community Group for just over two years, and worked as W3C Staff to promote accessibility standards, help simplify developer documentation and build standards-related tooling like the ATAG and WCAG-EM Report Tools. As such, I am experienced with some of the W3C process.

Despite not being completely new, I've yet to figure out my focus and where I could help. The CSSWG does a daunting amount of work (see the charter), and there are certain specs and features I'm especially interested in, like the ones close to Open UI. I think I will start by attending the telecons, to listen and learn. Ultimately, I hope to make myself actually useful, maybe by helping with demos and content (like explainers or explanatory blog posts or talks), or by sharing a web developer's perspective. And opinions, maybe!

I'm excited for this opportunity, many thanks to Tantek, Miriam and others for their encouragement. I look forward to getting involved in standards outside accessibility and, yeah, trying to make myself useful 🙃


Originally posted as Joining CSSWG on Hidde's blog.
