Reading List

The most recent articles from a list of feeds I subscribe to.

Running for the AB (2): meet the candidates

AC members can vote via the voting form (AC-only). They can also contact me with questions; I'd be happy to chat about anything mentioned in this post or my statement.

What are the challenges facing W3C as an organization in the next two years, and what skills and interests will you bring to the AB to help with addressing them?

As a web standards organisation, we're not immune to general issues that the technology sector faces. I see:

  • technology that's rolled out too rapidly, in ways that don't benefit everyone, sometimes while inclusion budgets are explicitly cut.
  • private interests that are increasingly put before the greater good, resulting in more web activity happening inside silos and walled gardens, usually not in the interest of users.

Both issues apply to the web and work against the W3C Vision and the Ethical Web Principles. We must help address them, with a focus on what threatens the web, its users and the wider web community. With my academic background in ethics, professional experience in inclusion and in writing (see my blog) and personal passion for the indieweb (see my site), I hope to contribute to that effort. How? By giving input towards conflict resolution (like W3C Council) and by contributing to new documents or updates to existing ones.

The AB has been discussing its role in the community and how it may need to be updated to reflect the new governance structure. What do you think the AB's role is, and what do you believe the AB needs to do to fulfil it?

There is the role as described in the Process: to provide “ongoing guidance to the Team on issues of strategy, management, legal matters, process, and conflict resolution”, as well as tracking and helping resolve issues, managing the Process, and participating in the W3C Council.

As a newcomer to W3C governance, I don't have a strong opinion on whether any of that should change… a lot of what I know and think about the AB was shaped by reading various AB issues on GitHub and talking to current AC and AB members (thanks to everyone who made time to talk to me). From that, my working conclusion is that a large part of what the AB does is fostering the effectiveness of W3C, in the specific ways W3C and (especially) its members want it to be effective: by producing documents like the Vision, Code of Conduct and Process, and by aligning W3C work with related documents, like the Ethical Web Principles and Privacy Principles.

With these documents in existence, I would like to see the focus shift more towards fulfilling the consensus we reached in them. I think we should find ways to talk to policy makers more, and advocate more within our own communities. And then take our advocacy beyond W3C groups, as widely as possible, including into the larger web development community.

What do you think of the W3C Strategic Roadmap reported on at AC 2025? Which parts of it do you think W3C should prioritise in the coming year or two?

The roadmap seems solid to me; its priorities already form a list of what we should prioritise, and they can work together effectively.

I personally think W3C should prioritise the impact framework, global outreach and policy engagement… three priorities out of six may seem like a lot, but I believe these particular three can strengthen one another, and be effective together.

Which AB priority projects do you want to personally spend time working on?

I'd love to work on the Vision and on collaboration with TAG and Team. There seems to be room for more priority projects, including wider policy alignment.

Apart from that, I could also help with other priorities, including AC Forum Facilitation. I would also like to follow Process work more closely from the sidelines, to learn, and to prepare for maybe getting involved at a later stage.

What do you as an AB candidate believe W3C should be doing about the impact of AI or other emerging technologies on the Web?

The W3C needs to continue to ensure we are not surprised by emerging technologies and are prepared to adapt to changes. I welcome efforts like the Team's AI & the web report and the APA Working Group's work on accessibility and XR. I am convinced we can do this. Contrary to how some of these technologies are marketed, they have not come out of nowhere, especially if we zoom out from the latest news. For instance, the term “artificial intelligence” was coined in the mid-1950s. Many technologies grouped under the term have developed over decades. It's impossible to predict the future, but feasible to stay on top of developments and their possible impacts on the web.

While we cannot influence whether or how companies or people will use these technologies, we can and should help shape consensus on the right way to do so, aligned with our values and vision. If that sounds vague, what I mean is: we can recognise harms early and find agreement on what mitigation looks like, so that these technologies, or their web-based manifestations, truly are safe, interoperable and designed for the good of all people. We can also collaborate with other SDOs on specific threats, for instance in the IETF's AI Preferences group.

Our output should be such that policy makers can make use of it in their work, like they were able to do with other W3C standards, such as the Web Content Accessibility Guidelines.

Thanks

That's all my answers. Thanks for reading and, if you're able to, for considering your vote in this election. As mentioned, feel free to get in touch with any questions.


Originally posted as Running for the AB (2): meet the candidates on Hidde's blog.


Kandidaatstelling voor AB

My name is Hidde de Vries, standards specialist at Logius (part of the Dutch central government). With this post, I'm asking for your vote in my election to the W3C's Advisory Board.

My enthusiasm for the web goes back some 20 years, to when I built my first website in high school. What a great platform! Since then, I have worked as a web developer and digital accessibility specialist, among other things, often building bridges between those two fields.

At the W3C, I was previously part of the Team (at WAI, 2019-2022), where I worked on accessibility guidance and on the WCAG and ATAG Report Tools, among other things. I'm currently active in the Open UI Community Group, various accessibility-related Working Groups and the Web Sustainability Interest Group.

If elected, I want to:

  • improve interaction with the AB: by seeking more contact with Members and the wider community, to make sure their input is reflected in our advice, and by looking for consensus on governance questions between the Board, TAG, Members and the various groups.
  • make W3C more attractive: people already come to W3C to work on web standards, but we can reach a larger group if we improve onboarding. I want to find out what currently keeps people from getting started and use that information to make improvements.
  • bring a fresh voice: I'm not really new to the W3C community, but I am fairly new to its governance side. I hope that fresh perspective will be valuable to an AB that is searching for a new identity and mission.
  • put users first: almost too obvious to mention, as users are already central to the W3C Vision, but in practice things like accessibility, internationalisation and privacy can lose out. This requires knowledge of both the technology and the users, and I want to use mine to improve processes.

What I want to contribute, in no particular order:

  • specialism in web accessibility (long-standing) and sustainability (more recent). I learn quickly and like diving into new things.
  • experience in various sectors that need and use the web: government, a browser vendor and the maker of an authoring tool (a CMS).
  • extensive experience communicating about standards to technical communities, the web developer community in particular. I have also organised several conferences.
  • previous experience at W3C, as Invited Expert, Member and Team.
  • recent experience with and deep technical knowledge of web development.
  • a background in philosophy and (briefly) artificial intelligence, so I bring a humanities perspective, not just a technical one. It also helps me look at issues from multiple angles.

Questions? Feel free to message me on W3C Slack or email me at hidde@hiddedevries.nl.


Originally posted as Kandidaatstelling voor AB on Hidde's blog.


Running for the AB

My name is Hidde de Vries, accessibility standards specialist at Logius (part of the Dutch government). I would appreciate your support of my running for the W3C's Advisory Board.

AC members can vote via the voting form (AC-only).

I've been in love with the web ever since I built my first website in high school, almost 20 years ago. What a cool platform to build for! Since then, I've worked professionally as a web developer and accessibility specialist, often building bridges between these fields.

At the W3C, I've been a member of the Team (at WAI, 2019-2022), where I worked on accessibility guidance, authoring tool accessibility, the WCAG Report Tool and the ATAG Report Tool. I'm currently a participant in the Open UI Community Group, various accessibility-related Working Groups and the Web Sustainability Interest Group.

If elected, I want to:

  • help improve what engagement with the AB looks like, by doing more outreach to Members and the wider community, to ensure their input is represented in our advice, and by working to find consensus on governance issues between Board, TAG, Members and groups.
  • make W3C more attractive; people go to the W3C for web standardisation today, but we can reach more potential if we improve onboarding. I want to find out what stops people from getting started or engaging with our work and use that to make improvements.
  • bring a fresh perspective on the future; though I can't say I'm new to the W3C community, I am new to most governance aspects of it. My hope is this fresh perspective can be an asset to an AB that is orienting towards a new identity and mission.
  • advocate for users, the primary constituents of the web. Almost too obvious to call out, as users are central to the W3C vision, and who else would we advocate for? In practice, though, user needs around accessibility, internationalisation and privacy are easily overlooked. Addressing them requires matching up (deep) technical specifics with user needs. I want to use my experience with both to help improve the process.

What I'd like to bring to the AB, in no particular order:

  • specialism in web accessibility (long-standing) and web sustainability (as of recently). Also a fast learner, happy to dive into things I'm not specialised in.
  • worked in a broad range of sectors that use and need the web: public sector, including government, a browser vendor and an authoring tool vendor.
  • proven record of communicating about standards development to tech communities, the web developer community in particular (see my blog and talks). Also co-organised developer conferences.
  • previous experience at W3C, as an Invited Expert, Member and Team.
  • hands-on experience and deep technical knowledge of web development.
  • background in Philosophy and (briefly) AI, meaning I could bring a humanities perspective and not just a technological view, and that I am trained to view issues from many angles.

Any questions: feel free to reach out via W3C Slack or email me on hidde@hiddedevries.nl.


Originally posted as Running for the AB on Hidde's blog.


Is “ethical AI” an oxymoron?

It depends on who you ask. But the current wave of generative AI has unpleasant side effects that are hard to ignore: large-scale copyright infringements, environmental impact and bias.

(In this post, I'll use “AI” as shorthand for LLM-based systems used to generate or modify content, like text, code, images and videos, and to answer questions.)

Our industry seems divided on generative AI. Some are loudly excited, others use it begrudgingly. Some hear from management that they must use more AI. We probably all use LLMs secondhand, as they are under the hood of tools like translation services and auto captioning. In this post, I'm specifically talking about chatbots that generate responses to prompts.

They seem to save people time. At the same time, many people avoid the tools altogether. Their results are often underwhelming, as they hallucinate and yield cringy results. And then there are the aforementioned ethical side-effects.

As someone who likes AI in principle (I did some at university), the promise of “ethical AI” sounds good to me. And necessary. But “ethical” is not a very precise qualification. It's a promise of sorts.

How can we tell if the promise is fulfilled? If this isn't clear, the phrase is a free-for-all, one that marketing departments will like more than the rest of us.

What's ethics?

Ethics, and I'm going to simplify it a bit here, studies how to act and what to do. It is practised as a field in academia, but everyone can practise it, really. You survey the facts about a given situation and weigh them to decide on a course of action.

You can personally weigh them by asking yourself: would I like to be on the receiving end of these specifics? Do I think the world at large benefits more than it is harmed? Am I sure?

There's no standard ethical answer. What people are ok with varies (see also: politics).

Corporations, including AI vendors, do this too, but they don't always match their statements with their actions. The ethics departments are usually different from the sales or product departments.

With that in mind, let’s survey the facts about generative AI. A lot needs to happen to make it work, but I'll focus on three things: training models, sourcing electricity and crafting a system that can respond to prompts with even-handed and reasonable-looking answers.

Training language models

What's involved?

To make generative AI work, engineers automatically feed the system enormous amounts of examples of stuff, until the machine is at some point able to recognise patterns in the stuff without being told what’s what. That way, the machine ‘learns’ to represent the stuff, plausibly. For instance, to reproduce text, the system needs to be fed very large amounts of text. The amount is key here; it generally won't work without a lot of so-called “training data”.
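To make that concrete, here is a deliberately tiny sketch of the principle: a bigram model, written in TypeScript. Real LLMs use neural networks and orders of magnitude more data; the corpus and names below are made up for illustration only:

  // Toy illustration: a bigram “model” picks up statistical patterns
  // from example text and produces plausible-looking continuations.
  function train(corpus: string): Map<string, string[]> {
    const follows = new Map<string, string[]>();
    const words = corpus.split(/\s+/);
    for (let i = 0; i < words.length - 1; i++) {
      const next = follows.get(words[i]) ?? [];
      next.push(words[i + 1]);
      follows.set(words[i], next);
    }
    return follows;
  }

  function generate(model: Map<string, string[]>, start: string, length = 10): string {
    let word = start;
    const output = [start];
    for (let i = 0; i < length; i++) {
      const options = model.get(word);
      if (!options) break; // no pattern learned for this word: not enough data
      word = options[Math.floor(Math.random() * options.length)];
      output.push(word);
    }
    return output.join(' ');
  }

  const model = train('the web is for everyone and the web is for all people');
  console.log(generate(model, 'the')); // e.g. “the web is for all people”

With a ten-word corpus the output is barely coherent; with billions of words, the same idea, at neural network scale, starts to produce fluent text. That is why the training data question cannot be avoided.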

Major LLM providers train their tools on content they have taken from the web, including copyrighted material. Meta, for instance, trained their Llama 3 model on Library Genesis data, which includes millions of pirated academic papers and novels, from Sally Rooney's to Min Jin Lee's. OpenAI, the company that built ChatGPT, told a UK House of Lords investigation committee that they cannot “train their AI models without using copyrighted materials”.

The amount of copyright-free data is not large enough to build language models as large as they need, and the AI vendors can't afford or arrange to pay all the copyright holders whose content they want. OpenAI operates at a loss with ChatGPT Pro. So the companies preemptively take what's not theirs, while attempting to have governments change the rules.

There is a lot to unpack regarding these practices, but maybe my biggest concern is with the perceived value of all that original content. Art in particular. Novels, paintings, records and cinema, unlike LLMs, are the bones of our society. They can move us, make us think, take a stance and capture complex human experiences in ways that resonate with us, humans. There’s something ironic about taking that work, in order to make systems that don’t do these things (creativity cannot be computed).

OpenAI CEO Sam Altman, who's part of the subset of tech folk that is happily still on Twitter, is worried too. On March 27, in one of his first posts since he tweeted that the new president “will be incredible for the country”, he said they would introduce limits:

it's super fun seeing people love images in chatgpt.

but our GPUs are melting.

we are going to temporarily introduce some rate limits while we work on making it more efficient. hopefully won't be long!

He shows concern with server costs, and remains silent about stealing art.

They then ‘finetune’ the models. This work is often outsourced to low-wage workers in the global south, and it has traumatised some of them.

Making it more ethical

Training models, as it happens today, is not ethical, in three ways:

  1. large amounts of content are taken without permission.
  2. the work of writers, musicians, photographers, illustrators and other artists is treated as merely input, rather than treasured as valuable in itself. That's bad for the artists, their audiences and the world at large.
  3. some workers are traumatised and underpaid during finetuning.

For ethical training, it seems to me that vendors should at least train models without stealing, and finetune models without traumatising and underpaying workers.

Regarding valuing art: the tools could simply refuse to imitate original works. To the prompt “Make this in the style of Van Gogh”, an ethical model could return “No, not happening, go make art of your own.”

They do this, to some extent; the “refusal” is actually mentioned in OpenAI's system card for GPT-4o on native image generation (2.4.2). But when all my social media feeds are full of images that do resemble original art, and even Sam Altman's profile picture is in Studio Ghibli style, those are just promises. Promises that are clearly not fulfilled.

Using electricity

What's involved?

Generative AI also won't work without a lot of electricity. Most of it is used during the ‘learning’ phase, and some more during the usage stage. The International Energy Agency projects that electricity demand from AI-optimised data centres will more than quadruple by 2030.

Aside from the electricity used for the actual training, the collection of data for training sets adds to electricity usage too. See Xe Iaso's account of how Amazon's AI crawler makes their server unstable, Dennis Schubert, who said 70% of the work his server does is for AI crawlers, Drew DeVault of SourceHut, who spends 20-100% of his time in a given week on mitigating hyper-aggressive crawlers, and Wikimedia, who saw a 50% increase in traffic to their multimedia content due to crawlers. More traffic = more emissions.

The emissions increase associated with AI contributes to faster global warming, because not all energy is green all the time. Period. Sometimes, in some locations, there is 100% green energy, which has very low emissions, and there are data centres that take advantage of this. Also, Amazon, Microsoft, Alphabet and Meta are the four largest corporate purchasers of renewable energy; the latter three even buy 100% of their energy from renewable sources. Some of these big tech companies are also using AI to increase their efficiency, which is nice.

But as this paper by Alexandra Sasha Luccioni, Emma Strubell and Kate Crawford, which compares the “AI is bad for the climate” and “AI is actually good for the climate” perspectives, concludes:

We cannot simply hope for the best outcome.

The consequences of global warming are too big to ignore.

Making it more ethical

The electricity usage of AI is not ethical in these ways:

  • data centre electricity usage has increased exponentially (!), due to AI model training and usage, with insufficient consideration of utility (use cases beyond “it's super fun”).
  • AI crawlers that gather content to train on are increasing the energy consumption of individual websites.

To make it more ethical:

  • We should all use AI only if we are sure that it is strictly necessary and more helpful than less energy-intensive technologies. Not “just because we can”, which seems to be the current mode.
  • Vendors should develop their models more efficiently and reduce emissions at scale (they are doing both, which is great!).
  • AI crawlers should respect websites' preferences around being crawled (many websites and services don't want to be crawled), to avoid increasing their electricity usage; see the sketch after this list. The IETF is looking at standardising building blocks for expressing such consent.
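Today, the closest thing websites have is the Robots Exclusion Protocol. A minimal sketch of what opting out can look like; the user agent names are examples that vendors document today (GPTBot is OpenAI's crawler, CCBot is Common Crawl's), so check current documentation:

  # robots.txt: ask AI training crawlers to stay away
  User-agent: GPTBot
  Disallow: /

  User-agent: CCBot
  Disallow: /

  # everyone else can crawl as usual
  User-agent: *
  Allow: /

Crucially, this only works if crawlers choose to honour it, which is exactly why standardised and more enforceable preference signals matter.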

Representing the world accurately and fairly

The final ingredient generative AI needs is models trained specifically in ways that lead them to represent the world accurately and fairly. Today's Large Language Models struggle with that. Without exception, they have a bias problem.

Bias disproportionately affects minorities; it further marginalises the marginalised. Practical examples include front-end code generators that output inaccessible code, automated resume checkers that statistically prefer men, medical diagnosis systems that perform worse on black people and chatbots that assume female doctors don't exist. Some extremist LLM vendors, like Elon Musk’s xAI, specifically train their models to include more bias, in attempts to make them ‘anti-woke’ (whatever that means, if not a lexical tool for hatred). Even AI vendors that try hard to reduce bias struggle to keep it out, as it is ingrained in large parts of the training data.

There can also be a problem of bias in the people employing AI. For instance, when a product manager who is not disabled or familiar with the disabled experience decides AI-generated captions will do. Sidenote: I also hear from disabled friends and colleagues that various AI-based tools remove barriers, which is great.

More ethical implications

There are more aspects to generative AI that pose ethical questions than I can cover here.

Other surveys of the ethics of generative AI include the Ethical Principles for Web Machine Learning by the W3C's Web Machine Learning Working Group, and UNESCO's Ethics of Artificial Intelligence.

Summing up

If you're not sure if or how you want to use generative AI, I hope this post provides a useful overview of possible considerations. Everyone will need to decide for themselves how much AI usage they find OK for them. I do hope we'll stop pretending not to know how the sausage is made.

For me personally, the current ethical issues mean that I default to not using generative AI. To me, “ethical AI” is currently an oxymoron. Still, “defaulting to” is the key phrase here. I don't think all use of AI is problematic; it is specifically throwing AI at things “just because we can” that is at odds with the ethical considerations I laid out in this post.

Your mileage may vary. And in case it was not clear from my post: I recognise everyone's needs are different. People regularly share use cases that I hadn't thought of. Sometimes utility could outweigh downsides. I'm curious to hear yours.

We're not currently weighing the ethics of AI against its utility; not everyone got the memo. If you are able to, please help by talking about AI ethics with your colleagues, starting from whichever part of this post, or of the sources listed in it or elsewhere, makes sense to you. Cheers!


Originally posted as Is “ethical AI” an oxymoron? on Hidde's blog.


Tag, you're it

People have been blogging their answers to a set of questions about blogging. Steve tagged me. There are multiple projects I haven't procrastinated on nearly enough, so let's go!

Why did you start blogging in the first place?

I started when I went on a one-year trip and was asked by worried family members to provide regular updates on my whereabouts. I ended up really enjoying capturing moments, surprises and stories.

Later, I loved the idea of having a place of my own to put my (mostly) tech-related thoughts, learnings, notes… to refer back to at a later time. By making it a public place I could also send links to others and get feedback.

What platform are you using to manage your blog and why did you choose it? Have you blogged on other platforms before?

My blog is basically a lot of Markdown files in a folder. I currently use Eleventy to turn that into HTML files and wrap the rest of a website around the blog. I keep a very minimal approach to adding other things to the mix: no preprocessors.
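For those unfamiliar with Eleventy: a setup like this can be as small as a single config file. A minimal sketch, not my actual configuration, with example folder names:

  // .eleventy.js: turn Markdown files in posts/ into HTML in _site/
  module.exports = function (eleventyConfig) {
    return {
      dir: {
        input: 'posts',
        output: '_site',
      },
    };
  };

Running npx @11ty/eleventy then does the Markdown-to-HTML conversion.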

Previously, I used the Perch CMS.

I try not to focus too much on process and tools at the expense of actually writing, so I wanted my process to be as simple as possible: very little between the writing and the publishing.

Various times, I resisted the urge to make my own blogging platform.

How do you write your posts? For example, in a local editing tool, or in a panel/dashboard that’s part of your blog?

Most posts start off as Markdown files in iA Writer. This is usually on a phone, waiting for a bus or something. They end up in my text editor whenever I'm close to publishing and ready to add images or other illustrations.

I have a folder of blog posts that contains many drafts, which I return to every once in a while. Many never make it live; some do after a while.

When do you feel most inspired to write?

It's a cliché, but this is usually when I am doing something else. During a walk, swim or shower, for instance.

I can also feel inspired to write if I figure something out that seems useful for later, or for others. Or when I am working on something that puts me in the position to have information that isn't readily available elsewhere… not secrets, but the kind of thing that could be hard to piece together.

Do you publish immediately after writing, or do you let it simmer a bit as a draft?

It stays in draft until it's finished, but once it's finished, I publish it almost immediately, except when I need help from others to fact-check.

What are you generally interested in writing about?

The web platform, and front-end development, but also the ethics of technology, and things I learned about web accessibility. Sometimes about something more personal.

What’s your favourite post on your blog?

This is so hard to say.

The one I return to most is Console logging the focused element as it changes, because I'll never remember.
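For the curious, the gist of that technique is a few lines along these lines (one way to do it, not necessarily the exact snippet from that post):

  // 'focusin' bubbles up to the document, unlike 'focus',
  // so this logs the newly focused element on every focus change
  document.addEventListener('focusin', () => {
    console.log(document.activeElement);
  });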

Maybe as a favourite, I'll nominate this one: The web is fast by default, let's keep it fast, as I remember being baffled at the time that something that could have been 2 KB was 1.3 MB. I still struggle to accept that organisations have $reasons, however real, to overcomplicate their websites to such degrees.

Who are you writing for?

Similar to Steve, I write mostly for myself. It does depend on the post though… some of my posts are intended for a broader audience.

Any future plans for your blog? Maybe a redesign, a move to another platform, or adding a new feature?

When I find the time, I want to expand the log part of my blog. On my regular blog, I want to add more ways to comment. I will try hard not to update the architecture, because it's working fine.

Next?

The format dictates I nominate other people to write a post like this. I'd like to tag Hui Jing, Bruce, Sara and Kilian.


Originally posted as Tag, you're it on Hidde's blog.
