Thanks to Peter and Matijs for feedback on an earlier draft. Thanks do not imply endorsements.
Reading List
The most recent articles from a list of feeds I subscribe to.
How AI is made matters, confirms “Atlas of AI”
Artificial Intelligence (AI) is often described as smart, efficient and superhuman. That sounds quite nice, but it leaves out a lot of detail. In her book Atlas of AI, Kate Crawford maps out the systems of power, the impact on society and the stakes that come with deploying AI. That’s important, because we, and the systems that affect us, increasingly rely on these technologies.
There are lots of worthwhile pursuits in AI: we can make computers understand our speech, teach medical devices to recognise irregularities in scans and analyse financial transactions for fraud. But there are plenty of applications that may not actually contribute all that much, or that are downright hostile if you consider what could go wrong. Systems deployed to surveil workers, filter resumes or drive cars (sorry) may not be worth their damage.
Natural resources and people
The impact of AI starts with the raw materials of machines, Crawford explains. Computers need chips, and those require materials like tin and rare earth minerals. It’s easy to forget that mining such materials isn’t without consequences: it badly affects workers and their working conditions in mines, their direct environment (pollution) and the earth at large. Once the chips exist and machines run, there is also an environmental impact. The data centres that compute our data require enormous amounts of gas, water and electricity. That one of the largest US data centres uses 1.7 million gallons of water per day is just one of the many examples Crawford provides. When a company says ‘our product uses AI’, their product also uses these raw materials, these data centres and this energy.
Reassuringly, most big tech companies have very green ambitions, like powering 100% of their data centres with renewable energy. This is good. But as long as not all of the world’s energy is renewable, we’ve got to be sure there are good reasons for any energy that is used. We’ve got to focus on using less of it, and endlessly processing ginormous amounts of data does not help.
Input data and classification
Crawford also writes about the basic ingredient of AI, and specifically of machine learning: data. To teach machines about a thing, we need to show them examples of the thing. To produce AI, a company needs to collect data, ‘mine’ it, if you will: text messages, photos, audio or video. It is often collected without the consent of the people involved, like the people who are in the photos. Crawford warns us against ‘the unswerving belief that everything is data and is there for the taking’ (AAI, 93).
There are exceptions, like Common Voice, a Mozilla project that aims to ‘teach machines how real people speak’ by asking people to ‘donate their voice’. The teaching-machines part is not unlike other machine learning projects, but the data collection is. Common Voice data is not taken from some place; people provide it consensually and for the very purpose of training machines. This is uncommon (pardon the pun).
And then there is the issue of context. When images become data, their context is treated as irrelevant: they serve to optimise technical performance and nothing else. This data is now seen as neutral, but, Crawford explains, it is not:
‘Even the largest trove of data cannot escape the fundamental slippages that occur when an infinitely complex world is simplified and sliced into categories’ (AAI, 98)
Taking data out of context also doesn’t remove all context. A data set can affect privacy in unexpected ways. Crawford discusses a data set of taxi rides that may have looked quite innocent and useful at the outset. What could go wrong? Well, from the data, researchers could deduce religions, home addresses and strip club visits.
Classification
Machine learning requires input data to be classified: this is a fire hydrant, that is a bicycle. Some firms pay people through Mechanical Turk; there are workers who spend all day describing images. Other companies demand the work be done for free by users who want to log in (hi, reCAPTCHA).
Perhaps more important than who does the work are the philosophical problems of classification. Is there a fixed set of categories? Does everything neatly fit into them? Such questions have boggled philosophers’ minds for, literally, millennia. It’s safe to say most would answer no to both questions. Yet AI firms seem more confident. They have to be, as classification of data is at the heart of how machine learning works.
The harm caused by classification problems depends on the subject, but it is widespread. Maybe the AI that was trained with some wrongly classified fire hydrants won’t cause too much trouble. But there are endless examples of AI exhibiting awful bias, including towards race and sexual orientation. Bias in AI has been treated as a bug to be fixed, but it should be seen as a problem of classification itself. ‘Classification is an act of power’, says Crawford (AAI, 127). The categories can be forgotten and invisible once their work in the model training phase is over, but their impact remains.
You can’t fix bias by merely diversifying a set of data, Crawford says:
‘The practice of classification is centralizing power: the power to decide which differences make a difference’ (AAI, 127)
Sometimes researchers measure the wrong thing, because they are constrained by what they can measure. For example, it’s easy to measure skin colour, but that’s not the thing to measure if you want to understand something about race or ethnicity. It doesn’t work that way: ‘the affordances of the tools become the horizon of truth’, Crawford concludes.
Open data collections often show categories that reinforce racism and sexism, and that’s just the sets we can see. Is there a reason to assume the data sets of Facebook and Google avoided these problems?
Emotion recognition and warfare
The last two chapters of “Atlas of AI” consider two of the scariest answers to ‘what could possibly go wrong?’: AI tech for emotion recognition and warfare.
[Image: me at The Glass Room in Leeuwarden, in front of an artwork about emotion recognition]
Research in emotion recognition, Crawford writes, comes from a ‘desire to extract more information about people than they are willing to give’ (AAI, 153). It is already used by HR departments to scan applicants and try to match their facial expressions to personality traits.
Can it work? Scientists including Paul Ekman tried to find out whether emotions are universal. Some were confident, but as yet there is no consensus on which emotions exist and how they manifest (AAI, 172). Crawford explains there is growing criticism that doubts the very possibility of a clear enough relationship between facial expressions and emotional states (AAI, 174). It’s mere correlation, she says:
In many cases, emotion detection systems do not do what they claim. Rather than directly measuring people’s interior mental states, they merely statistically optimize correlations of certain physical characteristics among facial images (AAI, 177)
So, like other AI systems, AI for emotion recognition relies on categorisation, which, as Crawford explained, is flawed and an act of power. It therefore has many of the same problems:
[An analysis of the state of emotion recognition] returns us to the same problem we have seen repeated: the desire to oversimplify what is stubbornly complex, so that it can be easily computed, and packaged for the market.
And while this may work for companies selling AI systems, the end result is likely a lack of nuance:
AI systems are seeking to extract the mutable, private, divergent experiences of our corporeal selves, but the result is a cartoon sketch that cannot capture the nuances of emotional experience in the world.
Governments use AI for warfare too. In the US, the ‘Third Offset’ strategy includes leveraging Big Tech to create an infrastructure for warfare (AAI, 189). In ‘Project Maven’, the US Department of Defense paid Big Tech firms to analyse military data from outside the US, like drone footage, to build AI systems that could recognise ‘vehicles, buildings and humans’ (AAI, 190).
Crawford describes a shift from debating whether to use AI in warfare at all to, as former Google CEO Eric Schmidt said, whether AI would be able to ‘kill people correctly’ (AAI, 192).
‘Militarised forms of pattern detection’, Crawford explains, go beyond national interests when companies like Palantir sell their technology widely (e.g. to local law enforcement and supermarket chains). They are focused not on enemies of the state, but ‘directed against civilians’. Machine-learning-based pattern recognition tech is used to find ‘illegal’ immigrants to deport (‘illegal’ in scare quotes, because no humans are illegal). In Europe, IBM was tasked to use data analysis to assign refugees a ‘terrorist score’, capturing the likelihood of them being a terrorist. This would be a bad idea even if AI systems were 100% able to make such distinctions, and let’s be clear: they never can be, because of the inevitability of bias and the very nature of categorisation. If even humans don’t universally agree about what constitutes a terrorist, and they don’t, how can any human categorisation successfully help teach machines?
When systems impact government decisions, they require oversight, and there is very little of that, explains Crawford (AAI, 197). Without it, we risk making things worse, she says:
Inequity is not only deepened, but tech-washed, justified by the systems that appear immune to error yet are, in fact, intensifying the problems of overpolicing and racially biased surveillance
Summing up
Atlas of AI provides insight into what we give up when employing AI. There may be applications that are mostly helpful and hardly harmful, like language processing. There are applications that probably harm more than they help, like emotion recognition and warfare.
The book also reassures us that some things may not be as computable as they seem. Good categorisation is hard for humans and harder for machines. There is this fantastic TV show broadcast on Dutch television called Zomergasten: a person of interest is interviewed for three hours straight on live television, and they bring in their favourite video fragments. YouTube may have state-of-the-art, machine-learning-powered recommendation engines, but the suggestions these guests bring often delight and surprise me more.
Above all, Atlas of AI will give you a fresh perspective to use when you read about the next thing AI is claimed to solve. Yes, the possibilities are promising. The features are often cool. But we’ve got to keep our feet on the ground and assess AI technologies carefully.
Originally posted as How AI is made matters, confirms “Atlas of AI” on Hidde's blog.
A case for accessibility statements in app stores
In Apple’s App Store and Google’s Play Store, apps can have certain bits of metadata. Categories, localisations, price, privacy policy URL… but where is the meta field for accessibility statements?
I’m more of a web person than an app person, but my clients sometimes need to publish statements about their apps. Accessibility is important to a large share of app users, as Dutch web consultancy Q42 showed: among over a million app users, they saw 43% use accessibility settings.
If app stores had a field for accessibility statements, it would be good for users, organisations and developers, and it would highlight accessibility as something to compete on.
What is an accessibility statement?
An accessibility statement shows users that you care about accessibility and helps them understand what the expected level of accessibility is on your website or application (see also: Developing an Accessibility Statement).
Sites and apps that fall under EU Directive 2016/2102 (Article 7, section 1) require an accessibility statement, in a specific format. Other sites and apps can also benefit from some form of accessibility statement.
What is included in an accessibility statement?
In their accessibility statement, website or application creators declare whether their app is fully, partially or not accessible. If one of the latter two, they also list what the known issues are. Commonly, there is also information about when the website or application plans to address each issue.
Accessibility statements also include evidence of their accessibility claims. Usually this is a full WCAG conformance evaluation report, following a methodology like WCAG-EM. But accessibility does not equal WCAG conformance, you might say. That is true: it is ‘just’ a baseline. Yet there is no other document with such a widely agreed baseline, so it is best to use this one.
Accessibility statements can also contain a feedback mechanism (they must, if published in a country that has implemented that EU Directive). Feedback mechanisms let people find out how to request content in a format that is accessible to them (e.g. can I have that untagged PDF as a Word document?), and where to report accessibility issues.
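To make that shape concrete, here is a minimal sketch in TypeScript of the information such a statement bundles, based on the description above. All names are my own hypothetical illustrations; there is no standard schema for accessibility statements.

```typescript
// A minimal sketch of what an accessibility statement bundles.
// All names are hypothetical; there is no standard schema for this.

type ConformanceStatus = 'fully' | 'partially' | 'not';

interface KnownIssue {
  description: string; // what doesn't work, e.g. 'PDF invoices are untagged'
  plannedFix?: string; // when the issue is expected to be addressed
}

interface AccessibilityStatement {
  status: ConformanceStatus;    // fully, partially or not accessible
  knownIssues: KnownIssue[];    // expected when not fully accessible
  evaluationReportUrl?: string; // evidence, e.g. a WCAG-EM based report
  feedbackContact: string;      // where to report issues or request accessible formats
}

// Example: a partially accessible app with one known issue.
const statement: AccessibilityStatement = {
  status: 'partially',
  knownIssues: [
    { description: 'PDF invoices are untagged', plannedFix: 'next quarter' },
  ],
  evaluationReportUrl: 'https://example.com/wcag-em-report',
  feedbackContact: 'accessibility@example.com',
};
```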
App stores have fields for privacy statements
Both the Apple App Store and the Google Play Store have a meta field available for privacy statements.
In Apple’s ecosystem it seems to be part of App Info; in Google Play, developers can submit it through the Google Play Console.
Google does this because they care about user privacy. They write in their Google Play Safety Section blog post:
At Google, we know that feeling safe online comes from using products that are secure by default, private by design, and give users control over their data.
It’s framed as letting developers showcase how well they do on privacy:
This new safety section will provide developers a simple way to showcase their app’s overall safety.
I, for one, applaud this effort. User privacy is super important and explaining privacy features is helpful. A standard way to do it helps users find this information more easily.
I would say we can apply the same thinking to accessibility. As mentioned earlier, many app users rely on accessibility features, and it is likely they will want to find accessibility information easily. App developers and designers work hard to ensure their app’s accessibility, so why not give them a way to showcase that too?
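What might that look like? Purely as an illustration, here is a sketch of listing metadata with an accessibility statement field next to the existing privacy one. Neither store offers such a field today, and all names below are made up.

```typescript
// Hypothetical sketch: app store listing metadata with an accessibility
// statement field alongside the privacy policy one. Neither store offers
// this today; all field names are invented for illustration.

interface AppListingMetadata {
  name: string;
  category: string;
  localisations: string[];            // e.g. ['en', 'nl']
  price: number;                      // 0 for free apps
  privacyPolicyUrl: string;           // exists today in both stores
  accessibilityStatementUrl?: string; // the proposed addition
}

const listing: AppListingMetadata = {
  name: 'Example App',
  category: 'Productivity',
  localisations: ['en', 'nl'],
  price: 0,
  privacyPolicyUrl: 'https://example.com/privacy',
  accessibilityStatementUrl: 'https://example.com/accessibility',
};
```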
Why app stores need accessibility meta info
Accessibility meta info in app stores would be good for several reasons:
- helps users understand the accessibility of an application better, and learn easily and consistently where to file accessibility bugs or request accessible versions
- helps organisations comply better with EU Directive 2016/2102, which says they should display an accessibility statement
- helps developers show their commitment
- makes accessibility yet another aspect for application developers to compete on, because many consumers will use this information to make purchasing decisions
Conclusion
App stores can be a helpful proxy between users and app creators. The vetting mechanisms and categorisations (try to) filter out low quality. They also make it easier for users to find what they need. This is why they provide filters for user-oriented features. If that’s great for privacy, it would be great for accessibility, too. Please, app store makers? 🙏
Originally posted as A case for accessibility statements in app stores on Hidde's blog.
Solutionism
There is this internationally recognised physical proof of vaccination. Earlier this year, the Dutch health ministry said it wouldn’t be used for registering Dutch COVID-19 vaccinations. Why not? Well, an app was on the way.
The ministry was somewhat right. There wasn’t international agreement about use of that physical card for COVID vaccinations. But other authorities, like the German government, already accepted it. There was a use case. The EU Digital Covid Certificate, which the ministry put its money on, does sound promising. And it’s standardised across the EU.
But the dismissal of the paper proof, I feel, is a case of ‘solutionism’: you work on one solution to a problem, then forget there may be other ways to solve it too. Why is that an issue? Multiple layers of solutions often work better, especially if what is dismissed is something physical and the new solution is some fancy new technology (see also: tech stacks on the web).
Don’t get me wrong, I like apps (though I prefer web apps). I work in technology. I like technology, and all the advancements it brings us. But as websites are best built from the simplest part of the stack outwards—HTML first, the rest later—something similar goes for including technology in society.
One of the powers of progressive enhancement is that it lets you cover use cases you didn’t know existed. Did your user’s JS download fail as they lost connection in a train tunnel? No biggie, you didn’t uniquely rely on JS, you didn’t assume it, it was just one of your layers.
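To illustrate that layering, here is a minimal sketch of my own (not from the post): a search form that works with plain HTML, with fetch-based enhancement layered on top. The form, selectors and endpoint are made up.

```typescript
// A minimal progressive-enhancement sketch. The HTML works without
// any script at all:
//
//   <form action="/search" method="get">
//     <input type="search" name="q" />
//     <button>Search</button>
//   </form>
//   <div id="results"></div>
//
// The script below is just an extra layer: if it never loads or runs,
// the browser's default form submission still works.

const form = document.querySelector<HTMLFormElement>('form[action="/search"]');
const results = document.querySelector('#results');

if (form && results) {
  form.addEventListener('submit', async (event) => {
    event.preventDefault();
    // Serialise the form fields into a query string.
    const params = new URLSearchParams();
    for (const [name, value] of new FormData(form)) {
      params.append(name, String(value));
    }
    try {
      const response = await fetch(`${form.action}?${params}`);
      results.innerHTML = await response.text();
    } catch {
      // Fetch failed (hello, train tunnel): fall back to a full page load.
      form.submit();
    }
  });
}
```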
A paper proof of vaccination could be a sensible baseline, on top of which more demanding technological solutions could be built. Demanding in the sense that they require digital literacy, smartphone ownership and what not. For some, the paper proof was their only way to enter the country they worked in, especially if that was a country outside the EU. Paper proofs have their benefits and their problems; so do digital proofs.
IRL progressive enhancement is quite common when you think about it. You can board planes with paper boarding cards, but also with technology like QR codes and digital wallets. You can pay for a coffee with cash, card or phone. The variety serves diverse sets of people. Just like in web development, not dismissing the baseline lets us cover use cases we didn’t know existed. It is fragile, though: some manager somewhere probably has a fantasy about replacing everything with fancy tech and fancy tech only. (In fact, it’s quite common for shops not to accept cash where I live.)
Just recently, the Dutch health ministry turned around and changed their guidance: the paper proof of vaccination will now be stamped for those who ask. In a different context, a service from that same ministry now offers a way to print a QR code to gain access to events following a recent corona infection, negative test or vaccination. Folks can use the app if they want to. Heck, the app is probably more convenient for many, but at least there is a paper fallback. Yay!
Originally posted as Solutionism on Hidde's blog.