Craig and Allison in Vancouver on Craig’s first-ever trip overseas

Issues for accessibility, inclusion, AI, and agentive technology

Stephen Collins
Lab Notes
6 min read · Sep 2, 2019

--

“Ok, Google. Play ABC News Radio.”

“That’s not available right now.”

Wash. Rinse. Repeat. Usually three or four times, until Craig gives up in frustration. All the more so because it worked yesterday.

AI is great until it’s not.

Google Assistant and other AI-based voice assistants, such as Apple’s Siri, can’t always understand Craig. You see, Craig is a person with multifactor disabilities. He has Usher Syndrome, meaning he’s bilaterally deaf with a deafness-related speech impediment (he hears okay with hearing aids) and is 99% blind due to retinitis pigmentosa, able to detect only the presence of light in one eye. Thanks to a birth injury, Craig also has learning difficulties, is on the autism spectrum, and experiences peripheral neuropathy, which leaves him struggling with fine motor skills.

Until late 2017, Craig had spent his entire 52 years living at home with his parents. When they both died within six months of each other, Craig was forced to uproot his previously simple life and move to Canberra to live with me and his sister (and my partner), Allison.

When he came to live with us, Craig had never touched a computer and had never had a mobile phone of his own. Today, he uses a computer in the form of Google Assistant and Siri every day. He uses them to control smart lighting, to access news, recipes, and music, and to change the environmental controls in our house.

He also uses an iPhone, principally through Siri and some VoiceOver gestures, as his learning challenges and vision impairment limit how much of the vast array of gesture-driven features and apps he can learn to use. He has no detailed concept of what’s on the screen or what any feature might do, as he struggles with conceptual and consequential imagining as well as with distance and direction.

As capable as he is, he still has to rely on Alli or me to trigger a lot of his technology, such as his much-loved AFL app on his phone so he can “watch” the footy; it just doesn’t work well either with VoiceOver or with gesture-based controls. Same goes for getting the news to play as we eat breakfast every day.

These AIs and similar tools aren’t as good as they need to be to help him consistently. By no means is it as bad as classifying black people as gorillas. But it represents a gap in training, and it’s a gap that limits or excludes people like Craig.

And so back to the start of this story. As good as these first-generation AI-driven voice agents are, and as well as they work a lot of the time, they’re still largely designed to listen to and help the able-bodied. I think that’s a massive missed opportunity.

Unequivocally, the killer app for voice assistants is improving the lives of people with disabilities and the elderly, by combining with other agentive technology including smart home devices.

One of my goals as a service designer — someone whose capabilities include identifying, researching, synthesising, and resolving complex problems — is to be a part of ensuring AI, voice assistants, and agentive technology include a deliberate focus on improving the lives of people with disabilities. When I talk to people involved in the development of these kinds of tools, it’s too often the case that the possibilities they hold for people like Craig simply haven’t been thought of.

We’re really only in the first generation of these tools. They still address fairly limited, simple needs, even for fully able people. It’s great that I can ask Google Assistant to open my garage and turn my heating up while it sets the evening light tone to a gentle, yellower light, but these are conveniences rather than world-changing capabilities.

When I think about what future generations of these AI-driven tools might offer to people with disabilities, I wonder why we aren’t already focussed on this problem space. Imagine a future where a profoundly disabled person, trapped in a physical form that constrains their interaction with the world, could be aided by both physical and software tools, augmented by AI and intelligent agents that act as their proxies in physical interactions and cyberspace. These tools would not replace the person in these situations, but be driven by them, acting for them in a world designed for the able-bodied.

As far as we’ve come in the journey of inclusion for people with disabilities, it’s very apparent the world remains largely indifferent to them, if not openly hostile.

Of course, it’s not that simple. The companies with the wherewithal to develop and improve voice assistants, build serious AI, and create agentive technology are largely ones whose business models rely on untrammelled gathering of our personal data. And to date, they’ve proven that power is something they aren’t always suited to having.

In this environment, we are the product. Craig (despite the power granted him by using the AI-driven tools he already does) is the product.

Do we want the most vulnerable in society — those unable to make the necessary decisions that protect themselves from being manipulated by exposing their data — to be subject to data gathering on this scale? It’s a big enough challenge to those of us who can make those decisions.

For everyone who owns a modern smartphone, our selves are no longer constrained to the physical forms we inhabit. Our cyborg selves extend information and build meaning across the hyperconnected world; sharing, interacting, seeking. Most of us do this cognisant of what we share, and with at least a passing awareness of what we give away in terms of privacy to the vast keiretsus that now own and share our data.

For Craig, not only can he not easily comprehend what data he might be giving away; he can’t easily comprehend the consequences either. Does that mean he should be prevented from having a better life by interacting with AIs that improve things for him? That he shouldn’t use an iPhone or a Google Home device? I would argue definitively no. Like the rest of us, he has every right to a life made better by AI.

There’s a lot of work to be done.

The vast majority of the work will be in massive changes to the ethical and privacy frameworks that underpin the regulation of tech: extending things like GDPR to have global equivalents; enshrining the right to be forgotten in law; ensuring data is both owned and held by the individual, not companies, and shared only when it benefits the person, not when it increases profits for surveillance capitalists; and ensuring that data collection and use is focussed on protection of the individual.

We can also improve the AIs themselves to work better for people with disabilities, in particular by designing them in an inclusive way. Done right, for the benefit of all, including those of us unable to self-manage our interactions in a way that comprehends consequence, they have the power to make better futures for us all.

That requires more research with the right kinds of people. It requires exposing hardware designers and software developers to people not like them in a massive “you are not your user” exercise. It requires rich synthesis and an understanding of how people with disabilities might need new or different touchpoints in their journeys with these tools. And we’ll need to understand what people with disabilities might want assistance with (because it must be their choice) that AI and agentive tech could help provide.

As a designer, I want to make a difference for Craig and people like him, doing what I can to improve AI to include everyone.


Runs @rocklilycottage. Designer @acidlabs on sabbatical. Outdoorsman. Archer. Gamer. Progressive. Husband. Dad. Pro 🐈and 🐕. Lives in Djiringanj Yuin country.