At Be My Eyes, we spend all day, every day helping people with vision impairments get past obstacles that have been designed into their environment. And with the increasing prevalence of “smart” appliances, “connected” devices and other digital home experiences that should make people’s lives easier, our services are actually used more than ever.
While not every IoT product is designed with accessibility in mind, much of IoT’s promise for those with disabilities lies in its potential to enable people to communicate with and through technologies in ways that were previously unavailable. Digital assistants such as Amazon’s Alexa or Google Home have opened up the ability to communicate with our tech via speech, while phone apps can display information about devices in a number of different, potentially accessible, formats.
Although the technology has been successful in some regards, there are clearly many limitations that still need to be addressed if IoT’s potential is to be fully unlocked--particularly for members of the blind and low-vision population. And, with 25.1 billion devices predicted to be in operation worldwide by 2025, according to the GSM Association, it is critical that accessibility issues are considered as part of this sector’s explosive growth.
It’s important to keep in mind that accessibility is not a “one-and-done” venture: a technology that is accessible for one community could still fall short when it comes to serving another. However, the following are examples of IoT devices that have improved accessibility for at least one community of users. (Please note that we are not recommending or endorsing any of the following products.)
As with both the physical environment and the web, the world of IoT devices has a long way to go before it can be said to meet the needs of disabled communities. The following examples are just a few of the ways in which interacting with these devices can be a challenge for some:
Voice recognition software--a key to much IoT functionality--is often a challenge, especially for people with speech disorders or heavily accented speech. As a result, simple tasks such as requesting a piece of information can be difficult, while setting up or changing credentials can prove impossible. A system that recognizes repeated failures at a task and offers to connect the user with a trained volunteer can help those tasks get done more efficiently.
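The fallback behavior described above can be sketched in a few lines. This is a hypothetical illustration: `VoiceAssistant`, the three-failure threshold, and the wording of the prompts are all assumptions, not taken from any real product.

```typescript
// Hypothetical sketch of a "repeated failure" escalation path.
// VoiceAssistant is an illustrative name, not a real assistant API.
type CommandResult = { understood: boolean; response: string };

class VoiceAssistant {
  private failureCount = 0;
  private readonly maxFailures = 3; // assumed threshold before escalating

  // `recognized` stands in for whatever the speech engine reports.
  handle(recognized: boolean): CommandResult {
    if (recognized) {
      this.failureCount = 0; // success clears the failure streak
      return { understood: true, response: "OK, doing that now." };
    }
    this.failureCount += 1;
    if (this.failureCount >= this.maxFailures) {
      // Instead of looping forever, offer a human escalation path.
      this.failureCount = 0;
      return {
        understood: false,
        response:
          "I'm having trouble understanding. Would you like to connect with a trained volunteer?",
      };
    }
    return { understood: false, response: "Sorry, could you repeat that?" };
  }
}
```

The point of the sketch is the escalation step: after repeated misrecognitions, the system stops asking the user to retry and offers a different channel entirely.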
Accessibility of apps represents a major challenge to ensuring that IoT can serve the disabled community effectively. For example, even with iOS’s “Speak Screen” functionality enabled, Amazon’s Alexa iPhone app provides only headers and no information on the groups of technology that have been set up, as seen in the following screen capture. This severely limits the ability of a blind or low-vision user to manage their own IoT devices without assistance:
A similar issue is evident on the HP Smart app, as can be seen below--the screen reader simply does not recognize any of the text in the boxes, meaning a user relying on it only hears a single option (“Add Printer”) out of at least 16 possibilities on the page.
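To see why a screen reader can miss most of a screen, consider this simplified simulation. The `UIElement` and `announce` names are illustrative, not a real accessibility API, but the principle they model is the standard one behind iOS accessibility labels and web ARIA attributes: a screen reader can only announce elements that expose an accessible text label.

```typescript
// Simplified model: a screen reader announces only labeled elements.
// UIElement and announce are illustrative names for this sketch.
interface UIElement {
  role: string;
  label?: string; // accessible label; missing on unlabeled elements
}

function announce(elements: UIElement[]): string[] {
  return elements
    .filter((el) => el.label !== undefined && el.label !== "")
    .map((el) => `${el.label}, ${el.role}`);
}

// Sixteen tappable options on screen, but only one exposes a label:
const screen: UIElement[] = [
  { role: "button", label: "Add Printer" },
  ...Array.from({ length: 15 }, () => ({ role: "button" })), // unlabeled boxes
];

// A screen reader user hears a single option out of sixteen.
// announce(screen) → ["Add Printer, button"]
```

Visually, all sixteen options look identical and equally usable; the gap only appears when the interface is consumed through assistive technology, which is exactly why testing with screen readers is essential.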
The intent here is not to call out any single app or provider--rather, it is to demonstrate the difficulties that disabled users can experience when attempting to use IoT devices that have not been fully tested for accessibility.
As we have seen, the proliferation of both devices and tools to make devices smarter can have major benefits for people with disabilities. However, when these users are not considered during the creation of products and apps, these technologies can end up widening the accessibility gap, rather than closing it.
The key to providing accessible experiences for as wide a range of users as possible is to include a variety of stakeholders in the design and testing phases of any device. By inviting people with disabilities--visual, hearing, cognitive and others--into the process, designers can ensure that their needs are met up front, rather than having to backfill or find work-around solutions post-launch.
For existing products, a similar approach of soliciting input from specific groups of users can uncover feedback that will be invaluable for designing add-ons and upgrades to extend accessibility, or for iterating on future versions of the product.
For particular groups of users, different features will provide different levels of utility. For example, while the ability to change font style, contrast or size may help those with partial vision or aging eyes to access a particular device, those with total vision loss will likely require a different solution, such as screen reader support.
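As a rough sketch of this point, the branching below maps two (hypothetical) user profiles to different feature sets. The type names and the specific settings values are assumptions for illustration only; real products would offer far finer-grained preferences.

```typescript
// Illustrative only: different accessibility needs call for different features.
type UserNeeds = "partial-vision" | "total-vision-loss";

interface DisplaySettings {
  fontScale: number;
  highContrast: boolean;
  screenReaderSupport: boolean;
}

function settingsFor(needs: UserNeeds): DisplaySettings {
  if (needs === "partial-vision") {
    // Larger text and stronger contrast help users with some remaining vision.
    return { fontScale: 1.5, highContrast: true, screenReaderSupport: false };
  }
  // For total vision loss, visual tweaks don't help;
  // screen reader support is the essential feature.
  return { fontScale: 1.0, highContrast: false, screenReaderSupport: true };
}
```

The takeaway is that "accessible" is not a single switch: a product can serve one group well while leaving another group unable to use it at all.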
In addition, consider interoperability when designing for accessibility: simply making one app or device accessible may not be as effective as ensuring that the device can work well with other apps. For example, most users prefer to be able to control a variety of smart home devices from a single app or assistant, rather than using multiple apps for each piece of technology. That kind of interoperability is particularly important to people whose disabilities may give them trouble recalling, or seeing, which app controls a specific device.
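One common way to achieve that kind of interoperability is for every device to expose a shared control interface, so a single hub app can drive all of them. The sketch below shows the idea; `SmartDevice`, `SmartBulb`, `SmartLock` and `Hub` are illustrative names, not any vendor's actual API.

```typescript
// Hypothetical sketch: devices share one interface so one hub controls them all.
interface SmartDevice {
  name: string;
  activate(): string; // each device interprets "activate" in its own way
}

class SmartBulb implements SmartDevice {
  constructor(public name: string) {}
  activate(): string { return `${this.name}: light on`; }
}

class SmartLock implements SmartDevice {
  constructor(public name: string) {}
  activate(): string { return `${this.name}: locked`; }
}

// One hub controls every registered device, so the user never has to
// remember which vendor app goes with which gadget.
class Hub {
  private devices = new Map<string, SmartDevice>();
  register(device: SmartDevice): void { this.devices.set(device.name, device); }
  activate(name: string): string {
    const d = this.devices.get(name);
    return d ? d.activate() : `No device named "${name}"`;
  }
}
```

For a user who struggles to recall or see which app controls which device, collapsing many vendor apps into one consistent interface is itself an accessibility feature.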
For many tools, the key to improving accessibility may be to offer additional features and services aimed at remediating known issues, or integrating assistive technology. For example, Be My Eyes provides one-to-one connections between sighted volunteers and members of the blind and low vision community. Built specifically for the needs of the community it serves, the Be My Eyes app offers immediate support for blind and low vision users, and can be integrated into a company’s existing suite of support tools--including full training of designated support staff--in less than a week in most cases.
Many of the technologies required to build a more accessible future already exist. The key is simply to unlock their potential by considering and improving on current limitations.