Under pressure to make their sites accessible to visually impaired users, firms turn to software. But advocates say the tech isn’t always up to the task.
LAST YEAR, ANTHONY Murphy, a visually impaired man who lives in Erie, Pennsylvania, visited the website of eyewear retailer Eyebobs using screen reader software. Its synthesized voice attempted to read out the page’s content, as well as navigation buttons and menus. Eyebobs used artificial intelligence software from Israeli startup AccessiBe that promised to make its site easier for people with disabilities to use. But Murphy found it made it harder.
AccessiBe says it can simplify the work of making websites accessible to people with impaired vision or other challenges by “replacing a costly, manual process with an automated, state-of-the-art AI technology.” In a lawsuit filed against Eyebobs in January, Murphy alleged that the retailer failed to provide people using screen readers equal access to its services, and that the technology from AccessiBe, which is not a party to the suit, doesn’t work as advertised.
In October, Eyebobs agreed to a settlement in which it denied Murphy’s allegations but agreed to hire an accessibility consultant to help overhaul its website and mobile apps and dedicate staff to the issue. Like many AI startups, AccessiBe markets its technology as cheaper than paying humans. Eyebobs now must pay people anyway, by court order.
The lawsuit against Eyebobs is among a growing number in recent years accusing companies of breaching web accessibility standards. Offers to fix websites with AI technology have grown too, along with complaints from some accessibility advocates that it doesn’t work as advertised.
The case also provides a rare example of a company facing legal consequences for betting on AI technology that didn’t perform as hoped. The list is likely to grow. Advances in machine learning have convinced companies to place more trust in algorithms, but the technology sometimes isn’t up to the task.
Machine learning excels at narrowly defined problems under consistent conditions. The world’s most interesting and important challenges often involve unpredictable environments where human judgment still far surpasses that of machines.
Facebook has for years said that algorithms will fight nasty content on its platforms, but mounting evidence, including from internal documents, suggests the problem is far from being contained. Making sense of the subtleties of language, which can be highly context-specific, is one of the hardest challenges in the field. In August, the US National Highway Traffic Safety Administration opened an investigation into a series of crashes in which Tesla vehicles using the company’s bombastically marketed automated driving system hit parked emergency vehicles. Machine vision has improved, and algorithms don’t get sleepy, but people are still better at making sense of complex physical situations.
Online accessibility makes fewer headlines than self-driving cars, but technologists are applying AI there as well. At the same time, the wave of lawsuits around accessibility is driving companies to tech providers like AccessiBe.
The Americans With Disabilities Act, which prohibits discrimination in everyday activities such as shopping or working, doesn’t mention the web. It was enacted in 1990, the same year the first webpage was published at CERN. But courts have recently opened the door to civil suits arguing that websites are effectively “places of public accommodation,” and cases have soared. UsableNet, which makes accessibility tools, estimates that in 2020 there were 3,550 such cases filed in the US, up more than 50 percent since 2018. Murphy has filed several other suits similar to the one against Eyebobs.
In 2019 the US Supreme Court declined to hear an appeal by Domino’s Pizza of a lower court ruling that its website and app must comply with accessibility guidelines from the World Wide Web Consortium, which develops web standards. That and other cases have given the W3C guidelines authority approaching that of law.
Those lengthy guidelines specify best practices for digitally accommodating people with visual impairments or other needs. Running to more than 100 pages when printed, the precepts include providing alternative text for images and video; clear use of contrast and color; and ensuring that features like forms and menus are navigable using only a keyboard, without use of a mouse or finger.
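Some of those precepts lend themselves to automated checking while others do not, which is central to the dispute over tools like AccessiBe’s. As a rough illustration (not AccessiBe’s actual technology), the sketch below uses Python’s standard-library HTML parser to flag images with no alternative text at all, one of the few checks a program can do mechanically; judging whether existing alt text is *meaningful* still takes a human reviewer.

```python
# Illustrative sketch only: a mechanical check for one W3C precept,
# flagging <img> tags that lack an alt attribute entirely.
# Note: an empty alt="" is permitted for purely decorative images,
# so only a missing attribute is treated as a violation here.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects the src of every <img> tag with no alt attribute."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if "alt" not in attrs:
                self.violations.append(attrs.get("src", "<no src>"))

html = '<img src="dress.jpg"><img src="logo.png" alt="Eyebobs logo">'
checker = MissingAltChecker()
checker.feed(html)
print(checker.violations)  # → ['dress.jpg']
```

A scan like this can only detect that alt text is absent; it cannot tell whether “Grass nature and summer” accurately describes a white dress, which is exactly the kind of judgment accessibility experts say still requires manual review.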
Automating the work of complying with those guidelines could make the web more welcoming. But more than 600 accessibility experts have put their names to a document asking website operators to not use such automation tools, including AccessiBe. Signers include contributors to W3C guidelines and employees of Microsoft, Apple, and Google. “Automated detection and repair of accessibility problems is not reliable enough to bring a site into compliance,” the document says, accusing some vendors of “deceptive marketing.”
The document was started by Karl Groves, founder of accessibility consultancy Tenon.io, who provided a withering 35-page analysis of AccessiBe’s software for Murphy’s lawsuit against Eyebobs. Groves says he surveyed about 1,000 pages from 50 websites using the startup’s technology and found a median of 2,300 violations of W3C guidelines per site. He says that is a significant undercount, because most of the guidelines can be checked only by expert manual analysis. “Artificial intelligence doesn’t work like that yet,” he says.
In his report on AccessiBe, Groves cited an image of a model wearing a white dress for sale on an ecommerce site. The alternative text provided, apparently generated by AccessiBe’s technology, was “Grass nature and summer.” In other cases, he reported, AccessiBe failed to properly add labels to forms and buttons.
On the homepage of its website, AccessiBe promises “automated web accessibility.” But support documents warn customers that its machine learning technology may not accurately interpret webpage features if it “hasn’t encountered these elements enough before.”
AccessiBe’s community relations manager, Joshua Basile, says that since he joined the company early this year it has engaged more with disability advocacy groups and clarified that it offers “manual remediation” alongside automatic fixes. “It’s an evolving technology and we’re getting better and better,” he says.
In a statement, AccessiBe’s head of marketing, Gil Magen, said the company had analyzed Eyebobs’ website and found it complied with accessibility standards. AccessiBe offers clients assistance with litigation but Eyebobs declined, the statement said.
In its own statement, Eyebobs said AccessiBe failed to respond to requests for meetings with its lawyers and provided form responses “assuring us of our web compliance.” “Eyebobs is no longer working with AccessiBe, nor will we in the future,” the statement said.
Although the Eyebobs settlement, to be finalized next year, doesn’t include an admission that its site had problems, it requires the company to pay for an external expert audit and to dedicate one or more staff to accessibility work. “Eyebobs is committed to ADA compliance and supporting all visitors who come to our website,” director of marketing Megan McMoInau says.
Haben Girma, a deafblind disability rights lawyer, says she hopes the Eyebobs suit will discourage companies from using AccessiBe or similar tools. She believes tech companies or regulators like the US Federal Trade Commission should take action against inaccurate marketing of accessibility tools. “Governments, Google, and social media companies can stop the spread of misinformation,” she says.
Experts critical of automated accessibility tools don’t generally argue the technology is wholly worthless. Rather they say that placing too much trust in the software risks causing harm.
A 2018 paper by employees of W3C praised the potential of using AI to help people with poor vision or other needs but also warned of its limitations. It pointed to a Facebook project using machine learning to generate text descriptions for images posted by users as an example. The system won an award from the American Foundation for the Blind in 2017. But its descriptions can be hard to interpret. Sassy Outwater-Wright, director of the Massachusetts Association for the Blind and Visually Impaired, noticed that the system sometimes displayed a preoccupation with body parts (“two people standing, beard, feet, outdoor, water”) that she dubbed the “beard quandary.”
In brief tests by WIRED, Facebook’s algorithms appeared less beard-obsessed but were often cautious to the point of vagueness, saying that a close-up of freshly made bagels “may be an image of food” and a beach on a clear day “may be an image of outdoors.” Facebook said in January that it had significantly improved the system, including by recognizing more activities, animals, and landmarks. The company did not comment further.
Judy Brewer, a coauthor of the 2018 paper and director of W3C’s accessibility initiative, says AI has advanced since then, but she remains cautious about the tools. “There’s an untapped potential for AI to help drive and scale up how accessibility is done in digital technology,” she says, “but it needs to be done carefully and judiciously.”