GLAD vs CNN Closed-Captions Lawsuit: Finding a Win-Win for Broadcasters and Deaf People

by Professor Jonathan Hassell
Originally posted 8 Feb 2012

On Saturday a Californian court refused to dismiss a suit by the Greater Los Angeles Agency on Deafness (GLAD) against CNN for its refusal to add closed captioning to news video clips on its website (for more details see: CNN sued over lack of closed captioning on website).

I’m in a privileged position to comment on this case, as I was the manager behind delivering the workflows that caption over 90% of BBC iPlayer’s programmes, and have worked with the BBC’s News site to investigate what it would need to do to add captions (or subtitles, as they are often called in the UK) to its news video clips.

Lack of captions is the number one accessibility concern of most Deaf and hard of hearing web users – without captions, online video is pretty much useless for them. And the amount of video on the web is growing exponentially – Gartner reckons that pictures, video or audio will dominate over 25% of the content that workers see in a day.

So the user needs behind this lawsuit are substantial, and the issue is only going to become more and more important (and contentious) over time.

In that context, how do we understand CNN’s position on online captions, and what impact might the lawsuit have on broadcasters globally?

Context: is a culture of accessibility litigation emerging?

This lawsuit comes after last week’s lawsuit in the UK by RNIB against bmi-baby, on which I gave my expert thoughts regarding how BS 8878 can help prevent other organisations getting sued.

On both sides of the Atlantic, disabled groups are mobilising to use available legislation to challenge organisations that refuse to make websites which meet their accessibility needs.

So organisations that own websites would do well to understand how to balance the needs of their disabled users with their own needs to protect their brand values, unique selling points and profitability.

Online captioning 101: what’s needed for captioned online video

To deliver captioned video via the Internet an organisation will require four things:

  1. A media player that can play captioned video;
  2. A way of creating that captioned video;
  3. A way of letting users know which of their videos are captioned and which are not; and
  4. A caption production workflow – a process that enables the organisation to ingest video, get captions created as quickly as it needs, and publish the captions with the video, reliably and efficiently.

The easier requirements

Choosing a media player that allows you to display captions on online video is comparatively easy. Nomensa recently released the source code of their accessible media player to the public, and other players are also available.

Getting captions created for online video is slightly more challenging. The BBC Online Subtitling Editorial Guidelines advise on how to create captions that can be easily seen and read. EBU-STL and timed-text standards exist for encoding online captions (and AMI’s Robert Pearson is presenting at CSUN 2012 on proposed standards for descriptive video). And commercial tools are available to create closed-captions for pre-recorded and live video. The main challenge posed to organisations by this requirement is cost, as the creation of quality captions still requires human effort, and so doesn’t scale easily or cheaply.
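To make the ‘timed text’ idea concrete, here is a minimal, illustrative Python sketch (not any broadcaster’s actual tooling, and the sample cues are invented) that turns timed caption cues into WebVTT, one simple format for delivering closed captions alongside online video:

```python
# Illustrative only: render timed caption cues as a minimal WebVTT file,
# one of the "timed text" formats used for online closed captions.

def format_timestamp(seconds):
    """Render a time in seconds as a WebVTT HH:MM:SS.mmm timestamp."""
    hours, rem = divmod(seconds, 3600)
    minutes, secs = divmod(rem, 60)
    return "%02d:%02d:%06.3f" % (hours, minutes, secs)

def cues_to_webvtt(cues):
    """cues: list of (start_seconds, end_seconds, text) tuples."""
    lines = ["WEBVTT", ""]
    for start, end, text in cues:
        lines.append("%s --> %s" % (format_timestamp(start), format_timestamp(end)))
        lines.append(text)
        lines.append("")          # blank line separates cues
    return "\n".join(lines)

cues = [
    (0.0, 2.5, "Good evening, and welcome to the news."),
    (2.5, 6.0, "Our top story tonight..."),
]
print(cues_to_webvtt(cues))
```

The real cost, as noted above, is not in the file format but in producing accurate cue text and timings, which still needs human effort.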

The requirement to include visual cues letting users know which videos are captioned and which are not often gets overlooked, but it is essential if deaf and hard of hearing people are to benefit from sites that include some captioned video. However, it isn’t rocket science for sites to get this right.

The crucial importance of the caption production workflow and its speed

Creating an efficient and reliable video publishing workflow which includes captions is not easy.

And it’s this requirement that gives some insight into why CNN’s lawyers are defending their position on the lawsuit by talking about “free-speech rights” and “violation of their editorial practices”.

To understand the heat behind this language, you need to understand the essential difference between long-form video and news video clips (which, thankfully, the FCC does, as its IP video captioning rules under the 21st Century Communications and Video Accessibility Act (CVAA) already make this distinction).

Long-form video publishing is not particularly time sensitive, even for TV catch-up services like BBC iPlayer. If a programme is online an hour or so after broadcast, that’s generally acceptable.

But time is the most expensive commodity for most news operations. If they wait to get their video captioned before publishing it online, and another broadcaster gets its video online more quickly because it doesn’t wait for captions to be created, they will have lost one of their unique selling points – being first with reporting the news.

After all, news is only really news when it’s new – every second counts.

So caption production workflows need to be able to produce captions as quickly as the video encoders can get the video ready for online publication. This requires super-fast, high-quality, automated, speaker-independent caption generation. And this is still a long way from being available.

This is why CNN are arguing that it would be unfair for them to have to caption their clips unless the same rules are applied to all of their competitors.

I’d agree with them – regulation would need to apply to all broadcasters or none, to avoid giving one broadcaster an unfair advantage over the others. And, bearing in mind news is a global commodity these days, I’m sure CNN would say that this regulation shouldn’t just be for all US broadcasters but all global broadcasters, or else the BBC or others could steal their thunder too.

Even then their scoop could be trumped by a ‘citizen journalist’ video blog that doesn’t care about disability law and waiting for caption generation.

So, welcome to complete stalemate – we need one global law for everyone, or none for anyone.

This is why none of the broadcasters, including BBC News, have rolled out captions for news clips in any meaningful way yet.

Even if the CNN case ‘sets the precedent for the whole industry’ as Laurence Paradis of Disability Rights Advocates thinks, it’s unlikely that this precedent will give deaf people what they want.

So can the stalemate be broken?

Well, yes.

But people who want captions might just have to concede the war to win the battle.

They are unlikely to ever get a law requiring broadcasters to create captions before they publish a news clip.

But they are on much more reasonable ground asking broadcasters to subtitle news clips within, say, 24 hours of publication.

Yes, I know that isn’t equal treatment for deaf and non-deaf people. But it is something that I believe can be argued for in court, as it’s expensive but achievable.

It’s costly because it requires broadcasters’ video publishing workflows to be reviewed to include the production and publishing of captions. On top of the cost of creating captions for each clip, the costs of updating workflows in large broadcasters, which deliver video to both broadcast TV and online, are not trivial by any means.

But it is do-able, because the enabler here is closed-captions. For those not in the know, open-captions are those ‘burnt into’ a video that the viewer can’t turn off, whereas closed-captions are those that the viewer can turn on and off, because they are delivered separately from the video and synchronised with it by the media player.

It’s this separate delivery that allows closed-captions to be delivered using completely separate workflows from those used to publish the video.

So this allows clips to be published just as immediately as they are now, without waiting for captions, which can be added minutes or hours later.

In this case, unlike for most accessibility issues, broadcasters can reasonably easily retrofit closed-captions by adding new caption workflows to their existing video publishing workflows, without much alteration to those precious existing workflows.

How broadcasters and deaf people can achieve a win-win

I believe that publishing captions after the publication of the video is the most likely outcome of the lawsuit, unless the GLAD complainants insist on trying to challenge the stalemate. If they do, I think CNN will and should win.

If level-headedness prevails, and the two sides can come to some accommodation, is there a way for them both to come out of the suit having won?

I think there is, because there are benefits to enriching news clips with captions that go beyond helping deaf and hard of hearing people.

Many people without hearing difficulties also use captions, especially the many office workers that web analytics and contextual research have identified browsing news sites at their desks in their lunch hours. Given the prevalence of open-plan offices, many of those who don’t have headphones with them will turn on captions or avoid online video.

The other benefit is findability and SEO. Clips enriched with captions (and/or transcripts based on these captions) are clips whose content can be indexed by search engines in the same way as text on a webpage (CNET reported a 30% increase in traffic after providing transcripts for videos).
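As a rough Python illustration of how captions become indexable text (the sample cues are invented), the same timed cues can be flattened into a plain transcript for the clip’s page, which crawlers can then index like any other text:

```python
# Illustrative only: flatten timed caption cues into an indexable transcript.

def cues_to_transcript(cues):
    """cues: list of (start, end, text) tuples; returns one plain string
    suitable for placing on the clip's page for search engines."""
    return " ".join(text for _start, _end, text in cues)

cues = [
    (0.0, 2.5, "Good evening, and welcome to the news."),
    (2.5, 6.0, "Our top story tonight..."),
]
transcript = cues_to_transcript(cues)
print(transcript)
```

Because the transcript is derived mechanically from captions that already exist, this SEO benefit comes at almost no extra cost once the caption workflow is in place.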

How should CNN and other broadcasters react to this case?

My recommendation is that all broadcasters place a higher priority on investigating how to put in place the right workflows to enrich their news clips with captions.

They may be required to do so by legislation at some point.

In the meantime, being first to enrich their video with captions may give them a new unique selling point: that their clips achieve higher Google rankings and so reach a wider audience.

I’d be happy to help any broadcasters or online video providers investigate how captions could be added to their business-as-usual video production processes, based on our existing experience at Hassell Inclusion.

Please contact us if we can be of any help.
