I’m not going to miss COVID-19 or handmade sneeze guards or face-mask etiquette, but I am going to miss watching all the daily televised press briefings with the American Sign Language interpreters. Their wild pantomimes are so compelling, I sometimes forget which elected official is speaking — unless it’s the one who sounds like Chris Cuomo. That guy I’d recognize in a darkened alley.
Unlike eye surgeons or chick sexers, ASL signers have a visceral dedication to their jobs. They communicate with exuberant yet controlled upper-body dance moves and facial expressions that viewers credit with getting them to sit through otherwise intolerable briefings. Smart producers put the interpreter beside the speaker, or even in the foreground, so the signing show takes up most of the screen. Control-freak politicians put them on a separate camera and shrink them down to picture-in-picture size.
Sign language interpreters are known for their “facial grammar,” a way of conveying tone and emphasis that is part of the ASL patois. (One way you knew the signer at Nelson Mandela’s funeral was a phony was that he was as stone-faced as a UN interpreter.) When Ontario Premier Doug Ford used his briefing to berate grocers for jacking up the price of disinfecting wipes, his interpreter, Desloges, did his best to replicate the premier’s “angry dad approach,” in his words. He came off looking more like an angry elementary-school teacher, but Ford appreciated the effort.
In recent days the COVID briefings have been replaced or augmented by protest briefings, giving the signers an extended time in the limelight, which to their credit they mostly shun. Before they step off stage, I’d like to point out the reason they’re there in the first place. It’s not because local officials thought it would be a great service to their deaf and hearing-impaired constituents. Today’s ASL rock stars are the result of a half century of struggle by advocates for the disabled to make the world’s most vital medium — the television — accessible to all.
That struggle often involved passing regulations and fighting in court to get broadcasters and officials to do the right thing. And it began fifty years ago this summer with an accident of technology. The government agency behind the atomic clock wanted to use TV signals to send out accurate time to the public. The idea was to take part of the broadcast spectrum that wasn’t being used for picture and transmit data over it. It was a clever but ultimately unworkable idea, and yet, like Post-it glue and Roundup herbicide, the failed experiment wound up launching a billion-dollar business: closed captions.
ABC had already struck a deal with PBS to rerun that evening’s World News Tonight on public TV stations with subtitles. This was known as open captioning because you couldn’t shut it off.
Closed captioning simply meant you had to turn the captions on to see them. Public broadcaster WETA developed the technology and showed it to the Federal Communications Commission in 1973, and the FCC eventually wrote a rule setting aside the invisible line 21 of the TV signal for closed captions. Even then it would be seven years before captioning became a nightly reality in prime time, and not until 1982 did real-time captioning technology come along.
The FCC is the agency that regulates America’s electronic pipelines, and it takes a lot of flak, often deservedly so, but its advocacy for the disabled has been strong. (Congress, which passed the Americans with Disabilities Act and subsequent legislation empowering the FCC, merits a hat tip.) Over the years it has required closed captioning for more and more video content, including content that is put online. Virtually all broadcast and cable TV must now carry captions. Stations that don’t use ASL or on-screen graphics when broadcasting emergency alerts can expect to pay big fines, although the social opprobrium at this point is probably more costly. You don’t want to be the broadcaster or government official who gives short shrift to the deaf. (Unless, that is, you’re Chris Cuomo’s brother, who had to be ordered by a court to show an ASL signer at his briefings.)
The blind and visually impaired have benefited as well. You may remember when stereo sound came to TV in the 1980s. The same breakthrough made room for a secondary audio program, an alternate audio track that could carry things like Spanish-language sports announcing, and the one we have on all the time in our house: audio description.
The idea, developed in the 1990s at public broadcaster WGBH, was elegant and simple. During pauses in a TV show’s audio track, a voice describes key visual details that blind or low-vision viewers cannot see. By 2000 the FCC was requiring stations to describe 50 hours of programming per quarter.
Sadly, the broadcasters prevailed in court on audio description. They don’t have to provide a lick of it if they don’t want to, even though the cost of describing an episode is less than the cost of catering for the cast and crew. (Fox is the best of the Big 4 networks; it even describes The Simpsons.) Over on streaming, Hulu and Disney+ have dragged their feet. It’s exasperating when a show like This Is Us, which NBC finally started describing this season, jumps to Hulu and is missing its description track.
But the good news is that you, the viewer with good vision and hearing, have embraced these features for the disabled, which means that someday it won’t take an act of Congress to make them ubiquitous. Netflix describes virtually everything it produces or acquires; it’s even paying for descriptions of older movies. Apple TV+ describes all its shows in nine languages.
Audio describers call attention to telling details on the screen that I’ve overlooked and remind me of the names of secondary characters that I’ve forgotten. As for closed captioning, that’s finding a whole new audience in millennials, who keep it on because it helps them multitask while watching TV. Like good web design or curb cuts at street corners, TV’s accessibility features have become increasingly popular outside the specialized group they were intended for.
But as voice-recognition software improves, real-time captioning will become more automated, and you know what happens when jobs get automated. The next time you see a sign-language interpreter on screen, enjoy the show, because you never know when it might get cancelled.
Aaron Barnhart has written about television since 1994, including 15 years as TV critic for the Kansas City Star.