Black Bars, White Text
It used to be a black box that sat on top of our television.
My parents paid a modest deposit to lease the box indefinitely from our local telecom company, occasionally returning it for repairs or exchanging it for an upgraded model when one was available. My childhood was filled with hours spent obsessively flipping through the television channels in search of a program whose signal the box could decode and display on screen as black bars with white text. It was these words that opened up new worlds for me—worlds of dialogue, news, and sometimes even music—allowing me to improve my English comprehension and train myself to recognize how words sounded, especially words I had only ever encountered through reading.
These black bars with white text gave me access to the hearing world.
Access to spaces that I, like many deaf people, was often barred from, because we were told it was too costly or technologically impossible to provide on a regular basis. Yet this access—closed captioning—was, and is, essential for our mental and social wellbeing.
Captions are words displayed on screen that provide the speech or sound portion of a program’s audio. With this access, deaf people can better comprehend speech and retain information longer while improving reading literacy. Studies have also demonstrated that captions assist language comprehension and retention in people whose first language is not English, and as Sean Zdenek explains in Reading Sounds, even people who have difficulty processing sensory or speech information have reportedly found benefits from closed captioning.1 Moreover, captions provide deaf people with the ability to communicate more freely with their hearing peers, as they’ll have access to the same social references—such as the plot of a popular television show, or song lyrics displayed for a music video.2 Captioning thus presents a sociological breakthrough for deaf people.
Though the history of closed captioning has largely been framed as a history of legislative changes for accessibility and technological progress that turned captioning decoder set-top boxes into decoder chips, it is also a social history.
Captioning emerged out of protest.
It began with Cuban-American silent film actor Emerson Romero (1900–1972), who performed under the screen name Tommy Albert during the 1920s and was one of five deaf actors working in the industry.3 As was typical in small production companies, Romero also edited the film reels, wrote and corrected scripts, and wrote the intertitles—the dialogue or information text shown in between scenes. The introduction of “talkies” ended his acting career and made intertitles redundant. By 1947, guided by his experiences at the production company and responding to the deaf community’s request for accessibility in film, Romero began creating a captioned library by purchasing various titles and splicing subtitles between picture frames. He rented out the captioned films to deaf schools and clubs. Although his method was the first technique for captioning films, it was widely considered unsatisfactory, if not crude: it interrupted the flow of film and dialogue, damaged the soundtrack, and significantly extended the viewing length. Lacking funds and support from the film industry, Romero eventually abandoned this work.
Romero’s technique did catch the attention of Edmund Burke Boatner (1903–1983), superintendent of the American School for the Deaf, who founded the non-profit Captioned Films for the Deaf company (CFD) with C. D. O’Connor. CFD adapted a Belgian company’s technique of etching captions in films (printing them directly on the master film copy) and distributed captioned films to deaf communities; from 1947 to 1958, the non-profit captioned and distributed 29 educational and Hollywood films on its own. In 1958, U.S. President Dwight D. Eisenhower signed Public Law 85–905, which provided CFD with federal funding and support from the U.S. Department of Education. By 1979, the National Captioning Institute would expand the work of CFD to promote and provide access to television programs on ABC, NBC, and PBS. The work was time-consuming and expensive, however: it could take up to 40 hours a week to caption one television show, with going rates for stenocaptioners of up to $2,000 an hour.4
Access was still limited, so deaf people took to the streets. On May 19, 1982, the National Association for the Deaf organized multi-city demonstrations to protest CBS’s refusal to caption their programs or cooperate with the deaf community. In New York City, protesters marched to the CBS headquarters, some with signs reading “CBS, please lend us, deaf people, your ears.”5 Further protests from the deaf and disability communities led to more legislative changes, including: the Americans with Disabilities Act (1990), which considered captioning as an auxiliary aid that must be provided by businesses and public entities; the Television Decoder Circuitry Act (1990), which mandated that closed-caption decoder chips be built into all television sets larger than 13 inches, a requirement later expanded to digital televisions under the 1996 Telecommunications Act; the 1998 amendment of Section 508 of the Rehabilitation Act (1973), which required federal agencies to make their products and services accessible; and the 2010 Twenty-First Century Communications and Video Accessibility Act, which required broadcasters to provide captioning for online video content and mandated captioning for all video programs on small screens (e.g., cell phones and tablets).
The growth of social media and the increasingly digital transmission of information poses challenges to how deaf people can access online content—challenges that include the quality of automated captioning tools (aka “craptions”) that are only in place to meet the bare minimum of the 2010 Act. The inadequacy (or complete lack) of online captions led deaf activist Rikki Poynter to launch a hashtag protest campaign in 2016, #NoMoreCRAPtions.6 While the campaign has drawn awareness to the issue of poor auto-captions available on digital content, the fact remains that captioning has barely improved since the 1970s. As Elizabeth Ellcessor argues in Restricted Access, however, sufficient captioning is only the starting point: true media accessibility will only be achieved through concerted “negotiation, innovation, and resistance.”7 Ellcessor emphasizes that activism and protests—the price of inclusion in egalitarian and progressive participatory cultures—may often exclude disabled people, thus forcing them to find non-conventional approaches like Poynter’s hashtag campaign.
Even digital participatory activism has its limitations. The need for captioning becomes acute during a global event, when breaking news and real-time social media videos are shared online at a rapid pace. If there ever was a time for black bars with white text, it’s during a crisis.
The visibility of the Black Lives Matter movement in 2020 meant we saw thousands of videos and images from the frontlines circulating quickly on social media and the mainstream news. Because these postings were immediate—and urgent—they were mostly shared without accessibility in mind. This meant deaf people could not understand video or audio dialogue, and without alt-text descriptions, blind and low-vision people could not use their screen readers to adequately translate the fragmented documentation of protest and rage.
On Twitter, frustrated by the lack of captioning or transcripts of videos, I started responding to every BLM video I saw by asking for transcripts. Strangers worldwide took the time to type up transcripts in their replies, or else directed me to another video that was captioned, or to where a transcript was posted. After a few days of doing this, several regular transcribers told me that I could simply send them a private message for captions, which they would then send back to me to post publicly, hopefully making these videos more accessible for everyone. I was not the only one requesting greater access to BLM content, but I approached it as a form of digital activism, a protest within a protest to make room for those who were being excluded, especially those (like myself) who were unable to join the streets for reasons of safety, particularly during a pandemic.
I demanded black bars with white text.
Want space for deaf people in the hearing world? Give us our technology.
Within a week of BLM content being regularly shared online, a small group of disabled volunteers and allies (including deaf Twitter users and CODAs—“children of deaf adults”) banded together to form ProtestAccess, a digital collaborative that would caption, transcribe, and provide alt-text for any social media BLM content.8 The group provided access to spaces that were barricaded or limited and has grown to include over two hundred global volunteers and a regular roster of tech designers devoted to making sure content is accessible. Since September 2020, they have also organized panels featuring disabled people to circulate within the community the numerous ways in which disability access can be improved for everyone.
ProtestAccess’s vision is “a world in which we need not exist because accessible media is the standard.”9
This should be the mandate we all carry forward. Captions benefit everyone.
Jaipreet Virdi is a scholar-activist and a historian of medicine, technology, and disability. She is Assistant Professor at the University of Delaware, author of Hearing Happiness: Deafness Cures in History (University of Chicago Press, 2020), and co-editor of Disability and the Victorians: Attitudes, Legacies, Interventions (Manchester University Press, 2020). In addition to publications in academic journals, her work has appeared in The Atlantic, Slate, Wellcome Collection, and New Internationalist.
1. Zdenek, Reading Sounds, xi. Zdenek adds that captioning sounds in particular is not an objective process, but rather requires interpretation on the part of the captioner about how much information to convey and how to capture that information. Deaf artist Christine Sun Kim’s work [Closer Captions] (2020) shows how interpretation is more valuable and descriptive when it does more than just convey facts about sounds.