Rating the Accessibility of Library Tutorials from Leading Research Universities
Video and Web-based tutorials created by libraries from 71 public universities designated by the Carnegie Classification as having the Highest Research Activity (R1) were reviewed for accessibility and usability by disabled people. The results of this review indicate that a large portion of library tutorial content neither meets the minimum legal standards nor rises to the level of functional usability. Some positive trends are noted, along with recommendations for overall improvement.
A note on language: This article uses identity-first language (for example, "a disabled person") instead of person-first language ("a person with disabilities"). The former focuses on disability as an identity, while the latter focuses on the personhood of people who are disabled. Both are important concepts, and there is currently no consensus within the disabled community, or even within individual disabled communities, about which type of language is best. Our choice of identity-first language is rooted in the issue of accessibility itself, which requires that able-bodied librarians address disability as a part of our users' identity that cannot be ignored.
Although online tutorials are widely used in libraries, not all librarians have the same understanding of what should be counted as a tutorial. For some, tutorials are videos, typically screencasts, which guide library users through a variety of research-related tasks. Others see the "tutorial" as any digital instructional tool that guides users through research-related activity. In either of these cases, the actual content of these [End Page 803] tutorials can vary greatly, from the task-based, such as renewing a book, to the abstract, such as evaluating the authority of information found on a website. Because of this breadth in both form and content, items that fall under the umbrella of "tutorial" include PDFs, dynamic HTML pages, Flash animations, YouTube or other videos, and guides or pathfinders of all varieties. If an item instructs a user in how to execute a research-related task, and it is posted on a library website, chances are someone has categorized it as a tutorial.
Tutorials, with their wide variety of formats and technologies, produce a particular accessibility challenge. Accessibility is many things, but generally it is the ability of a user to interact with content, wherever it is. More specifically within library Web design, accessibility is the design of digital content that allows users to access library materials, regardless of disability or choice of assistive technology. To assess the accessibility of the wide variety of items labeled as tutorials requires more than a single approach.
With this focus in mind, this article will review the accessibility of a randomly selected sample of library tutorials from 71 public universities with a Carnegie Classification of "Highest Research Activity" or "R1." Goals of this evaluation will be to establish common pitfalls in library-related accessible design, as well as to describe the state of accessible library tutorials within these institutions as a whole.
The scope of accessibility for this article is taken from legal precedent, established by previous litigation. The resulting case law expanded the technical list of the Americans with Disabilities Act of 1990 and the Rehabilitation Act of 1973 (and amendments thereof) into a more meaningful generality, namely: "'Accessible' means that individuals with disabilities are able to independently acquire the same information, engage in the same interactions, and enjoy the same services within the same timeframe as individuals without disabilities, with substantially equivalent ease of use."1
The legal requirements of accessibility that stem from the Americans with Disabilities Act and Sections 504 and 508 of the Rehabilitation Act have evolved in response to constantly changing technologies. They have also been shaped by over a decade of lawsuits over noncompliance in higher education, lawsuits that focus on functional usability as ensured by testing with disabled users.2 Current standards require that a website or digital object be genuinely usable by disabled people. Meeting these standards requires more than the bare-bones fulfillment of disjointed technological edicts. The whole is now considered greater than the sum of accessible-seeming parts.
The alignment of the values of the library profession with providing accessible resources is explicit in the American Library Association's "Equitable Access to Information and Library Services" Key Action Area as well as in the statement "Services to Persons with Disabilities: An Interpretation of the Library Bill of Rights." Access to electronic resources by individuals with disabilities is a civil rights issue established in librarians' commitment to diversity. Unusable Web resources can be interpreted as sending a message unthinkable in other contexts: "Separate but equal." The recent specific citations of academic library resources in settlement letters and consent decrees for inaccessibility bring this issue into stark reality for the profession.3 [End Page 804]
The Essence of Accessibility
Ideal Web accessibility for education-centered websites produces tool-agnostic usability for disabled users.4 For example, a website should be screen readable regardless of browser and regardless of screen reader. Video captions should work no matter how the video is viewed. Disabled users should not need different versions of the same assistive technology to access library materials any more than they should need both a Mac and a PC to access content.
Usability and accessibility share a need for well-structured content. Structure affects content in three ways: (1) it allows a high-level mental model of a website; (2) it gives context even when taken out of order; and (3) it gives consistency of experience.5
A well-matched, high-level mental model of a website is something that those with excellent reading skills or visual acuity might take for granted. Consistent structure allows users to parse text and visual structural clues. Such consistency is also the foundation of Web content evaluation: it guides quick judgments when happening upon a new website and informs further action. For example, a website that immediately loads 12 video ads might set off warning bells for authority. On the other hand, a website with an academic title; a template from D-Scribe, the open access electronic publishing program of the University of Pittsburgh; and the words "peer-reviewed" in the description might seem highly authoritative. Consistent and well-designed layouts communicate this immediate visual experience through structure, allowing quick analysis for those who have visual or learning disabilities.
This ability to use context to parse and make early decisions concerning credibility on a new website should be available to those with visual or learning disabilities as well. This kind of quick context, often visual in nature, is established for low-vision individuals in the form of clear headings, "skip to" links, and clear and concise alternative text when needed. Simple, obvious structure should exist at all levels of a document, so that even when skipping to a different part of a site, a user will have context for the next piece of information. No user should be forced to listen to every single piece of text on a website to understand what kind of website it is and what general information it contains.6
Finally, as WebAIM, a nonprofit organization that works to expand Web access for disabled people, indicates, structure allows consistency of experience. If a user skips to any section of a website, the way back or elsewhere should always be the same. Skipping and parsing in a nonvisual way cannot be seen as a luxury but as an essential part of usability. The difference between finding a piece of research after jumping to the correct section and getting to the same information after being forced to listen to 50 menu items and three legal disclaimers is clear: one is access, one is not. [End Page 805]
In Jodi Roberts, Laura Crittenden, and Jason Crittenden's 2011 study, 46 percent of users with a disability (which can include any disability from bipolar disorder to deafness) indicated that their disability made online courses more of a challenge.7 As libraries become more and more integrated into the learning management systems that support our resident and online courses, accessibility in online courses is crucial.
Several studies have been done on the accessibility of library websites. Investigations of general accessibility were undertaken by David Comeaux and Axel Schmetzke and by Erica Lilly and Connie Van Fleet.8 These studies revealed a significant lack of accessibility on library websites, including those of schools with library and information science (LIS) programs. Comeaux and Schmetzke commented that the lack of accessibility of LIS school library websites would likely discourage people with disabilities from joining the profession. Lilly and Van Fleet noted that their results did not depend at all on institution size or resources—the results were mediocre across the board. Instead, the choice to produce accessible content appears to be a human endeavor, one that requires human commitment and must be overseen by people, not solely handled with automation.
More specific examinations have also been completed, such as Rebecca Power and Chris LeBeau's exploration of library websites' support for the database use of those with visual impairments,9 and Kristina Southwell and Jacquelyn Slater's examination of the accessibility of finding aids for special collections.10 In each case, there was a distinct lack of overall accessibility and a need for more attention, either institutional pressure for further resources, or vendor compliance. Articles have also covered the general practicalities of tutorial accessibility. Some focus on video, such as Joanne Oud's work on screencast accessibility11 and Amanda Clossen's exploration of universal design, design usable by all people without assistance or adaptation, in video creation.12 Others address the accessibility possible using tutorial creation software, such as Diana Wakimoto and Aline Soules's 2011 article.13
The literature indicates that while accessibility of library Web pages is climbing, with a maximum of around 60 percent compliance,14 libraries still fall short of an acceptable (or legal) majority for overall compliance. Furthermore, while library websites may establish themselves as compliant using an automatic accessibility checker, these checkers are notoriously flawed at establishing authentic accessibility and usability for a disabled user.15 An accessibility checker, for instance, cannot differentiate between relevant and irrelevant text when considering the alternative text of an image. The word "image," a useful description of the image, and a conversation on an unrelated subject would each be marked as "alt [alternative] text present" and pass an accessibility check, though only one is accessible. [End Page 806]
There is also no consistent use of standards or tools in the literature. Researchers use Section 508 of the Rehabilitation Act; Voluntary Product Accessibility Templates (VPATs), which list the requirements needed for a product to conform to Section 508; various levels of Web Content Accessibility Guidelines (WCAG), which specify how to make content accessible, primarily for disabled people; and automatic checkers at their individual discretion. These different standards make it difficult to provide an overall glimpse of library content's level of accessibility, regardless of format.16 But despite this variance in method, the results are consistent: library websites continue to have limited accessibility for users.
The IMS (Instructional Management Systems) Global Learning Consortium, a nonprofit organization that works to promote learning technology in both education and business, lists three groups of accessibility choices that affect learners specifically: (1) display, (2) control, and (3) content.17 As learner-focused institutions, libraries should address these choices as they make decisions about accessibility.
Display allows users to perceive and interpret content. Because content cannot always be taken in visually or aurally by every user, accessible design is crucial. Research in 2001 indicated that audio descriptions vastly increased engagement with video among learners with visual impairments. Video captions for deaf and hard of hearing learners were included in this category as well.18
Control allows the user to engage with and alter the content for better processing. For instance, a nearsighted person can press Ctrl and the plus key to enlarge the font and images on the screen. Similarly, a visually impaired user can have the text read using a screen reader. The design of headings, as well as the content of those headings, makes a significant impact on the ability of users with limited vision to complete a task.19 In each instance, control is vital.
Last is content, which indicates alternative media for a specific item, such as a text description instead of a large multimedia interactive device. While this fulfills the legal nature of accessible design, it is not inclusive. Nor is it practical for many libraries to design and update two copies of a tutorial—one "interactive" and one less so.
To produce a sample for evaluation, the research team reviewed the online resources of all public universities with a Carnegie Classification of "Highest Research Activity" or "R1" in 2015. Through this review, they compiled a list of all items that fell within the team's qualifications as "library tutorials." The authors reviewed a total of 71 institutions. They selected public R1 institutions in part due to the institutions' overall mission of public access, as well as their size, which produced resources that often outstripped those of smaller universities.
Because of the wide interpretation of what a tutorial can be, the research team set standards for inclusion. They defined "tutorial" as any discrete original digital object that leads users through the completion of a research task, concept, or process. Selected tutorials were organized into two categories: videos and Web-based tutorials. [End Page 807]
The researchers defined "video" as any combination of container, video, and player format containing audiovisual information that plays more or less continuously. Videos did not include self-paced modules made with SWF (small Web format) or other technologies wherein a user clicks through "pages" not containing sound. In other words, video technologies that replicate multipage modules were included as single Web-based tutorials. Any videos within these modules were each included separately.
The authors randomly selected five tutorials from each category for review using the random.org random number generator. Any institution with fewer than five items in a category was excluded from evaluation within that respective category. This process resulted in 145 videos and 60 Web-based tutorials.
The team assessed both video and Web-based tutorials based on a rubric that focused on roadblocks to usability from the perspective of a disabled user, as opposed to an accessibility checklist. This rubric was designed to prioritize the reality of access for disabled users, as opposed to the conceptual "access" that exists once a site has passed an automatic accessibility checker. It focuses on the first two accessibility choices described by the IMS Global Learning Consortium.
The researchers evaluated all video criteria manually. In most cases, they watched each video in its entirety numerous times. Five data points were recorded: (1) video type, (2) captions, (3) screen-audio coordination, (4) link context, and (5) length.
For exploring correlations, the researchers divided video tutorials into two categories based on learning objective, labeled as "abstract" or "screencast." Abstract videos, marked with a 1, were videos demonstrating information literacy concepts that cannot easily be represented by simply executing a task. These included such topics as avoiding plagiarism, evaluating information, and the like. Screencast videos, marked with a 0, were videos demonstrating a task that can be executed on a computer screen, thus creating a screencast. These videos are often database or catalog tutorials.
The authors evaluated captioning for two separate criteria. They ranked the presence of captions as 3-closed captions, 2-open captions, 1-transcripts, or 0-none. Closed captions are a separate time-stamped file that plays in sync with the video.20 Because the video player reads these captions independently, they can be turned on or off with player [End Page 808]
controls. Closed captions are, arguably, the most accessible of all caption options. Open captions are part of the video file itself and are always visible as a result. Their presence can overwhelm the cognitive load for users with cognitive processing issues, but they do not require the viewer to turn them on. Transcripts are not part of a video at all—instead they are an unsynced text of the audio that accompanies the video. While reading a transcript and watching a video simultaneously, a user will likely have trouble in syncing the text to the visual content. A ranking of 0, or "none," indicated that no text version of the audio was available at all.
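As a concrete sketch of why closed captions can be toggled while open captions cannot, a Web closed caption track is commonly a separate time-stamped file such as WebVTT (the caption text and file names below are hypothetical):

```
WEBVTT

00:00:00.500 --> 00:00:04.000
Welcome to the library catalog tutorial.

00:00:04.500 --> 00:00:09.000
Start by typing a keyword into the search box.
```

The player loads this file alongside the video, so its controls can turn the captions on or off; an open caption, by contrast, is burned into the video frames themselves:

```html
<video controls src="catalog-tutorial.mp4">
  <!-- The caption track is a separate file, read independently by the
       player, so the viewer can enable or disable it -->
  <track kind="captions" src="captions.vtt" srclang="en" label="English" default>
</video>
```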
The research team also rated each instance of captioning as 1-accurate or 0-inaccurate. Accurate captions more or less repeated, word for word, the narration or dialog of the video. The authors allowed occasional mistakes, such as misspellings or slightly different wording, if they did not detract from the information communicated via the audio track (for example, the word "can't" being captioned as "cannot"). Inaccurate captions differed drastically from the audio of the video. Inaccurate captions can result from incorrect timing, poor auto-transcription, or transcription error.
The coordination of screen and audio refers to audio descriptions of what is happening visually during a video. This is particularly important in the case of screencast demonstrations, which tend to talk about one process while completing an entirely different one. This issue affects blind or low-vision users who may want to listen to the narration of the video, as well as sighted users who cannot focus on two things at once, whether due to a cognitive disability or simply because they are learning something new. The researchers rated this criterion on a four-point scale consisting of 3-no significant issues, 2-a single issue, 1-multiple issues, and the least accessible scenario, 0-no audio to match the screen.
Link context identifies the quality of audio description of links selected. Link context errors can take place during screencasting, when the audio description does not name the link but instead instructs the viewer to "click here." This can affect not only blind or low-vision users listening to the audio but also sighted users who do not immediately pick up on exactly where to click. The researchers rated link context from 2-no significant issues, to 1-one issue, to 0-multiple issues.
The team ranked length by ranges: 2-less than three minutes long, 1-less than six, and 0-more than six. Due to both average attention spans and cognitive load concerns, shorter videos are preferred. [End Page 809]
Web-Based Tutorial Evaluation
The research team rated Web-based tutorials based on the following data points: headings; alternative text for images; skip-to-content links, which jump the user down to the main content; tables; text chunking; and findability. The team used WebAIM.org resources on HTML accessibility to determine both good practices and problems, except for total headings.21 The category of total headings was decided upon because of complaints from visually impaired users of screen readers about heading accessibility. The ideal number of total headings is a compromise between the true complexity (for good or ill) of websites at larger institutions and reasonable cognitive load for a user. This is a possible issue for further study, and care should be taken in interpreting this category.
To complete this ranking, the researchers used two tools, AInspector and Functional Accessibility Evaluator. AInspector is an automatic accessibility checker. What sets it apart from many others is that it provides clear information about why a feature is important and how to manually check it. Jon Gunderson at the University of Illinois Urbana-Champaign created this tool with the Open Accessibility Alliance and OpenAjax Accessibility Task Force, two groups that support the development of open source tools for evaluating the accessibility of Web-based resources. Functional Accessibility Evaluator is for site-level (many Web pages) accessibility checks. It was also created by the Open Accessibility Alliance with the same design philosophy.
The researchers used these automatic tools only to check headings, alternative text, and tables because the coding is unambiguous: either heading tags, alt attributes (text describing an image that could not be rendered), and table tags exist, or they do not. They followed up with manual checks when there was the slightest hint of irregularity. Due to the complexity of Web-based accessibility standards, such standards will be covered in more detail in the discussion of the results.
The authors determined appropriate heading levels by running a "Headings" report in the AInspector add-on for each page of a tutorial. They then followed up manually by scanning the generated HTML source where necessary. They determined total headings by a count from this report and, rarely, correlated with a report from the Functional Accessibility Evaluator for site-wide data. They only used Functional Accessibility Evaluator when the number of headings varied between pages. Many tutorials were consistent throughout. Tutorials lost points in the two heading categories for the following reasons or any combination that rose to the level of being tedious or difficult to use:
• Zero or more than two first-degree headings
• Headings used for visual style
• Headings out of order (for example, h1, the most important HTML heading, after h2, a less important heading)
• Nesting issues, if confusing
• Mostly flat structure (for example, using 20 h2s and no h3s). [End Page 810]
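As an illustration of the heading criteria above (the section titles here are hypothetical, not drawn from any reviewed tutorial), the same page outline is sketched first with several of the listed problems and then in an accessible form:

```html
<!-- Problematic: no single h1, headings out of order,
     and a level chosen only for its visual style -->
<h3>Library Tutorials</h3>        <!-- page opens at h3 -->
<h1>Searching the Catalog</h1>    <!-- h1 appears after h3 -->
<h4>A tip about keywords</h4>     <!-- h4 used for its smaller font -->

<!-- Accessible: one h1, with subsections nested in order -->
<h1>Library Tutorials</h1>
<h2>Searching the Catalog</h2>
<h3>Basic Search</h3>
<h3>Advanced Search</h3>
<h2>Renewing a Book</h2>
```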
The researchers determined appropriate alternative text or null (alt="") using the Text Equivalents > List of Images report in AInspector for each page of the tutorial and following up with a manual check (using "Inspect Element" in Firefox, viewing the source, or both). They manually checked each image for the appropriate alt attribute describing said image or null (empty) attribute, not only those flagged with warnings. Tutorials lost points for the alternative text category if they broke more than a few guidelines (for example, some alt attribute where there should be null, many instances of "link to" type descriptions, or many verbose alt attributes). Ultimately, the determination hinged on the question "Are any issues significant enough to make using the site tedious or very difficult for a disabled user?" Mistakes where clear effort had been made to provide alternative text did not result in a lost point unless grievous.
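The distinctions the researchers drew between appropriate, verbose, and null alternative text can be sketched as follows (file names and descriptions are hypothetical examples, not taken from the reviewed tutorials):

```html
<!-- Unhelpful: passes an automatic checker but conveys nothing -->
<img src="catalog.png" alt="image">

<!-- Verbose: a lengthy description a screen reader user must sit through -->
<img src="catalog.png" alt="Screenshot of a web page showing the library
catalog search interface with a search box, two dropdown menus, a logo,
and a green search button in the upper right corner">

<!-- Concise and descriptive -->
<img src="catalog.png" alt="Catalog search box with keyword field and Search button">

<!-- Null alt for a purely decorative image, so screen readers skip it -->
<img src="divider.png" alt="">
```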
The authors investigated skip-to-content links manually by viewing the source of each page of the tutorial and searching for "skip," then scanning the top of the source if none was found. They also confirmed the skip target and tested the link, as well as the ability to gain focus via a keyboard. Actual usability was the standard for a point in the skip category. Any of the techniques listed in WebAIM's article on skip navigation links counted as a point.22 No sites lost a point for too many skip links or overly confusing wording. Reductions in score were given only for extant but nonfunctional links. Tutorials received points for the following, as provided by WebAIM:
• Providing visible links at the top of the page
• Providing visible links elsewhere on the page
• Making the link invisible
• Making the link invisible until it receives keyboard focus.23
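The last of these techniques, a link invisible until it receives keyboard focus, can be sketched in minimal form (class names, ids, and page content here are hypothetical):

```html
<style>
  /* Positioned off-screen until the link receives keyboard focus */
  .skip-link {
    position: absolute;
    left: -10000px;
    top: 0;
  }
  .skip-link:focus {
    left: 0;
  }
</style>

<a class="skip-link" href="#main-content">Skip to main content</a>

<nav>
  <!-- a long menu a keyboard or screen reader user would otherwise
       have to traverse on every page -->
</nav>

<main id="main-content">
  <h1>Evaluating Sources</h1>
</main>
```

Sighted mouse users never see the link, but a keyboard user tabbing into the page encounters it first and can jump directly past the navigation.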
The researchers investigated tables using both the Navigation > Data Tables report in AInspector and Style > Tables. They also manually searched the source for "<table" or "<tr" when appropriate. They manually confirmed the presence or absence of table headings, scopes to associate header cells and data cells, and captions by using "Inspect Element" in Firefox or searching the source for "<table" or "<th." They granted full points to pages with no tables because the designers likely chose to not use tables where no tabular data were present or to avoid accessibility issues. Tutorials lost points in the two table categories for the following reasons or any combination that rose to the level of being tedious or difficult to use:
• used for layout
• any nesting without a compelling reason
• use of cell spanning (a cell that spans two columns)
• use of multiple header rows or columns
• use of headers or identifiers instead of scopes to associate header cells and data cells. [End Page 811]
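A table meeting these criteria uses a caption, header cells, and `scope` attributes to associate headers with data, with no layout tables, nesting, or spanned cells. A minimal sketch, using the caption rankings described earlier as sample data:

```html
<table>
  <caption>Caption presence rankings</caption>
  <tr>
    <!-- scope="col" ties each header to the cells below it -->
    <th scope="col">Ranking</th>
    <th scope="col">Caption type</th>
  </tr>
  <tr>
    <!-- scope="row" ties the row header to the cells beside it -->
    <th scope="row">3</th>
    <td>Closed captions</td>
  </tr>
  <tr>
    <th scope="row">2</th>
    <td>Open captions</td>
  </tr>
  <tr>
    <th scope="row">1</th>
    <td>Transcript</td>
  </tr>
  <tr>
    <th scope="row">0</th>
    <td>None</td>
  </tr>
</table>
```

With this markup, a screen reader can announce the relevant header before each data cell, rather than leaving the user to infer position from reading order.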
Text length was not solely based on raw sentence and paragraph counts. This method of determining appropriate chunking would have excluded many Web-based tutorials with clear efforts to chunk but with some short third sentences or paragraphs. The researchers tried to be as consistent as possible and found most tutorials readable with only a few exceptions.
Both Video and Web Evaluation
The team determined findability manually by examining library home pages and locating tutorials or learning objects. A first-level link to tutorials included any home page with a link to a list of tutorials, whether in menus, in the footer, or elsewhere. A second-level link was determined to be any page that linked to a page which in turn linked to a list of tutorials. Third or greater was any number of intermediate steps beyond. For some sites, the list of tutorials included a subset that was a link to another page with a list of more specific tutorials. The researchers counted this arrangement at the higher level because they determined the difficulty in accessing the rest to be insignificant. The rankings were as follows: 2-first level, 1-second level, and 0-third or further.
Exclusions and Limitations
The researchers could access only public-facing library tutorials. Any library tutorials offered only within a learning management system, or that required any sort of user log-in, were not included. These restricted-access tutorials might differ considerably in construction or content, but the barrier to access for research is significant. Additionally, any tutorial buried deep within a content management system and not visibly linked was not discoverable and therefore not included. In any case, users would likely have extreme difficulty finding such deeply buried resources, so their effective accessibility is essentially zero.
Library guides, both from Springshare's LibGuides product as well as similar in-house products, were not incorporated into this review because they did not fall under the inclusion standards. Guides frequently list links to relevant databases for a certain subject or course, along with brief descriptions. By nature of the sheer volume and variety of guides at most institutions, this limitation also excluded the atypical guides that fell within the tutorial inclusion standards. Though not a part of this study, multiple accessibility reviews of guides alone would be welcome additions to the literature.
Results and Discussion
Overall Tutorial Count
Based on our initial exploration, 52 percent of R1 institutions had no public-facing library Web-based tutorials created outside of the LibGuides platform. Of the remaining 48 percent of institutions, 65 percent had only one to four tutorials, leaving 12 institutions whose 60 reviewed tutorials were randomly sampled from a total pool of 208 Web tutorials. [End Page 812]
Similarly, 45 percent of R1 institutions had no public-facing library tutorial videos at all. However, of the remaining 55 percent, 40 percent had at least five videos, leaving 145 videos to review from 29 different institutions.
The authors classified only 36 of the 145 videos (25 percent) as abstract videos. The remaining 109 videos were some variety of screencast. This included videos made with screen capture software as well as those made with a series of narrated screenshots.
Due to the ease and popularity of screencasting software, both free and otherwise, and the challenges that accompany production of non-screencast videos, it is not surprising that only a quarter of video tutorials cover topics that cannot be easily screencast. It seems likely that these learning objectives are covered in other ways, either through in-person instruction or Web-based tutorials. Though this data point was included to investigate such issues, there was no correlation in the accessible design of abstract versus screencast type videos.
Of the randomly selected videos reviewed, 77 percent had some sort of captions. Fifty-four percent had closed captions, 23 percent had open captions, and 23 percent had no captions at all. No videos of those reviewed were accompanied by transcripts. [End Page 813]
Of the closed-captioned videos, 29 percent were captioned inaccurately. Of the open-captioned videos, 33 percent were captioned incorrectly. Since incorrect captions are essentially nonsensical to the viewer, these inaccuracies brought the overall proportion of usably captioned videos down from 77 percent to 52 percent.
Incorrect closed captions universally resulted from YouTube automatic captioning. This system, which automatically translates the spoken audio of a video into text, is seriously flawed. It rarely provides an accurate or even sensible approximation of the spoken words it tries to render. For example, a passage from Aldous Huxley's novel Ape and Essence—"Surely it's obvious. Doesn't every schoolboy know it? Ends are ape-chosen; only the means are man's"—is automatically captioned as, "shirley it's obvious doesn't have a rich school right now it and zarate chosen only means arm bands."24 Despite this obvious flaw, most recent YouTube uploads have automatic captioning, limited only if the video owner opts out of the service or the narration audio is too low in quality for the system to work.
It is likely misleading to say that roughly 30 percent of closed-captioned videos were inaccurately captioned. It is more plausible that these videos effectively had no captions at all. Presumably the video creators, unaware of YouTube's practice, are equally unaware of the deeply flawed captioning attached to their creations. The mistake is easy to remedy: automatic captions in YouTube can be manually edited, allowing for easy closed captioning. Without any awareness of the automatic captions, however, or the knowledge to fix them, creators cannot do so.
Issues in open captioning were more difficult to explain, though a third of these videos were also inaccurately captioned. Major issues included:
• Captions did not sync in any way with the audio. They either came much too early or far too late, making it impossible to see the demonstration in time with the captions.
• Captions were misspelled to the point of nonsense.
• Captions had nothing to do with the audio whatsoever and were used to make auxiliary points.
These issues do not solely affect accessibility—they are overall design flaws, detracting from the ability of the video to achieve its pedagogical goals. Because open captions cannot be turned off, their errors are even more egregious. [End Page 814]
Though of great importance to people with hearing loss, captions do not solely benefit that population. Closed captions are also useful for those with bad computer audio quality, those at public computer terminals who cannot use sound, and those for whom English is not the primary language.
Considering the ease of including captions, especially on YouTube videos, it is both inexcusable and against legal regulations that nearly half of the library tutorial videos reviewed were either not captioned or captioned so poorly as to be nonsensical. If the state of captioning at these R1 institutions is representative of library video captioning generally, it is cause for serious concern.
Of the videos reviewed, 9 percent had no audio whatsoever. Twenty-five percent had multiple errors in screen-audio coordination. Ten percent had a single error, and 56 percent, or 82 videos, had no screen and audio coordination issues at all.
Overall, 34 percent of the videos reviewed were inaccessible due to screen-audio matching issues. Of these, some had no audio whatsoever, making the content essentially nonexistent for low-vision users and confusing for everyone else.
The remainder of screen-audio coordination issues most frequently occur when librarians explain one concept while demonstrating another. In this scenario, the second concept is never explained out loud, leading to a disconnect between what is displayed versus what is described. These coordination issues also occasionally result from delays as search results load. There is an urge to "fill up the space" to avoid wasting time, but [End Page 815] such efforts can lead to great confusion for users trying to follow along. Better practices for long waits would involve pausing the recording until the screen loads and then continuing the screencast, or cutting out dead spots in postproduction editing.
Nearly a quarter of the videos reviewed demonstrated multiple issues with link context. Six percent had only a single instance of a link described as nothing but "here" or the equivalent. The remaining 70 percent of videos had no issues at all.
Properly identifying links is vital for those who cannot visually locate a link on the screen for any number of reasons, ranging from simple distraction to low vision. Naming a link when it is mentioned is not just good accessible design, it is good overall design. Users can become lost or confused at many points during a video demonstration, and providing simultaneous visual and audio cues can prevent this.
Only 14 percent of videos lasted longer than six minutes. Thirty-seven percent ran between three and six minutes. Forty-nine percent were three minutes or less. Both anecdotal and corporate research indicate that videos longer than two to three minutes are rarely viewed in their entirety.25 This disproportionately affects users who have issues with cognitive load, concentration, and focus. However, even the average user presented with a six-minute video on a specific database will likely look elsewhere for how-to information. Among the videos longer than three minutes were several 10- to 25-minute lectures on concepts easily conveyed by a Web-based text resource.
Web-Based Tutorial Results
Appropriate Heading Levels
Only one-quarter of the Web-based tutorials had appropriate heading levels. Of the three-quarters that did not, 82 percent were improperly nested, 31 percent skipped levels, and 11 percent had duplicate headings. Fifteen sites had more than one of these issues.
Ninety-two percent of pages had fewer than 20 total headings. With these results, heading levels stand out as the single largest category of inaccessibility. Screen readers use headings for navigation, making them vital if a page is to be usable. However, 75 percent of the websites had significant issues including: [End Page 816]
• improper nesting
• skipping levels
• headings used for decoration
• headings longer than 65 characters
• headings generated by scripting that are meaningless, intended to be invisible, or both
• no headings or landmarks
• headings containing unhelpful text (for example, information, more, other).
Generally, library Web designers might not understand the purpose of well-written headers beyond organizing online information so that it is easy to find. Users of screen readers can navigate pages in multiple ways: by reading an entire page straight through; by getting a list of headings, links, or HTML5 sections; by skipping navigation; or by jumping from paragraph to paragraph. The problem with skipped levels and improper nesting then becomes apparent: a list that goes through headings numbered 3, 3, 3, 2, 1, 3, 3, 3, 6 requires interpretation instead of allowing easy navigation as someone using a screen reader clicks through each heading. Similarly, a list of "Information, Hours, More, More, More" contains little distinguishing information.
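The "skipped level" pattern described above is mechanical enough to check automatically. As an illustrative sketch only (the function name and page fragment below are hypothetical, not part of the study's methodology), a few lines of Python's standard html.parser can collect a page's heading levels in document order and flag any jump of more than one level:

```python
from html.parser import HTMLParser

class HeadingAudit(HTMLParser):
    """Collect h1-h6 heading levels in document order."""
    def __init__(self):
        super().__init__()
        self.levels = []

    def handle_starttag(self, tag, attrs):
        # Match exactly h1 through h6 (and not, say, <hr> or <header>)
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            self.levels.append(int(tag[1]))

def skipped_levels(html):
    """Return (levels, problems); a heading more than one level deeper
    than its predecessor is flagged as a skipped level."""
    parser = HeadingAudit()
    parser.feed(html)
    problems = [(prev, cur)
                for prev, cur in zip(parser.levels, parser.levels[1:])
                if cur > prev + 1]
    return parser.levels, problems

# Hypothetical page: h1 jumps to h3, and h2 jumps to h6
page = "<h1>Library</h1><h3>Hours</h3><h2>Databases</h2><h6>More</h6>"
levels, problems = skipped_levels(page)
```

A check of this kind catches the structural skips but not the equally serious problem of duplicate or meaningless heading text, which still requires human review.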
Five Web-based tutorials, all from one institution, made extreme overuse of headers. Such overuse does not appear to be a viable metric of general accessibility now, though it could be an issue of further study. There do not appear to be any published guidelines on the number of headers, but some library websites have reached a level of complexity that is problematic. For a user with a screen reader, listening to a list of 20 or more headers is unlikely to serve as an effective method of navigation.
Forty percent of Web-based tutorials had appropriate alternative text (hereafter referred to as "alt text") for images that could not be displayed. The other 60 percent had considerable issues. Of the 60 percent that received no points, a large majority had some existing alt text, but with significant issues. The remainder had alt text where it should have been null.
Appropriate alt text was the second most problematic area of accessibility, with 60 percent of the Web-based tutorials inaccessible. A major issue was having many images with the same alt text: using the same text for every single image on a page, or having many alt texts with "image of," "picture of" and the like. Another problem was information that only made sense if viewing the image, including visual navigation cues. [End Page 818]
Another common concern was a mismatch between good intentions and useful alt text. In some instances, websites should have contained null alt text (alt=""). This includes headers that are links with the same information twice (for example, Standard University as a link to the homepage with an alt text of "Standard University" or "Standard University logo"). This duplication causes screen readers to read the same text or the equivalent twice. Some sites also had alt or alt= as alt attributes, which will not function as null alt text (they need to be alt="").
There were decorative elements with alt texts (such as "bullet" or an image of a bullet point). Some alt texts were excessively lengthy. Some alt texts were both duplicate and unhelpful (for instance, all the images on a page had an alt text of "screen shot"). Some alt texts demonstrated clear evidence of effort to address accessibility but could not be understood without seeing the images (for example, "images showing how to get to government documents" where the image is a screenshot with an arrow to the link).
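Several of the alt text failures described above, such as missing attributes, misused null alt, and duplicated text like "screen shot," can be distinguished programmatically. The following sketch, with hypothetical markup and class names not drawn from the study, shows one way to classify them using only Python's standard library:

```python
from html.parser import HTMLParser
from collections import Counter

class AltAudit(HTMLParser):
    """Classify <img> tags: no alt attribute, null alt (alt=""), or text."""
    def __init__(self):
        super().__init__()
        self.missing = 0   # no alt attribute (or a bare valueless alt, which
                           # also fails to act as null): screen readers may
                           # announce the file name instead
        self.null = 0      # alt="": decorative image, correctly skipped
        self.texts = []    # non-empty alt texts, checked for duplicates below

    def handle_startendtag(self, tag, attrs):
        self.handle_starttag(tag, attrs)

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        alt = dict(attrs).get("alt")
        if alt is None:
            self.missing += 1
        elif alt == "":
            self.null += 1
        else:
            self.texts.append(alt)

# Hypothetical page fragment illustrating the patterns found in the study
page = ('<img src="logo.png">'
        '<img src="rule.gif" alt="">'
        '<img src="s1.png" alt="screen shot">'
        '<img src="s2.png" alt="screen shot">')
audit = AltAudit()
audit.feed(page)
duplicates = [t for t, n in Counter(audit.texts).items() if n > 1]
```

A mechanical pass like this can flag the duplicates and missing attributes, but judging whether a given alt text makes sense without seeing the image remains a human task.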
Sixty percent of Web-based tutorials had a functional skip link (or links). Seven tutorials had nonfunctional links. Skip-to-content links seemed a category where many Web-based tutorials were successfully accessible. Of those that were not accessible, some had skip links that did not actually work, including links misspelled as "kip" (a typo that breaks the functionality) and a few instances where the built-in skip links of the Drupal content-management platform were not used. Some pages used a header as a faux skip link. Since this would work only for a screen reader using header navigation, it was not counted.
The largest category of inaccessibility in this area was simply having no skip link at all. Of the 24 sites that lost a point here, 15 had no skip link. The other nine made efforts, but they did not rise to the level of functionality. [End Page 819]
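A minimal functional test for a skip link is whether its fragment identifier points at an id that actually exists on the page; the misspelled "kip" variety fails exactly this kind of check. The helper below is a simplified, hypothetical sketch (a real audit would use a proper HTML parser rather than regular expressions):

```python
import re

def skip_link_works(html):
    """True if the first link whose text mentions 'skip' points at a
    fragment whose id actually exists on the page. A rough sketch, not
    a full accessibility audit."""
    link = re.search(r'<a\s+href="#([^"]+)"[^>]*>([^<]*[Ss]kip[^<]*)</a>',
                     html)
    if not link:
        return False
    target = link.group(1)
    # Does any element on the page carry the id the link points at?
    return bool(re.search(r'id="%s"' % re.escape(target), html))

# Hypothetical fragments: identical skip links, but only the first page
# actually contains the id the link targets
good = '<a href="#main">Skip to content</a> ... <main id="main">...</main>'
bad = '<a href="#main">Skip to content</a> ... <main id="content">...</main>'
```

Checks of this sort verify only that the target exists; whether keyboard focus actually moves to it still needs testing with a screen reader or keyboard.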
Sixty-two percent of Web-based tutorials used tables appropriately. Of these, only five had tables used for appropriate data; the rest used no tables at all. Thirty-eight percent of the pages used tables for layouts of some kind. Some of these layouts came from third-party widgets, including library chat and Flickr badges used to display photos on the Flickr image-sharing website. The researchers did not deduct points from library sites whose tables came from a search box in the school-wide template, though they did deduct points for search boxes within library-wide templates. The essential consideration was locus of control: a library can choose to use an inaccessible widget or search box, but it likely cannot opt out of an institution-wide header and footer.
Appropriate use of tables was successful only in terms of nonuse. Only two Web-based tutorials had tables compatible with assistive technology, and both still lacked some accessibility features. Thirty-eight percent of Web-based tutorials used tables for layout purposes. This is a critical accessibility issue because layout tables are unusable and irritating for users of screen readers. It is likely a gap in Cascading Style Sheets training, because the offenders included not just outdated "legacy" tutorials still in use but also recently created ones.
The most common use of tables at the time of this study is in one-cell search boxes, menus, and "fat footers," horizontal bands at the bottom of the page full of content such [End Page 820] as statements about the site, a way to subscribe, links to other articles, a sitemap, a back-to-the-top link, a categories list, and recent comments. Most of these tables live at the university-wide level, but some are library-specific. With the use of other accessibility techniques (Can the search box form be submitted with a keyboard? Can a user skip to it somehow? Does the table-using fat footer live in an HTML5 footer section with a header?), these issues may not form a significant barrier.
The biggest issue with Web-based tutorials seems to be the practical skill of constructing an accessible table. "What constitutes truly appropriate data?" is a critical first question. Having the designer listen to a screen reader read out a finished table might be the most effective lesson because it would sound like "table with 2 columns and 3 rows vertical bar vertical bar education major masters degree library science bee ai history," information that makes little sense to the user of a screen reader.
Only two Web-based tutorials used appropriate table headers. None were scoped, but the research team gave points based on the linearization and clarity of the headers (those that rose to the level of functionally accessible).
There is a clear lack of accessible design knowledge when it comes to tables. None of the tables used summary attributes to describe the table's content, table head tags, table footer tags, or scope attributes to identify whether a cell served as a header for a column, a row, or a group of columns or rows.
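The missing pieces listed above (a caption, distinct head and body sections, and scoped column headers) are straightforward to emit. The generator below is a hypothetical sketch of the minimal markup, not a recommendation of any particular tool or library:

```python
def accessible_table(caption, columns, rows):
    """Emit a data table with a caption, a table head, and scoped column
    headers -- the minimal markup the reviewed tables lacked. A sketch
    only; real pages would add row headers and styling as needed."""
    head = "".join('<th scope="col">%s</th>' % col for col in columns)
    body = "".join(
        "<tr>" + "".join("<td>%s</td>" % cell for cell in row) + "</tr>"
        for row in rows
    )
    return ('<table><caption>%s</caption>'
            '<thead><tr>%s</tr></thead>'
            '<tbody>%s</tbody></table>') % (caption, head, body)

# Hypothetical data echoing the degree example in the text
html = accessible_table(
    "Degrees held by subject librarians",
    ["Major", "Degree"],
    [["Education", "MA"], ["Library science", "MLS"]],
)
```

With scope="col" on each header, a screen reader can announce the relevant column header before each cell, instead of the undifferentiated stream of words quoted above.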
Ninety-three percent of Web-based tutorials had appropriate chunking with only a few relying on very long paragraphs to convey information. This does not appear to be an accessibility issue, based on the data collected.
Sixty-two percent of videos required only a single click from their library's homepage to access their content. Nearly 30 percent required two clicks, and 10 percent needed three or more. [End Page 821]
Sixty-seven percent of Web-based tutorials required only a single click from the library homepage. Thirty-two percent needed two clicks, and only a single tutorial required more.
Though the data do not specifically address this issue, in many cases, understanding even the first-level links to tutorials required extensive knowledge of library jargon. This type of usability is just as much a concern for general users as for disabled users, adding another hurdle that they must jump before they can reach the help or support that they need. Librarians should be aware of the terminology used for labeling instructional materials.
The results from this and other cited studies exploring other elements of library websites indicate that there are serious deficits when it comes to library resource accessibility. These issues exist even in large, research-heavy institutions, which should be well-equipped to handle them.26 When it comes to library tutorials specifically, this study reveals serious accessibility problems in the areas of captioning videos and appropriate heading levels, but there are concerns in many other elements of video and Web tutorial creation [End Page 822] as well. This mirrors what the literature review suggests: libraries are not creating or using accessible content. This is unfair to disabled users and may even put institutions at risk for litigation.
Over 25 institutions have been targeted in letters from the Department of Education Office for Civil Rights or from the Department of Justice.27 Institutions have also been taken to court by those entities or by the National Federation of the Blind. Library websites, databases, or other materials have been cited directly in many of these letters and lawsuits. These complaints have resulted in action across institutions of higher education. The startling reality is that, despite having accessibility policies for Web content, a large percentage of colleges and universities have been grossly out of compliance for more than 10 years and must work hard to comply as quickly as possible.28
For many institutions, the weight of these rulings lies on information technology (IT) departments and not on the libraries. This is particularly disconcerting when it comes to tutorials, because they are often made in the library by librarians with some knowledge of Web or video design, but less knowledge of accessibility standards. Though accessibility is a goal that has received increasing attention, it remains a goal that libraries have not yet reached.
Perhaps the greatest challenge for libraries in creating accessible content is our own lack of expertise. Not every librarian designer experiences disability or has any familiarity with the accommodations that a variety of disabled users have at their disposal. Librarians are nonexperts trying to produce materials that require proficiency in accessible design. As a result, it is difficult for us to determine what exactly is important.
The prevalence of accessibility checklists does much to obscure what needs our focus. Without understanding the reasoning behind accessible design practices, it is hard to know exactly why one would make any design choice on a checklist. Even for those who have expertise in accessible design, there must also be conceptual context for these actions. Unfortunately, few librarians have a visceral or even practical experience underpinning what they are compelled to do. One of this paper's authors once presented a live demonstration of how a library website sounds coming from a screen reader at a library conference. The shock from this experience was palpable, entirely different from the reaction to an admonishment to use alternative text.
These elements in combination with overwhelmingly good intentions may lead librarians to make design choices that hinder, rather than help, disabled users. We as a profession need to educate ourselves not only on design practices but also on the daily lives and practices of disabled people. Our users are not checkboxes of physical abilities, but instead real people who deserve accessible, usable websites. Understanding how our disabled users interact with what we create can go a long way in producing content that is universally accessible. [End Page 823]
Amanda Clossen is the learning design librarian at Penn State University Libraries in University Park; she may be reached by e-mail at: firstname.lastname@example.org.
1. United States Department of Education, Office for Civil Rights, and the University of Montana, "Resolution Agreement," March 10, 2014, http://www.umt.edu/accessibility/docs/FinalResolutionAgreement.pdf.
2. Laura L. Carlson, "Higher Ed Accessibility Lawsuits, Complaints, and Settlements," University of Minnesota Duluth, 2017, http://www.d.umn.edu/~lcarlson/atteam/lawsuits.html.
3. American Library Association (ALA), "Key Action Areas: Equity of Access to Information and Library Services," June 28, 2015; ALA, "Services to Persons with Disabilities: An Interpretation of the Library Bill of Rights," January 28, 2009, http://www.ala.org/advocacy/intfreedom/librarybill/interpretations/servicespeopledisabilities; U.S. Department of Education, Office for Civil Rights, and the University of Montana, "Resolution Agreement."
5. Rocío Calvo, Ana Iglesias, and Lourdes Moreno, "Accessibility Barriers for Users of Screen Readers in the Moodle Learning Content Management System," Universal Access in the Information Society 13, 3 (2014): 315–27, doi:10.1007/s10209-013-0314-3.
6. WebAIM, "Testing with Screen Readers."
7. Jodi B. Roberts, Laura A. Crittenden, and Jason C. Crittenden, "Students with Disabilities and Online Learning: A Cross-Institutional Study of Perceived Satisfaction with Accessibility Compliance and Services," Internet and Higher Education 14, 4 (September 2011): 242–50.
8. David Comeaux and Axel Schmetzke, "Web Accessibility Trends in University Libraries and Library Schools," Library Hi Tech 25, 4 (2007): 457–77; Erica B. Lilly and Connie Van Fleet, "Wired but Not Connected," Reference Librarian 32, 67 (2001): 5–28.
9. Rebecca Power and Chris LeBeau, "How Well Do Academic Library Web Sites Address the Needs of Database Users with Visual Disabilities?" Reference Librarian 50, 1 (2009): 55–72.
10. Kristina L. Southwell and Jacquelyn Slater, "An Evaluation of Finding Aid Accessibility for Screen Readers," Information Technology and Libraries (Online) 32, 3 (2013): 34–46.
11. Joanne Oud, "Improving Screencast Accessibility for People with Disabilities: Guidelines and Techniques," Internet Reference Services Quarterly 16, 3 (2011): 129–44.
12. Amanda S. Clossen, "Beyond the Letter of the Law: Accessibility, Universal Design, and Human-Centered Design in Video Tutorials," Pennsylvania Libraries: Research & Practice 2, 1 (2014): 27–37.
13. Diana K. Wakimoto and Aline Soules, "Evaluating Accessibility Features of Tutorial Creation Software," Library Hi Tech 29, 1 (2011): 122–36.
14. Comeaux and Schmetzke, "Web Accessibility Trends in University Libraries and Library Schools," 471.
15. Ron Stewart, Vivek Narendra, and Axel Schmetzke, "Accessibility and Usability of Online Library Databases," Library Hi Tech 23, 2 (2005): 265–86.
16. Comeaux and Schmetzke, "Web Accessibility Trends in University Libraries and Library Schools." [End Page 824]
17. IMS (Instructional Management System) Global Learning Consortium, "IMS Learner Information Package [LIP] Accessibility for LIP Best Practice and Implementation Guide," accessed October 10, 2016, http://www.imsglobal.org/accessibility/acclipv1p0/imsacclip_bestv1p0.html#1522201.
18. Emilie Schmeidler and Corinne Kirchner, "Adding Audio Description: Does It Make a Difference?" Journal of Visual Impairment & Blindness 95, 4 (2001): 197–212.
19. Harry Hochheiser and Jonathan Lazar, "Revisiting Breadth vs. Depth in Menu Structures for Blind Users of Screen Readers," Interacting with Computers 22, 5 (2010): 389–98, doi:10.1016/j.intcom.2010.02.003.
20. Raja S. Kushalnagar, Walter S. Lasecki, and Jeffrey P. Bigham, "Captions versus Transcripts for Online Video Content," in Proceedings of the 10th International Cross-Disciplinary Conference on Web Accessibility (New York: Association for Computing Machinery [ACM], 2013), 32:1–32:4, doi:10.1145/2461121.2461142.
25. Billy Chasen, "Chilly—The Perfect YouTube Video Length," accessed August 14, 2014, http://billychasen.tumblr.com/post/26147024789/the-perfect-youtube-video-length.
26. Lilly and Van Fleet, "Wired but Not Connected."
27. Carlson, "Higher Ed Accessibility Lawsuits, Complaints, and Settlements."
28. David A. Bradbard, Cara Peters, and Yoana Caneva, "Web Accessibility Policies at Land-Grant Universities," in "Special Issue on Web 2.0," Internet and Higher Education 13, 4 (2010): 258–66, doi:10.1016/j.iheduc.2010.05.007. [End Page 825]