Against AI: Critical Refusal in the Library
The field of library and information science (LIS) has seemingly embraced the presence of artificial intelligence (AI) within libraries and archives, rather than taking an overtly critical stance. For a discipline founded on providing information and propagating literacy, supporting the use of generative AI within LIS institutions is arguably contradictory. LIS workers are encouraged to respond neutrally or even positively to generative AI as if it were an objective tool to use; I argue that they should, in turn, be critical of the technology and empowered in their choice to refuse generative AI within their employing institutions. Through a review of the relevant literature and contemporary discourse on generative AI, this paper critiques the notion of "AI literacy," discusses critical practice in LIS, and surveys the ethics of and issues involved with integrating generative AI within libraries and archives, including its implicit and explicit reinforcement of racism, hate speech, and prejudicial caricaturization; data collection practices and corporations' lack of regard for user privacy; and its excessive and well-documented environmental impact. I contend that LIS workers should be concerned with materially supporting values of antiracism, sustainability, the public good, intellectual freedom, and privacy, which includes an antagonism toward adopting technologies without critical comprehension. With the understanding that workers have varied levels of control within their institutions, I make the case that in terms of generative AI, a politics of refusal is necessary. Refusing generative AI demonstrates our resolve as critical practitioners, workers, and individuals; disengages from the racism and prejudice already embedded in LIS institutions; protects the creativity and labor of workers and the privacy of community members; and precludes the possibility of increasing carbon emissions and electronic waste.
Keywords: artificial intelligence, generative AI, critical librarianship, critical AI studies, climate change, AI literacy, professionalism, technology
A real engagement with the fact that people and the planet are dying ought to enable us to resist the temptation of abstractions, to speak clearly about our goals and strategies. But this in itself is a political question: we have to ask who benefits from the abstractions and rhetorical moves of our professional discourse.
Introduction
In the United States, the provision and production of information occur through a few particular avenues, publicly funded to varying degrees: libraries, archives, museums, and the internet. While cultural heritage institutions are funded mainly through property taxes, municipal expenditures, and outside donations, the facilitation of information storage and retrieval has been largely privatized, as corporate-controlled databases are the primary mechanism of access to editorialized, pertinent, and peer-reviewed information. Regardless of whether they are public, academic, or special and research institutions, libraries, archives, and museums are forced to purchase—rarely own—access to information through contracted agreements with vendors and publishers, and they are subject to rising prices and to stipulations that data be collected on certain aspects of patron usage. Such vendors are not typically publicly owned and can be bought by private equity firms (OverDrive 2020). The work of maintaining physical collections must be balanced with desires for digitization, controlled digital lending, and the building out of digital collections onto expensive servers. Balancing patrons' need to access information with budget restrictions and the need to keep up with new technologies complicates our principles and the privacy of community members, as well as forces us to contend with the privatization of information. This precarious balancing act has set the stage for library and information science's seeming embrace of generative artificial intelligence (AI).
This article makes the case that, first, this embrace of generative AI, reticent or otherwise, is dissonant with the field's institutional ideals of supporting the public good, wishing to provide access to information that has integrity, and abiding by sustainable practices whenever possible. Second, what is unfortunately not dissonant is the field's quick rationalization that technological solutions are ethical simply because they illusively meet the immediate needs of staff and community members (Glassman 2017). I argue that this rationalization happens because library and information science (LIS) practitioners consider technology, their labor, and its interaction to be neutral, and in so doing separate themselves from generative AI's material conditions (Morrone 2018). Third, the utilization of generative AI shifts the responsibility for facilitating ethical labor practices in LIS, operating to some degree for the public good, onto privatized technological solutions that are constantly changing and fetishized (Glassman 2017; McQuillan 2022). Technology cannot fully substitute for the labor of a worker, no matter how much or how quickly we want our field-wide problems of precarity and burnout to be solved (Glassman 2017; Haider et al. 2022). Further, the technology of many public institutions in the United States generally lags a decade behind private industry capacity (Harris 2021; Select Committee on Artificial Intelligence 2023). The reality is that our work has always intersected with technology, and this intersection has material, social, and cultural impacts.
While it is important that LIS workers are able to communicate with community members about technologies that are relevant to them, should the field itself utilize and uplift generative AI as a solution to present and future problems? Is this willingness to integrate generative AI antithetical to our principles of organizing, managing, and providing access to information with integrity? In seeking to bridge the digital divide and save ourselves time, do we sacrifice our principles in favor of acquiescing to the latest trends in technology? Should we not consider how we, as human beings tasked with stewarding and cultivating spheres of knowledge and access to information, can accommodate what best meets the needs of staff, patrons, and community members with integrity? With these questions in mind, I encourage an active refusal of generative AI and, in refusing, a consideration of what can be produced and facilitated otherwise. Readers should feel emboldened by the value of their labor, the histories of cultural heritage that are integral to our work, and the critical frameworks that guide the future of LIS.
In this article, I consider the critical issues surrounding the adoption of generative AI within cultural heritage institutions to make the case for refusing this type of technology. I support this discussion by reviewing adjacent scholarly literature from media studies, communication, and technology journalism. Operating under the presumption that the comprehension of information is subjective, I think through the importance of a critical perspective when interfacing with technology in the library or archive (Glassman 2017; Lloyd 2005, 2007; Morrone 2018; Seale 2016). Such a perspective requires (a) imagining all possibilities within our field and the societies we belong to, including our disposition as workers in LIS, as originating from tangible infrastructural conditions derived from an institutionalized understanding that society is organized hegemonically, (b) accepting that this sociocultural organization was founded and continues to be propagated with white supremacy as its guiding framework, and (c) acknowledging the causes and effects of the society and culture we live with, through, and despite (Chiu et al. 2021; Glassman 2017; Leung and López-McKnight 2021; Lloyd 2007; McQuillan 2022; Morrone 2018; Schlesselman-Tarango 2016). As a white graduate student and public library worker, recognizing the role that people who look like me have played in both the construction of technology and the work of librarians and archivists is a guiding principle of this discussion.
Background
Our work in libraries and archives produces, reifies, and provides access to history, culture, and information. Technology is the means by which we, as well as our community members, manage and interact with cultural objects, historical artifacts, and pieces of media. This is a foundational dynamic of the professionalized fields of library science and archival practice; manuscripts themselves are technology (Coyle 2016; Emerson 2014; Innis 1950). The systems that organized this information, such as card catalogs, became models for the organization of databases (Coyle 2016; Ercegovac 1998).
With the invention of the internet, cataloging itself shifted in its codification and organization of materials to networked environments; this change in bibliographic standards occurred alongside mounting pressure from administrators to catalog more efficiently and at lower costs (Ercegovac 1998; Hoffman 2012). The internet connected academic institutions and allowed for easier communication between librarians and archivists, enabling more efficient reference services than had previously been possible (Coyle 2016).
It is still recent history that information access was restricted to the physical spaces of libraries and archives and that nonwhite individuals, as well as individuals with disabilities, were barred from entering such spaces (Cooke et al. 2022; Knott 2015; Pionke 2017; Wiegand 2017). The history of providing information through reference services in LIS is grounded in civilizing the uneducated and the impoverished (Knott 2015; Pawley 1998; Schlesselman-Tarango 2016; Wiegand 2017). It seems that there is a prevailing notion concerning technology in LIS that data, software applications, and computational information are neutral in their orientation and, further, that it is acceptable to affix technological solutions to social problems, rather than make space for social solutions (Morrone 2018). Much as the librarian and archivist are themselves not neutral in their work, technology is determined by the social, cultural, and material conditions from which it is fabricated, produced, and designed (Chiu et al. 2021; Cooke and Kitzie 2021; Cooke et al. 2022; Ettarh 2018; Faulkner 2001; Fernandez 2023; Gibson et al. 2017; Glassman 2017; McQuillan 2022; Monahan 2009; Morrone 2018). A demonstrative example of these relations is the practice of utilizing un- and underpaid prison labor to digitize and input metadata for yearbook materials that are uploaded to genealogical resources such as Ancestry.com (Howard 2023). This digitization process is not made transparent, exploits incarcerated individuals by not paying them adequately (or at all), and does not provide the means to be hired by cultural heritage institutions after incarceration, even as these programs are advertised to those incarcerated as job training (Howard 2023). The obfuscation of how technology is truly utilized and of its infrastructural politics hurts all of us in a world veering toward an unrelenting reliance on technology. This is especially so for those who do not have the privilege to avoid incarceration; maintain consistent employment, housing, and mental and physical health; and sustain access to computers, cell phones, and other information and communications technologies (Austin 2019, 2020).
As painter and technology writer Zhanpei Fang has articulated, "The spectre of 'artificial intelligence' is a reification—a set of social relations in the false appearance of concrete form; something made by us which has been cast as something outside of us" (2024). Generative AI is a false promise, marketed heavily as a solution to creative, curatorial, and communicative issues we are more than intelligent enough to solve (McQuillan 2022; Mumford 1952; Newfield 2023). It is thus distressing, albeit unsurprising, to witness so many LIS professionals accept the presence of generative AI. "AI" is effectively a catchall marketing term that signifies an abstract research and development trend in the American technology industry; it includes an array of technologies, algorithms, and programmatic applications that automate actions meant to fill in gaps in human intelligence and labor (Bender et al. 2021; McQuillan 2022; Nagy and Neff 2024). This article is concerned most with the generative type of AI, rather than with machine learning applications more recognizable as closed-system algorithmic scripts that aid in the automation of pattern recognition and information retrieval. In the context of LIS, these machine learning applications have gained relevance through their ability to make text searchable, as with optical character recognition, and to transcribe speech to text.
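To make the distinction concrete, below is a minimal sketch of the kind of closed-system machine learning described above: optical character recognition (OCR) that extracts searchable text from a digitized page. This is an illustrative sketch, not a recommendation of any particular product; it assumes the pytesseract library with a local Tesseract installation, and the filename is hypothetical.

```python
# A minimal sketch of closed-system machine learning in an LIS context:
# OCR via the pytesseract library. Assumes Tesseract is installed
# locally; "scan.png" is a hypothetical filename standing in for a
# digitized page from a collection.
import pytesseract
from PIL import Image

# The model recognizes the characters already present in the image; it
# does not fabricate new content, which is what distinguishes this kind
# of tool from generative AI.
text = pytesseract.image_to_string(Image.open("scan.png"))
print(text)  # extracted text, ready for indexing and search
```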
Generative AI, more specifically, includes applications that produce fabricated images, video, and audio from databases of files, documents, and computational information that is scraped from the web or purchased by the large corporations developing these technologies (Lavigne 2023). Whether this occurs legally or illegally is for another author to discern. LIS workers have begun to publish literature reviews, case studies, zines, and scholarly interrogations of the technology (Fox 2024; Fruehauf et al. 2024; Halvorson 2024; Hersh 2024; Hosseini and Holmes 2023; Kavak and Yilmaz 2024; Oddone et al. 2023; Setele 2024; Vogus 2023). While I do not doubt that there are other critiques like mine to be published, based on the amount of hype about generative AI in LIS—from various pop-up conferences, newsletters, educational presentations, and instructional content—it seems as though professional discourse is consumed with demonstrating potential applications, rather than critique.
As this type of AI has exploded in popularity, the term generative has come to have a multifaceted meaning; does it make magic happen by generating something brand new (Nagy and Neff 2024)? Does it generate an intense amount of energy? In actuality, it comes from generative adversarial network (GAN), the name of a type of system that develops a "new" version of a piece of data by checking it against batches of training data. To me, the term generative relates to the systematized reconfiguration and coagulation of data that is represented as a "novel" digital object to suit some conceptual purpose (Hall 1997). The generative in generative AI insinuates the creation of something out of nothing, when it really is just a bunch of somethings haphazardly shaped together at randomized intervals (Hicks et al. 2024; Nagy and Neff 2024).
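For readers who want to see that adversarial "checking" in miniature, the sketch below trains a toy GAN in PyTorch: a generator learns to mimic a simple Gaussian distribution only because a discriminator keeps scoring its output against real samples. Everything here—the network sizes, the target distribution, the step count—is an illustrative assumption, not a description of any production system.

```python
# A toy GAN, assuming PyTorch is available. The generator learns to
# mimic samples drawn from a Gaussian (mean 5, std 2); the
# discriminator learns to tell real samples from fakes.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps random noise to a "new" data point.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how "real" a data point looks (0 to 1).
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(32, 1) * 2.0 + 5.0   # "real" training data
    fake = G(torch.randn(32, 8))             # generator's attempt

    # Train the discriminator to separate real from fake.
    opt_d.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(32, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()

# The generator's output is recombined training signal, not creation
# from nothing: its samples tend toward the statistics it was shown.
print(G(torch.randn(1000, 8)).mean().item())  # tends toward ~5.0
```

Note that the generator never sees the real data directly; it only learns to satisfy the discriminator's checks against it, which is the recombination-without-origin described above.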
Expert researchers, academic faculty, and technology workers continuously blow the whistle on the harms that generative AI enacts (Bender et al. 2021; Buolamwini and Gebru 2018; Burrell 2024; Gebru and Torres 2024; Hao 2020, 2024; Newfield 2023). For example, large language models (LLMs)—generative text-based chatbots that coagulate responses to input queries—use material sourced through the scraping of web data and the purchase of databases (Bender et al. 2021; Lavigne 2023). LLMs and other forms of generative AI are trained through algorithms, scripts of code that check new information against acquired data; the checking itself depends on an entire economy of underpaid workers in Kenya and India (Chandran et al. 2023; Perrigo 2023). After these scripts run, training occurs through a combination of individuals accepting or denying relevance or congruence and algorithms that cross-reference data points—text, images, and sounds—that were previously labeled with particular degrees of importance or relevance (Chandran et al. 2023; Hutchens 2023; Perrigo 2023). As the amount of data available to train against increases, more patterns can be recognized, and this recognition provides the illusion of context.
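That "illusion of context" can be demonstrated with a deliberately tiny analogy. The bigram model below is an assumption-laden toy, nowhere near how neural LLMs are actually implemented, but the underlying move is the same: predict each next word purely from co-occurrence patterns in prior data, with nothing created from nothing.

```python
# A toy bigram "language model": next-word prediction from raw
# co-occurrence counts. The corpus and seed word are arbitrary
# illustrative choices.
import random
from collections import defaultdict

corpus = ("the library provides access to information "
          "the archive provides access to history").split()

# Count which word follows which in the training text.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

# "Generate" text by repeatedly sampling a statistically plausible
# next word; output can look fluent while being pure recombination.
random.seed(1)
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(following.get(word, corpus))
    output.append(word)
print(" ".join(output))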
The implementation of generative AI in social contexts has had demonstrably racist effects. The acquisition of data, as well as the process of cross-referencing, does not explicitly filter for racism, hate speech, and prejudicial caricaturization, even as a tool such as ChatGPT restricts user input with these same characteristics (Bender et al. 2021; Hao 2020; Keyes and Austin 2022; Khlaaf et al. 2024; Newfield 2023). Researchers at the Massachusetts Institute of Technology and Penn State found that LLMs that can process videos, such as ChatGPT 4, Gemini 1.0, and Claude 3 Sonnet, made incongruent decisions about whether to recommend calling the police according to the "subjects' skin-tone and gender, and the characteristics of the neighborhoods where videos were recorded" in Amazon Ring surveillance videos (Jain et al. 2024, 1; Zewe 2024). GANs are structured to fill in the gaps of identification and discernment, which creates issues when they are instrumentalized to determine who can access points of entry; it has been shown that algorithms in these applications frequently cannot comprehend skin tones that are not white (Buolamwini and Gebru 2018; Drage and Frabetti 2022; Jain et al. 2024). Further, critical media studies scholars argue that the technology industry's impetus to acquire exponential amounts of data to operate GANs and LLMs at massive scale is driven by a desire to produce technology beyond human capacity, toward a utopia where supposedly no inequality—except between human and machine—persists; such a desire demonstrates a Foucauldian drive to categorize markers of race, gender, and how efficient one can be under technocratic capitalism in order to serve the interests of power (Buolamwini and Gebru 2018; Burrell 2024; Gebru and Torres 2024; Jain et al. 2024; Martinez 2024; McQuillan 2022; Monahan 2009). The more data points that are created, the more information that can be sold (Lamdan 2022; Lavigne 2023; Madianou 2019).
The corporations that develop generative AI technologies show little regard for protecting the privacy of users. OpenAI rescinded its company-wide ban on engaging in contracts with law enforcement and the military and is regularly in the news for refusing to make its operations and data collection practices transparent, ignoring actors' legitimate requests not to use their likenesses, and scraping books without the permission of publishers and authors (Biddle 2024; Brittain 2023; Davis 2024; Lavigne 2023). In 2024, Google began including "AI overviews" in search results that produce incorrect information; these AI overviews use language data farmed through Google's LLM, Gemini (Williams 2024). Researchers at Google found that generative AI, in its rearranging of information without accountability, is the leading cause of disinformation (Maiberg 2024). Microsoft and Google are now unable to meet their initially set climate goals due to the development of their generative AI tools (Hao 2024; Kerr 2024). One must wonder what these so-called innovations are for, if not for increasing the profits of corporations that have oligopolistic control of nearly all aspects of technological infrastructure and for cementing partnerships with law enforcement that ensure the smooth transfer of data depicting your likeness, location, and biometric description (Baykurt and Lyamuya 2023; Benjamin 2020; Egbert 2019; Lamdan 2022; Madianou 2019; McQuillan 2022; Minocher and Randall 2020). This is data of, about, and for you that was never considered to be your own property in the first place, all owned by a handful of corporations that determine the possibilities and limitations of how users—us—move through the world, including how and where they work and what kind of housing they can access (Benjamin 2020; Hearn 2022; Lamdan 2022; Madianou 2019; Nopper 2019; Scannell 2019). As librarians and archivists, I believe we have the responsibility to inform patrons, students, and community members about what is materially involved in engaging with such technology.
"AI Literacy"
In a feature for Public Libraries magazine, a colleague and I discuss how public library workers in particular may want to navigate this multivariate responsibility (Ong and Slater 2024). While writing this piece, we felt a strong desire to be more critical of generative AI. At the same time, we understood that when publishing in a professional association magazine—especially in an issue that included writing that speaks positively about the "future" of generative AI and instructs practitioners how to use it—criticizing AI could be met with pushback.
We had initially titled our piece "Taking a Critical Approach to AI"; our final edits were returned to us with the title "AI Literacy in Libraries: Inspiring, Engaging, Empowering." While there were no further edits to our actual writing, we responded to the editor curious as to why the term "AI literacy" was specifically chosen. Because we felt that our discussion mainly dissected sociocultural issues and spotlighted experts, we instead suggested the term "AI comprehension" or "synthetic literacy." While we did include a small section on an experience I had working with a patron who believed ChatGPT would do all of his work for him, we felt that "AI literacy" as a term is precarious and inaccurate; it situates the user as one learning from generative AI in order to grow knowledge and demonstrate comprehension. Generative AI does not build skill in the same way that a human being develops the ability to read or use a computer through concentrated practice. Particularly when LLMs regurgitate random, albeit aesthetically contextual, information, how is literacy possible (Bender et al. 2021; Hicks et al. 2024)?
Arguments that are made for "AI literacy," rather than media or algorithmic literacy, attempt to shift the focus of comprehension from reading text, images, or video that are cultural objects encoded with meaning to understanding the functionality, code, and user interface of algorithms (Cotter and Reisdorf 2020; Hall 1973; Reisdorf and Blank 2021; Ridley and Pawlick-Potts 2021; Silva et al. 2022). Reframing "AI literacy" from literacy to comprehension takes an interdisciplinary approach that privileges critical engagement with media studies and technological infrastructures. I think of literacy as a practice of language comprehension that involves an understanding of the meaning of a text or work, such that it demonstrates competence through communication, "accomplishes a range of purposes," and "attain[s] personal benefits in ways that are shaped by cultural contexts and language structures" (International Literacy Association, n.d.). Stuart Hall's (1973) encoding/decoding model, while more aligned with media literacy, provides a framework for grasping how the social and institutional relations involved in the production of media are the avenue through which meaning is communicated. Generative AI can assist in accomplishing a task only in our contextual understanding of the material it is rearranging or making apparent; literacy happens when we read, decode, interpret, or communicate such material's meaning, importance, or cultural relevance (Hall 1973).
Knowing how generative AI works is more aligned with competency than with literacy; understanding its sociocultural position and relevance is more a matter of grasping the discourse that surrounds the use of the tool. Applying the concept of literacy to AI misunderstands the impact, capacity, and material nature of what these tools are and what their creators intend them to be. Algorithmic literacy could be more inclusive of the types of artificially intelligent tools that manipulate and appear to generate media powered by algorithms, the origins of which range in format—text, image, or sound—and are understood to exist as synthetic digital objects (Cotter and Reisdorf 2020; Ridley and Pawlick-Potts 2021; Trace and Hodges 2023). Not all automated software tools and algorithmic scripts placed under the umbrella of AI are actually generative; some of these cases may benefit from recategorization as automation powered by closed-system machine learning, as discussed previously.
And if we come to be literate in terms of generative AI tools, what use is this literacy? Those at the helm of the companies that develop generative AI tools do not themselves fully understand how their tools work, are not interested in repairing the harm they cause, and are averse to regulation (Adarlo 2024; Burrell 2024; Cotter and Reisdorf 2020; Gebru and Torres 2024; Lavigne 2023; McQuillan 2022; Patel 2024; Phan et al. 2022). "AI literacy" and the hype around it push the idea that this "innovative" technology is as structurally present as fiber-optic cables, and that we had better adjust our sails to the winds of actively harmful corporations that exploit workers and refuse to tell the public how the code of these tools is constructed (Chandran et al. 2023; McQuillan 2022; Miller 2023; Perrigo 2023; Phan et al. 2022).
As LIS workers, we have a responsibility to engage critically with technology. Vendors and corporations that pilot generative AI claim that these tools are objective in their training; this claim, in my view, exists to ensure that as many people and organizations as possible adopt the tools at every level of work. Anointing generative AI as the solution to problems that workers cannot solve without additional compensation or training should not become the rationale for replacing their entire labor capacity (Amram et al. 2023; DeZelar-Tiedman 2023; Newfield 2023). Some may feel that this claim is, itself, alienating; I implore readers to consider what is at stake beyond themselves when engaging with such tools and to situate them as extractive, rather than generative.
The work of LIS is strained by the emotional and affective labor required to serve community members, particularly with regard to technology assistance, the corporatization of information retrieval, and the lack of fiduciary support from government bodies, from municipal to state to federal, among other infrastructural constraints (Glassman 2017; Popowich 2019; Rhodes et al. 2023). Whatever balance is struck between the stringency of labor, our orientation toward technology, and keeping our heads above water is work that we carry out as a labor force and community of practitioners.
The sheer popularity of tools like ChatGPT makes it difficult to escape their relevance among the communities that we serve. Regardless of whether patrons wish to learn how to navigate ChatGPT for their personal use, I argue that we should not accept its popularity at face value. Its cultural prevalence does not correlate with the ethical standards of information services created and supported by LIS practitioners.
Critical Practice: Integrity, Professionalism, and Sustainability
Based on its exploitation of workers, the harms it reinforces toward individuals and the environment, its precarious infrastructure, and the speed at which the technology is iterating—as well as the principles of major professional associations such as the American Library Association and the Society of American Archivists—it may benefit workers, as well as the communities we serve, to actively refuse the integration of generative AI into library and archives operations through critical engagement. Refusals, as anthropologist Audra Simpson puts it, "speak volumes, because they tell us when to stop" (2007, 78). As LIS workers face extreme attacks, including but not limited to challenges to materials, the suppression of free speech, threats of incarceration, and the elimination of funding, we should act with integrity to reinforce a guiding principle of the field: access to information that has integrity (Cooke et al. 2022; Kingkade 2023; Knox 2015, 2020; Kohlburn 2023; Montague 2024; Oltmann et al. 2021; Shumaker 2022).
Critical librarianship and critical archives studies as focuses of practice have gained visibility in recent years. Workers critically engage with and deconstruct the demographic majority of LIS as historically and presently white, cis, middle- and upper-class women who have only recently begun separating their work as agents of the state or institution from their own, as well as their organizations', ties to white supremacy and socioeconomic stability (Brook et al. 2015; de Jesus 2014; Gibson et al. 2017; Glassman 2017; Hathcock 2015; Leung and López-McKnight 2021; Pawley 1998; Popowich 2019; Punzalan and Caswell 2016; Schlesselman-Tarango 2016; Shumaker 2022). The work of scholars, librarians, and archivists such as Fobazi Ettarh, Mario A. Ramirez, Tonia Sutherland, Sofia Y. Leung, Jorge López-McKnight, Ricardo L. Punzalan, Gina Schlesselman-Tarango, Stacie Williams, and many others, as well as the election of former American Library Association (ALA) President Emily Drabinski, a critical scholar, encourages the presence of criticality, decolonization, and antiracist scholarship and practice in the field.
According to the US Bureau of Labor Statistics (2024, 3), library workers are likely to be white women; while whiteness should not box workers out from engaging in antiracism, combating transphobia and homophobia, and having an awareness of political economy, it is no secret that whiteness and other privileged affinities operate in LIS to uphold racist institutional practices founded in the subjugation of individuals and communities not deemed civilized, educated, or able (Brook et al. 2015; Cooke and Kitzie 2021; de Jesus 2014; Ettarh 2018; Fry and Austin 2021; Gresham 2024; Hathcock 2015; Leung and López-McKnight 2021; Pawley 1998; Pionke 2017; Rhodes et al. 2023). Perfectionism, paternalism, power hoarding, objectivity, the worship of the written word, and especially the notion of expedient, ever-expanding progress are prevalent in the field and are also characteristics of white supremacy culture (Okun 2021). As explicated by Fobazi Ettarh (2018), vocational awe names the field's present and historical sense that it is above critique and that stewarding collections, presenting programming, and providing resources and databases are all-consuming, sacred acts analogous to the work of priests. These traditions are passed down, reinforced, and contended with at every stage of work in cultural heritage institutions.
The Master of Library and Information Science degree engages students in a survey of the various aspects of the profession, such as exposure to collection development, cataloging, and reference services, and prepares future practitioners to be adept at developments in information literacy, integrated library systems, and data management. Regardless of the extent of education one receives, affinities for and skill with technology vary; racism and prejudice are possible and ever-present; and the notion of professionalism itself is an epistemic trap entrenched in white supremacy (Brook et al. 2015; Cooke and Kitzie 2021; Drabinski 2016; Ettarh 2018; Hathcock 2015; Morales and Williams 2021; Nault 2023; Patin et al. 2021; Sullivan 2016). For LIS workers, graduate study is meant to inform their development of collections, encourage interaction with the field on an academic level, propagate approaches to engaging with community members, and help them discern the operation of institutions, among other aspects that happen to fall under the episteme of professionalization (Cooke and Kitzie 2021; Drabinski 2016). The LIS curriculum is specifically meant to expedite the very expensive time spent in graduate school; there is unfortunately little time and money available for students to explore other relevant intellectual horizons. Beyond the granting of access to university libraries, more immediate connections to academic faculty for networking, and a cohort of other students, library school is for those who can afford to spend the time and money to become upwardly mobile within the discipline.
The American Library Association's accreditation of LIS master's programs is meant to provide practitioners a sense of stability. The ALA and the Society of American Archivists, as the major American professional organizations for LIS workers, have a responsibility to cultivate policy that speaks to the expressed needs of organizational members and practitioners. The idealistic need for expedient progress should not usurp our supposed desire to engage in sustainable practices (Glassman 2017). My argument for refusing generative AI is grounded in taking seriously one specific implication: its unyielding impact on climate change.
The Center for the Future of Libraries, an ALA advisory group, defines artificial intelligence as a trend in technology that "seeks to create 'intelligent' machines that work and react more like humans" (ALA, n.d.). There is no definition of "intelligent" or "intelligence" here; the web page provides more of a survey of resources for users to access. There is also no mention of how generative AI may affect workers in libraries and archives; the focus seems to be more on how we help community members than on how we as workers can process these changes. Major organizations and practitioner-supported symposia have sought to fill in these gaps, for example, the International Federation of Library Associations and Institutions (2020) and Libraries 2.0 (2024). Institutions have relied on the limited bandwidth of workers to produce conference presentations, articles in association magazines, and internal programming as the mechanism for education among workers concerning generative AI. While this is work that we do together, disparities in resources privilege those who have the time, ability, and money to produce educational content and thus reinforce particular narratives about the supposed potential of generative AI.
The ALA defines the core value of sustainability as "making choices that are good for the environment, make sense economically, and treat everyone equitably. Sustainable choices preserve physical and digital resources and keep services useful now and into the future" (2024). Choices that are good for the environment include not using generative AI, given that the processing power required by generative AI tools worsens climate change by increasing carbon emissions and creating more electronic waste (Hao 2024; Mazzucato 2024; Pendergrass et al. 2019; Ren and Wierman 2024; Strubell et al. 2020). Massive data centers are built—and by all accounts will continue to be built—to maintain the servers that power generative AI and to store the datasets required to train GANs. Major tech companies are competing for both land and subsea ownership to extract energy to support the continuous flow of data (Ren and Wierman 2024; Sutherland and Bopp 2023). This maintenance requires extensive cooling power across the servers' life cycles (Hogan 2015; Strubell et al. 2020). Researchers at the University of California, Riverside, found that the "training [of] GPT-3 in Microsoft's state-of-the-art U.S. data centers can directly evaporate 700,000 liters of clean freshwater" (Li et al. 2023). The preservation of digital resources itself, even without the energy consumption of generative AI, involves the care of and attention to servers and data storage, which do and will continue to have an impact on the environment (Pendergrass et al. 2019). Our work in producing more digital content, including the publishing of academic journal articles, also impacts energy consumption. Even though these large data centers are unlikely to be under our stewardship, as workers who are guided by the principle of sustainability, it seems illogical to engage with technologies that exponentially increase the use and evaporation of massive amounts of energy and water if we can avoid it.
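To make that water figure tangible, here is a back-of-envelope calculation. Only the 700,000-liter estimate comes from the cited source (Li et al. 2023); the comparison constants are common reference values assumed for illustration.

```python
# Back-of-envelope scale check on the water figure cited above.
# Only TRAINING_WATER_L is from Li et al. (2023); the other constants
# are assumed reference values for comparison.
TRAINING_WATER_L = 700_000     # freshwater evaporated training GPT-3
OLYMPIC_POOL_L = 2_500_000     # ~2,500 cubic meters, a standard pool
DRINKING_L_PER_DAY = 2         # rough daily drinking water per person

print(f"{TRAINING_WATER_L / OLYMPIC_POOL_L:.0%} of an Olympic pool")
print(f"a day's drinking water for "
      f"{TRAINING_WATER_L // DRINKING_L_PER_DAY:,} people")
```

Under these assumptions, a single training run evaporates roughly 28 percent of an Olympic swimming pool, or a day's drinking water for some 350,000 people, before a single query is ever served.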
Further, ALA's definition of sustainability includes what is economically sustainable; one can assume this entails that we remain conservative in our spending, when we hardly have any spending power to begin with. If we rely on generative AI to produce content for us, it communicates to fiduciary bodies that our labor is not worth funding and can therefore be outsourced. In this dynamic, tasks that aid in job training are minimized and at worst eliminated. While an argument could be made that a similar situation occurred when catalogs went digital—among other examples of technology altering LIS work—I would argue that there is a difference between the unlocalizing of records supported by association-wide strategy and exponential attempts to reimagine history, outsource knowledge, and extract energy by private interests (Glassman 2017).
The Society of American Archivists' "Core Values of Archivists" statement takes a more deliberate approach to sustainability: "As stewards of the historical record, archivists should be mindful of the ways in which their professional work can function both as harmful force and reparative resource" (2020). After discovering that a submission to The American Archivist's Reviews Portal contained evidence of ChatGPT usage, the journal's Editorial Board published a statement on the use of generative AI in the journal, defining the norms and responsibilities it expects submissions to abide by ("American Archivist Generative AI Statement" 2024). The Editorial Board will accept the use of AI only for checking grammar and spelling; any submission that uses generative AI to produce fabricated or false content or citations will be summarily rejected, and authors must disclose whether they used AI at all ("American Archivist Generative AI Statement" 2024). The Editorial Board designated a date for further review of this policy, which points to a productive awareness of how quickly the technology is changing and an understanding of the stakes at hand regarding generative AI.
Beyond what American national associations indicate are best practices, as LIS workers we should heed journalists and researchers who do not take the tech industry at face value. Critical librarianship and critical archives studies provide a guiding framework for empowering ourselves to refuse generative AI in our workplaces. Whether or not one chooses to take a critical stance, the planet is warming, and industrial sources continue to exacerbate the change in climate by increasing carbon emissions and privileging innovation over the safety of individuals, communities, and knowledge, as well as the viability of the planet.
Conclusion
There is a disconnect between LIS and criticality regarding generative AI. I argue that a serious engagement with our institutional principles leads to the conclusion that refusing generative AI in our work is necessary. Experts in computer science, communication, media studies, and sociology have articulated, and continue to articulate, why generative AI is a trend, actively harmful to the integrity of information access, and a disaster for a continuously warming planet. Workers in cultural heritage institutions face tightening budgets and the ever-increasing precarity of labor. This article does not seek to blame workers for turning to generative AI in times of stringent need; rather, it understands that the harms are facilitated by power- and money-hungry corporations that control technological infrastructure.
This article deconstructed the term "AI literacy." It encouraged a critical orientation toward engaging with technology, particularly through a framework that does not conveniently forget the LIS tradition of neutrality in favor of cutting corners and fetishizing innovation. I argue that we should refuse to use generative AI because it is a "destructive force" that is infrastructurally racist, is marketed as a solution to increasing socioeconomic disparity while gleefully scraping all possible traces of digital and digitized labor, does not magically produce information with integrity without requiring further human scrutiny, and creates an unnecessary excess of carbon emissions that is detrimental to our ecosystem (Fox 2024; Ren and Wierman 2024). Experts warn that the standard models of generative AI will collapse—only so much new information can be scraped, trained, and synthesized (Orf 2023). Instead of accepting and integrating generative AI, we should take inspiration from Julia Glassman's articulation that "public commitment to prioritizing reflection and meaningful practices over chains of impressive-sounding achievements could serve to open up alternative avenues for professional development and recognition" (2017), such as the methodological developments in the emerging fields of critical AI studies and critical data center studies (Edwards et al. 2024; Kempt and Heilinger 2024; Offert and Dhaliwal 2024). We ought to consider that active refusal of AI is a specific choice, one made to undermine attempts at technocratic control by corporations, militaries, and powerful institutions (Gebru and Torres 2024; McQuillan 2022; Mueller 2021; Ongweso 2024; Simpson 2007). I believe that generative AI is a fleeting trend that ultimately cannot and will not replace the labor of workers in LIS and that we as a field of practice should take a critical view of the technologies we use every day, for the benefit and safety of ourselves and our communities.
Kailyn "Kay" Slater is a public library worker living in Chicago. Slater has an MA in communication from the University of Illinois Chicago and received an MSLIS from the University of Illinois Urbana-Champaign in 2024.