Beyond Information Literacy: Exploring AI's Impact on Labor in Academic Libraries

Abstract

This semi-autobiographical essay explores the impact of artificial intelligence (AI) on academic libraries and, in particular, on information literacy instruction. In exploring the ethical implications of AI and the importance of (re)imagining how AI will affect labor in academic libraries, the author utilizes sociological and historical approaches. The author also reflects on the parallels between past and current technological disruptions and transformations.

Keywords

AI literacy, AI ethics, artificial intelligence, Black Feminism, Marxism, labor, technology, academic libraries, responsible operations, adult learning principles

Introduction

In July 2023, I wrote an essay about Betty J. Blackman, "Betty Joan Blackman: Embracing Life Outside the Safe Zone," which was published on Loyola Marymount University's William H. Hannon Library's newsletter blog (Murph 2023). Blackman served as the first secular, Black, woman library director of the Von der Ahe Library (now the William H. Hannon Library) from 1979 to 1986. From 1986 to 1999, Blackman served as dean of the university library at California State University, Dominguez Hills. She was the first Black dean of a university library in the entire California State University system. My essay centers on my family connection with Blackman and her contributions to a library where we have both worked and to librarianship, especially for Black, Indigenous, and people of color information workers. [End Page 609]

During my research in our library's archives, I came across committee minutes dated January 10, 1986, for a chapter of the California Women in Higher Education, a committee Blackman served on and for which she acted as secretary. In one section, Blackman reminds the committee members about "microcomputers." A few weeks later, in minutes dated January 30, 1986, Blackman records, "Betty said she thought the committee should decide on a speaker to talk about computers on campus" (California Women in Higher Education 1986). The committee then had a brief discussion of another matter and came back to the topic of microcomputers, deciding that "Betty should ask Lee Mendel-Figuroa [Academic Computing Systems manager] to speak on impact of microcomputers in general, what staff can do to prepare for their use, what type of training they can expect, and any information that will help them become more at ease with computers in their work places" (California Women in Higher Education 1986).

Blackman was keeping the urgency of technological changes at the forefront of discussions: What are the university's and the library's roles in this new technological development? With microcomputers coming onto the scene, Blackman presented her paper "The Academic Ministry, the University Library" in 1986, in which she challenges those concerned "to ask questions about their roles and questions about the future fraught with technological and demographic changes." Blackman points out that the "effectiveness of the library requires that we become knowledgeable and active participants in addressing the critical issues in higher education" (Wilken 2006, 20). These critical issues were trends such as computers in the classrooms and their direct impact on libraries (Wilken 2006). Approximately thirty-six years later, in late 2022, ChatGPT became the newest form of technological disruption. ChatGPT is not an isolated technological event but, in fact, representative of the ongoing advancement of artificial intelligence (AI) technology. Like microcomputers in the 1980s, AI is here, evolving, and transforming our lives.

Since ChatGPT's introduction, there have been active discussions in academic libraries on generative AI's impact on areas such as information literacy and library services. Currently, there is minimal discussion of generative AI's effect on labor in academic libraries. I think about Betty Blackman and her urgency in preparing colleagues for microcomputers and how they would disrupt and transform how students would study and how people would work. This essay is a semi-autoethnographic piece using historical and sociological approaches to explore how generative AI is transforming the relationship between labor and technology in academic libraries; it concludes with a call for librarians to center an ethical framework in their explorations in adapting AI technology in research practices, instruction, and work related to a library's day-to-day operations. [End Page 610]

Generative AI as Disruption

AI is not new, and ChatGPT is not the first technological disruption for academic libraries. The International Federation of Library Associations and Institutions (IFLA) Artificial Intelligence SIG reminds us that "we are already rather familiar with many of its applications in auto-suggestion, spam filtering, plagiarism detection, audio transcription, text summarization and translation," and in the context of libraries, there is "Text and Data Mining (TDM), and the application of machine learning to library and archive collections in the digital humanities can be seen as AI" (2023). During the first few months after ChatGPT's introduction, I attended a handful of the online conferences and webinars that immediately sprang up around generative AI tools such as ChatGPT. During some of the conversations among attendees, I heard and read (in the comments) people making comparisons to previous disruptions such as the introduction of the internet, Google, and Wikipedia.

I know one of the reasons people make these comparisons is to calm their (or others') anxiety about generative AI by looking toward the past for guidance. I am reminded of when I was younger and not yet working in libraries, reading in the news that the introduction of the internet (and later Google) meant the end of libraries. Yet libraries pivoted and adapted accordingly. I see that up close now that I am working in libraries. I recall Betty Blackman and her encouragement that we embrace microcomputers. Similarly, Wikipedia was unnerving to people, especially instructors who had concerns about the quality of information that Wikipedia provided. Both Google and Wikipedia changed the way people accessed and searched for information. Now, Wikipedia and Google are seen as valuable tools to supplement research: Wikipedia serves as a general reference tool, while reference and instruction librarians teach students how to utilize Google Scholar for research and use features such as Google Forms for instructional activities. There are similarities in these technological disruptions. Each time, librarians pivoted and adapted.

At the same time, generative AI as a disruption is different. ChatGPT has led to a reinvigorated interest in AI and "also a re-evaluation of how it is defined and the anticipated professional implications" (IFLA Artificial Intelligence SIG 2023). Generative AI is evolving to imitate human intelligence and becoming smarter, and, as microcomputers did in the 1980s, it will continue to change our relationship with technology, affecting not only how we work but also labor itself (Okunlaya et al. 2022, 1870–71). I argue that in this moment, we do not see enough discussion of generative AI's impact on labor in libraries, especially academic libraries.

After participating in a panel in January 2022 that discussed the works of scholar and activist Angela Y. Davis, I revisited Karl Marx's Capital (1952). [End Page 611] Another panelist, a political scientist, noted that beyond taking on a political ideology, Davis studied Marxism to learn how labor is structured and its relationship with the employer and with technology ("the machine"). A specific point my faculty colleague made stood out to me. Historically, in attempts to maintain the status quo, those in power used a political scare tactic to frighten society with the ongoing "Red Scare" of Communism, Marxism, and Socialism. Today's far-right movement, specifically Trumpism and MAGA (Make America Great Again), continues with this strategy. I was reminded that beyond just being a "scary" political ideology, Marxism is a framework we can utilize to understand the intricacies in the relationship between labor, the employer, and technology. Karl Marx's analysis of the capitalist system, specifically of machines and labor, provides a framework to understand the impact AI will have on labor.

Since Marx's publication of Capital, scholars have continued to research the relationship between labor and technology. Wendling (2009) provides an updated view of how Marx's ideas on labor, alienation, and technology evolved, arguing that human beings are continuous with nature, not apart from it. Studying Marxism is complex and not an overnight endeavor, but Marx's framework captures how work held a different meaning in laborers' lived experiences before industrialization, when craftsmanship dominated (Wendling 2009). The "work experience is radically transformed by the widespread introduction of machines in production" (Wendling 2009, 62). As a result of industrialization, the human intellect "becomes increasingly promoted as the sphere of the properly human" (Wendling 2009, 62). As Wendling (2009, 62) points out, the meaning of work becomes the intellectual operations of creating and maintaining the machines. Machines are responsible for providing increased efficiency and maximum productivity. Marx argued these points in the nineteenth century, and they remain true in the twenty-first.

Davis applies Marxist analysis and intersectionality in her studies on the relationship between class, race, and capitalism in Women, Race and Class (1981). Davis argues from an intersectional point of view that Black and Brown women working in domestic labor, for example, should have been part of the discussions in both the labor and women's suffrage movements. Since the industrial revolution caused a "structural separation of the home economy from the public economy," "housework cannot be defined as an integral component of capitalist production" (Davis 1981, 234). This justified not paying women for their domestic work (including childcare and elderly care) and not positioning this work on an equal level with the work done, particularly by men, outside of the home (Davis 1981). Black and Brown women experience not only sexism but also racism and classism (i.e., white women who are "housewives" are viewed differently than Black and Brown domestic workers). In doing so, Davis adds another [End Page 612] dimension to the conversation on labor inequities to include women, especially women of color.

Davis is also an example of looking at labor, technology, and intersectionality from a Black Feminist Perspective. Lelia Marie Hampton argues that an analysis of technology such as generative AI "requires a decolonial Black feminist lens attending to race, gender, capitalism, imperialism, legacies of colonialism and so on to understand the centralization of power that allows for the extraction, exploitation, and commodification of oppressed people" (2023, 119). The core of the Black Feminist Perspective is Black women's lived experiences with all forms of oppression: racism, sexism, and classism (Schelenz 2022). For Black women, there is a consciousness that takes on all forms of oppression to achieve social justice (Schelenz 2022). Patricia Hill Collins states that Black women who are engaged in Black Feminist research and scholarship know these issues always "affect both contemporary daily life and intergenerational realities" (2022, 47). Black Feminism focuses on the structural, with emphasis on systemic dynamics rather than individual experiences (Schelenz 2022). Through these lenses, we can explore not only the impact of AI on labor in academic libraries but also academic libraries' positionality in their relationship with AI technology.

AI Literacy

As a reference and instruction librarian, I teach one-shot instruction on information literacy in both subject classes (i.e., history, sociology, screenwriting) and Rhetorical Arts classes. At Loyola Marymount University, first-year students are required to take a Rhetorical Arts class. Recently, my colleagues and I added a discussion on generative AI to our library instruction. One of the items that we discuss with students is how generative AI can be both beneficial and harmful for people and the environment. On a large screen, I show students a slide listing some of the benefits and risks of generative AI. By centering ethical implications, I use this slide to discuss the nuances of AI technology. AI technology does not fit into a nice, perfectly sized box; it is messy and has many gray areas. For libraries, AI can "enhance access to knowledge" and could transform library spaces into smarter library spaces (IFLA Artificial Intelligence SIG 2023). At the same time, what is beneficial for us and our patrons can be harmful for other people and the environment. In this case, users in the Global North benefit from generative AI on the backs of those in areas such as the Global South and Africa whose labor is exploited to develop AI tools. For example, OpenAI contracts with a San Francisco–based company, Sama, which employs workers in Kenya as data labelers to clean out violent content that the machine gathers as part of its training. They are paid less than $2.00 per hour (Perrigo 2023). Yet, here in the United States, a partnership between [End Page 613] OpenAI and Arizona State University will give the faculty, staff, and students full access to an advanced version of ChatGPT (Coffey 2024).

As Marx argues, "Thus we see that machinery, while augmenting the human material that forms the principal object of capital's exploiting power, at the same time raises the degree of exploitation" (1952, 193). From a Black Feminist Perspective, this requires a person to dig deeper and look at the multiple intersections of colonialism (specifically, digital colonialism), racism, and sexism; OpenAI's contract with Sama and its exploitation of Kenyan workers is one example. None of these forms of oppression are isolated; instead, each intersects with and informs the others.

As a reference and instruction librarian who is of Afro-Mexican descent and a cisgender female, what immediately stood out to me was Arizona State University's plans to "create AI avatars that can serve as study buddies for students. The university plans to create a personalized AI tutor with a focus on STEM topics" (Coffey 2024). I shared the article with my colleagues and highlighted this portion. AI could transform the services a reference department provides such as research consultations, chat reference, and so on. When I teach Rhetorical Arts classes, I show first-year students the various ways to get help from the library, including our chat service and meeting one-to-one with a librarian. When students use our chat service, I emphasize that they are talking to a live librarian—a librarian whose experiences, which ChatGPT does not have, can be beneficial in assisting students with their research. When I read about a university's plans for a personalized AI tutor, I thought about our work as librarians. For example, we can help students brainstorm a potential research topic and locate sources, but now an AI tutor in the form of an avatar can do the same thing.

One concern many have with generative AI is the fear of losing one's job to the machine. Librarians are no exception. If generative AI is going to be the dominant technology in library operations and spaces, what will that look like? Libraries have pivoted and adapted in response to technological disruptions before, but with generative AI, it does feel different. Since ChatGPT went mainstream in late 2022, I noticed that conversations on AI literacy for staff, as opposed to students, were minimal. This is an ethical concern when it comes to labor, specifically for staff in academic libraries. I use the term "staff" to include both librarians and support staff. First, not all academic librarians have faculty status. At my library, we do not have faculty status. We are hybrid, in that we are categorically staff but invited into faculty spaces. Second, maintaining a library's operations involves both librarians and support staff. Professional development such as training on AI tools and workshops on ethical implications and AI is important to ensure staff are not left behind. As a starting point, it is important to invest in reskilling staff.

This brings to the forefront the importance of AI literacy not just for [End Page 614] students but for staff and faculty. AI literacy can be defined as "a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace" (Wheatley and Hervieux 2022, 63). When I teach students the ethical implications of generative AI, I tell them that they are adults who will use AI in whatever form it takes, but what I can do as a librarian is teach them the ethical components of the technology so that they are aware of their own autonomy and responsibility when using AI. This includes how AI works under the hood, who benefits from it, who and what are exploited by it, its impact on the environment, and how it challenges us to consider how we can do better. In the context of labor in academic libraries, providing the necessary professional development for AI literacy for staff is important so they are, on a foundational level, informed about the technology.

In fall 2023, I conducted a workshop along with two colleagues, Susan Gardner Archambault and Shalini Ramachandran, titled "AI Explorers: Discovering the Power and Pitfalls of ChatGPT." Our workshop contained three parts: learning how ChatGPT generates output, ethical implications of AI, and prompt engineering using the CREATE framework. Although outreach for the workshop targeted students, we noticed that only a few students and faculty attended, and most attendees were staff from various units on campus, including some colleagues from Information Technology Services (ITS). We repeated the workshop in spring 2024 and had similar attendance, with most attendees being staff. The noticeable presence of staff in these workshops showed their desire for AI literacy.

What if we built on what we have created thus far for this workshop and made it available to train our own library staff? In a study on the implementation of the GPT-4 Exploration Program at the University of New Mexico, Lo (2024b) describes a program he developed to help staff learn about AI. Drawing on Malcolm Knowles's principles of adult learning, the program emphasized self-directed learning, leveraging learners' prior experiences, and a problem-centered orientation. For example, the program encouraged staff to tailor their learning about AI to their roles and responsibilities within the library, thus tapping into their internal motivation and their understanding of why learning about AI matters.

I appreciate that Lo (2024b) utilized Knowles's principles because this framework gives autonomy to the library's staff and aligns with adult learning practices. Library staff bring experiences and knowledge that can contribute to conversations regarding the use of AI, particularly staff from underrepresented communities in the library field. Knowles's principles also provide space for staff to challenge how and why AI is being used. This can prompt a discussion on how the technology will alter their work and/or how they work. As Marx argues, the worker's relationship with technology will change, in both how they work and the work they produce. [End Page 615]

I was introduced to Lo's research on labor in academic libraries this past spring through a virtual conference. Prior to seeing Lo speak, I kept asking my colleagues (and inserting into conversations): How will AI affect the work of staff? We talk about the benefits and risks AI can offer for a library's operations and for instruction, but how will it change the work the staff produces? How will the technology impact jobs? I received minimal responses, I think primarily because no one really knows. In addition, people are already maxed out by their workloads and find it challenging to devote time to a deep dive on the subject, despite the fact that the subject is a huge ethical concern for many.

It was refreshing to learn about Lo's research on labor in academic libraries through the lens of AI literacy. My initial thought was "finally," although the topic is still not discussed at the length it should be. I follow Lo on LinkedIn and can see his excitement about AI overall. I am more cynical about AI than he is, but I am also realistic. Like Betty Blackman with the microcomputers, I recognize the need for AI literacy for library staff to ensure they do not fall through the cracks and get pushed out of not only their jobs but also the conversations surrounding AI. It is crucial that we train employees on how to use AI so that they understand how AI works and can "use ethics correctly and to prioritize critical thinking so their maximum potential is obtained in the work they do" (García-Peñalvo 2023, 4).

Some of my colleagues are using tools such as ChatGPT and Microsoft Copilot to assist with work such as brainstorming ideas, drafting email communications, and helping with note-taking. This usage feels scattered, though, because there is no formalized, centralized training for our staff. Lo's findings bear this out: "AI tools have yet to become a staple in library work. The majority of participants do not frequently use these tools, with 41.79% never using generative AI tools and 28.01% using them less than once a month" (2024a, 643). In addition, there is no consistency in awareness of how to use AI ethically. Reskilling employees needs to encompass both technical aspects and ethical implications.

At work, I observed that one department in the library uses Zoom's AI Companion to record minutes for meetings, and one of their staff wants to use the same tool to record minutes for a committee meeting with representation from different departments in the library. However, these meetings are very different in nature. The first focuses on brainstorming ideas for library events, while the latter centers on sensitive discussions of DEIA (Diversity, Equity, Inclusivity, and Antiracism/Accessibility). The suggestion for AI-based note-taking focuses on efficiency, but what is overlooked is the ethical concern of privacy. In addition to the conversation being recorded word for word by a machine, it is saved in a cloud where it can be accessed by those not directly involved in the meeting, such as ITS. The alternative would be to have an actual human, mindful of the power dynamics among committee members and with library leadership, take the minutes [End Page 616] for a meeting that discusses sensitive topics. Human note-taking gives those in the meeting more control over what will or will not be in the minutes as well as how the minutes will be archived. This is an example of the need for AI literacy: staff must learn to critically evaluate an AI tool's effectiveness and decide when its use is appropriate (Lo 2024b, 6).

I continue to observe why my colleagues use AI tools and how they are using these tools. Some of my colleagues are curious and want to learn how to use a specific AI tool, such as ChatGPT. The motivation is the hope for efficiency and saving time. Other colleagues need to learn how to utilize AI technology because their supervisors advised them to do so for the same reasons. Browsing LinkedIn, I have read posts by peers, especially those in managerial roles, asking the same thing of their support staff. On one level, I get it: AI is new for a lot of people, thus the exploration and curiosity. At the same time, I find this unnerving because of the power dynamics at play as supervisors use support staff as a testing ground. This is especially true when the supervisors are white, male, and/or cisgender and the support staff are people of color and/or otherwise marginalized.

In addition, we might ask whether technology has saved us time or whether it has "saved" us time doing some tasks just to add more new work or expectations to get our jobs done in a faster time frame. Marx (1952) writes about the shortening of the workday and productiveness, especially with machines cutting down on time doing a task. For example, a task that originally took two hours to do now takes one hour to complete with the assistance of a machine. Does that mean a shortened working day for the worker, with adequate pay to live comfortably? Hanna-Barbera's The Jetsons comes to my mind. With the advancement of the machine, George Jetson works as a digital index operator pushing a button (literally!) one hour a day for two days a week. He goes home to his family, which includes a robot maid, Rosie, and the family dog, Astro. George obviously receives full pay equal to someone working forty hours a week. He and his family do not struggle and live comfortably with all their needs met (food, housing, education, etc.).

Marx (1952) argues that the shortening of the workday for the worker does not benefit the capitalist at all. As Marx points out, "The shortening of the working day is, therefore, by no means what is aimed at in capitalist production when labour is economized by increasing its productiveness" (1952, 156). Now that higher education is corporatized, the same applies there. Despite all the excitement and talk about AI, and the stylized commercials showing how it will help workers finish certain tasks quicker and allow extra time to relax, focus on important tasks, and so on, the truth is, as it has always been, that the technology will simply place [End Page 617] workers in a position to do even more in the same time frame as before. Eryk Salvaggio argues that this is a productivity myth, which "suggests that anything we spend time on is up for automation—that any time we spend can and should be freed up for the sake of having even more time for other activities or pursuits—which can also be automated" (2024). Compared with the reality we live in, The Jetsons got it right.

For academic libraries that are housed in their parent institutions, AI literacy can position a library in the institution-wide conversation (IFLA Artificial Intelligence SIG 2023). At the university where I work, since the release of ChatGPT, AI continues to be a disruption for our campus community. As a library, we are in the process of educating ourselves on AI, just as other units on campus are doing the same, in particular ITS. Yet there is no official statement from our university's senior leadership in response to concerns and questions the community has. Rather, senior leadership punts it back to the individual units on campus. The two campus units that are natural stakeholders in the conversation are the library and ITS. The latter is now officially recognized as a natural stakeholder by senior leadership, while the library is not. The IFLA Artificial Intelligence SIG points out that positioning both ITS and the library as major stakeholders is crucial because "as a female-majority profession, librarians can play a special role in counter-balancing the impacts of gender bias in the wider IT industry" (2023).

The library profession employs more women (82.5 percent) than men (17.5 percent) and more whites (81.2 percent) compared with Black or African Americans (7.0 percent), Asian Americans (5.5 percent), and Latine/Hispanics (11.1 percent) (US Department of Labor, Bureau of Labor Statistics 2023). The IT industry employs more men (77.4 percent) than women (22.6 percent) (US Equal Employment Opportunity Commission, 2024). It also employs more whites (59.9 percent) compared with Asian Americans (18.1 percent), African Americans (7.4 percent), and Latine/Hispanics (9.9 percent) (US Equal Employment Opportunity Commission 2024). Thus, both the library and IT fields have a diversity problem. A breakdown of data such as this aligns with the Black Feminist Perspective of digging deep to have a better understanding of where the power is held and not held. Having both the library and ITS included as stakeholders can be an opportunity for both areas to acknowledge their lack of diversity, recognize their attempts in trying to diversify their respective fields, and provide space for those from underrepresented communities to (safely) contribute to the conversation about the impact of AI technology on the workforce.

An academic library as a crucial stakeholder can also counterbalance ITS as an access point for what Ruha Benjamin (2024) refers to as "dominant imaginaries." Dominant imaginaries are the tech titans and billionaires such as Satya Nadella, chief executive officer of Microsoft; Tim Cook, [End Page 618] chief executive officer of Apple; and Sam Altman, chief executive officer of OpenAI. Dominant imaginaries continue to create the societal structures that they imagine, or as Benjamin describes: "Listen carefully and you'll hear how these new-ish stories gloss over an unsavory subtext: That the rich, powerful, and pedigreed know what is best for all" (2024, 18). These dominant imaginaries are the "self-appointed stewards of humanity [who] work to colonize the future" (Benjamin 2024, 18). To colonize our future. Higher education is one avenue for powerful entities such as Microsoft and Google to implement their own visions of the future. These dominant imaginaries infuse their own biases in the algorithms, software, and platforms they create, and campus units such as ITS are one of the major access points for Big Tech in framing how education should be operationalized and how our students should be educated. An academic library can serve as a needed antithesis to these dominant imaginaries.

I have attended faculty listening sessions on ChatGPT and heard from some of the faculty outside these listening sessions of their concerns about AI. For some faculty, ITS is a salesperson on behalf of Big Tech who will impose the use of AI on faculty, and the university provides the venue to do so. Faculty also have concerns related to privacy and the devaluing of critical thinking skills. On the other hand, faculty view the library as meeting a higher standard: we are not that salesperson but a protector of privacy, providing credible access to information and teaching information literacy. The library can bring a focus on the ethical implications of AI to the conversation.

AI is racist; it is not neutral or fair. Yet the way it is designed, from its natural language capability to the machine learning algorithms that generative AI uses to create new data, along with the way it is marketed by Big Tech, gives the impression that AI is fair. In Algorithms of Oppression: How Search Engines Reinforce Racism (2018), Safiya Umoja Noble exposes how algorithms can be racist. Noble traces this racism and sexism back to the dominant imaginaries, who are primarily white and male and whose perspectives and imaginations are embedded in the design architectures and algorithms of AI (Schelenz 2022). Noble (2018, 26–27) highlights a growing movement among scholars and activists to sound the alarm on the impact of AI and how this technology will further social injustices and structural racism. Deep machine learning, which uses "algorithms to replicate human thinking, is predicated on specific values from specific kinds of people—namely, the most powerful institutions in society and those who control them" (Noble 2018, 29). One of those powerful institutions is Google. The global dominance of Google continues to worsen digital inequities and deepen global economic divides (Noble 2018, 28). Following along the Black Feminist Perspective, Noble argues that what is missing in the study of Google is an intersectional power analysis to account "for the ways in which marginalized people are exponentially [End Page 619] harmed by Google" (2018, 28). Through this lens, artificial intelligence is a human rights issue (Noble 2018, 28). Academic libraries can challenge those dominant imaginaries by promoting AI literacy and adapting Rumman Chowdhury's concept of "responsible operations" as guidance (Padilla 2019, 7). Responsible operations "refers to individual, organizational, and community capacities to support responsible use of data science, machine learning, and AI" (Padilla 2019, 7).

Coming back to my institution, it is important to note that ITS and the academic library should not be the only two stakeholders in the conversation. Student, staff, and faculty affinity groups that represent Black, Latine, LGBTQ+, and Asian Pacific Islander Desi American communities, along with other groups on campus, should also be stakeholders. As of this writing, ITS has provided institutional access for faculty, students, and staff to a handful of AI programs; developed an introductory workshop on AI, specifically on Microsoft Copilot; and begun hosting monthly virtual open office hours for staff and faculty. Senior leadership still does not recognize our library on the same level as ITS, despite the fact that we are already conducting workshops on ChatGPT; integrating AI literacy, including its ethical implications, into library instruction for students; and continuing research on AI. Regardless, we continue to promote the work we are doing and ensure we are at the table for all conversations related to AI. Our collective experience as librarians is giving us a better sense of how to position ourselves in this conversation by focusing on AI literacy and the ethical implications of AI technology.

From Disruption to Transformation

In time, we will go from disruption to transformation—and then to another round of digital transformation. As Karl Marx reminds us, the worker's relationship with technology will change. As part of the transformation in the workplace, there will be a fundamental rethinking of processes, changing competencies, and organizational culture and structure (IFLA Artificial Intelligence SIG 2023). This can be an opportunity for academic libraries to reflect upon and incorporate responsible operations on AI (Padilla 2019).

Responsible operations are grounded in shared ethical commitments and are where academic libraries can evaluate their relationship with AI technology (Padilla 2019, 8). Padilla (2019, 8) refers to Luciano Floridi and Josh Cowls's "A Unified Framework of Five Principles for AI in Society" as a guide for developing and using AI:

  • Beneficence—Promoting well-being, preserving dignity, and sustaining the planet.

  • Nonmaleficence—Privacy, security, and capability caution.

  • Autonomy—The power to decide. [End Page 620]

  • Justice—Promoting prosperity, preserving solidarity, and avoiding unfairness.

  • Explicability—Enabling the other principles through intelligibility and accountability.

Libraries should reflect this AI-driven transformation in their strategic plans. Strategic plans show commitment—in this case, commitment to allocating "sufficient resources to support ongoing AI reskilling initiatives" (Lo 2024b, 6). It is also a commitment to designating time and space for library employees to learn and experiment (Lo 2024b). Incorporating AI skills into hiring criteria using this framework sets a baseline for understanding AI and its impact on labor (Lo 2024a). This is an ethical concern for both the university and the academic library. Lo (2024b) stresses the importance of investing in staff's ongoing professional development to ensure that no one falls through the cracks and falls behind, with the possibility of losing their job and livelihood. Instead, libraries need to be proactive in "keeping the workforce adaptable and competitive in a rapidly changing landscape" (IFLA Artificial Intelligence SIG 2023). Academic library employees come from diverse backgrounds and life experiences, and their levels of comfort and trust (in both AI and the institution) vary. Lo (2024b, 2) points out that reskilling programs need to account for these differences and recognize employees' own autonomy.

Ethical Implications

Netflix CEO Ted Sarandos is quoted in a recent New York Times interview saying, "A.I. is not going to take your job. The person who uses A.I. well might take your job" (Garcia-Navarro 2024). There is truth in this statement, in that workers need to be AI literate to be competitive in the job market. On the other hand, the statement is problematic because it speaks to the hyperindividualistic society we live in, justifying the presence of winners and losers in the job market. Benjamin (2024) invokes "survival of the fittest," the phrase philosopher Herbert Spencer coined in 1864: the "'preservation of favored races [i.e., whites] in the struggle for life,' which helped justify racial hierarchies as a natural byproduct of cutthroat competition between those who are strong and those who are weak" (Benjamin 2024, 17). This myth persists to this day like a cancer. With AI technology, for people like Ted Sarandos, it is about the survival of the fittest.

It does not have to be this way and should not be this way. When I teach ethical implications and generative AI to first-year students, I tell them: We can do better. If AI is going to be here, how can we do better with this technology? How can this technology be decolonized so that it is community owned and community focused? Can this technology be developed [End Page 621] without further harming the planet? If not, do we have to rely on AI? There are other ways and methods of support that have been around far longer than AI, not to mention microcomputers. Disruption, the need to adapt and learn, and resistance to disruption are experiences we share with nineteenth-century society during its time of industrialization. I tell students about ChatGPT: Challenge it. Do not accept the information it gives you at face value every time. Do your research and challenge it. See how it responds and have that conversation. I remind students that I am saying this so they remember that they have control of this technology, not the other way around.

As with students, the same applies to staff in reminding them of their own autonomy in relation to AI—and not only in relation to the technology itself but to the power structures as well. As staff become informed and learn about AI, including what is under the hood and its ethical implications, they will be in control of the technology. In doing so, individuals can set their own boundaries or choose not to use the technology at all. Staff do not have to follow the status quo established and maintained by what cultural historian Thomas Berry refers to as the four pillars: religions, governments, corporations, and universities (Benjamin 2024, 16–17). Scholars, librarians, and activists have shown that groups of people are purposely excluded, and we can change that. Black Feminist scholars and activists have been making this argument through an intersectional framework. Writer and activist adrienne maree brown discusses how emergent strategy "shows us that adaptation and evolution depends more upon critical, deep, and authentic connections, a thread that can be tugged for support and resilience" (2017, 14). Aligning with what I tell students about challenging the technology and doing better, brown reminds us that through emergent strategy we can spread radical ideas through conversations, questions, and one-to-one interactions. Making change, especially change for the betterment of humanity, does not always require something huge. We should not underestimate the small stuff, because it can have a big impact. In the context of AI and labor, no, the person who uses AI is not going to take someone else's job. We can change that narrative and thus change our behaviors from hyperindividualistic to community focused. The person who knows AI and the person who does not can both thrive at their jobs.

Academic libraries and their respective universities need to acknowledge and reflect this in their discussions and actionable items when implementing AI in their organizations. For example, Arizona State University's partnership with OpenAI benefits the university by helping it stay competitive in technology and in higher education (Coffey 2024). The relationship with OpenAI opens opportunities for research and resources, including funding, for the university. For the university, this is good. Ethically, though, who is being exploited so another group can benefit? Knowing the history [End Page 622] of universities and tech companies like OpenAI maintaining white supremacist and classist structures, this time of disruption and transformation is an opportunity for universities to break from the old and (re)imagine how AI can benefit more than just a select few without exploiting other people and the environment. Starting with the campus community, how much decision-making power will staff, faculty, and students have in this relationship with OpenAI? Among those three groups, who is included in these discussions? Who is excluded? Lelia Marie Hampton stresses that "capitalist-imperialist companies and governments manoeuvre a forced adoption of their technologies that embed their ideologies of power, overpowering local technology markets and pursuing their insatiable profit motives at the expense of racialised communities which 'constitutes a twenty-first century form of colonisation'" (2023, 127).

Library services such as reference will change and benefit the user. But what about the changes in the work library staff do to provide those services? I tell students, If you are using ChatGPT to help you brainstorm a topic, something you could also do through a one-to-one human interaction with a librarian, what does that mean for that part of the librarian's work? If you can use an AI avatar to help improve your papers, what does that mean for the writing tutors in the Academic Resource Center? This technology is beneficial and convenient, but who is impacted so you can benefit from it? It will change the work we do, and what will that look like? Will staff get the training, continued professional development opportunities, and designated time they need to reskill and stay current with the technology? In much of the research on reskilling library staff, "staff" usually refers to administrators and librarians. Support staff need to be included, especially if we are talking about "community." At Loyola Marymount University, our mission statement focuses on the development of the whole person. The campus community constantly reminds leadership of this, and with the arrival of AI in our workspaces, leadership is reminded of this again. As Lo (2024b) argues, staff will have diverse experiences, with a mix of those ready to learn and those who are hesitant, so how do we work as a community to ensure our staff get the professional development they need for the work they do (or will be doing)? Just as important is ensuring that staff have a voice in redefining the work they do, in addition to providing space for their growth and autonomy.

We should also consider work study students, who are considered employees under the amended Fair Labor Standards Act of 1938. Our library relies heavily on work study students. That we rely on their low-wage labor rather than invest in hiring full-time employees speaks to how corporatized higher education has become. At the same time, students attending an expensive, predominately white institution in a city with a high cost of living need the money. How will AI impact their work? Will AI consolidate the work such that the library needs fewer [End Page 623] work study students, resulting in fewer on-campus job opportunities? On the other hand, our library ensures that work study students receive training and constructive evaluation of their work, not only to give them work experience but also to mentor them in developing professionalism and skill sets and to build their confidence. Reskilling work study students on AI technology should be part of the library's commitment through its own strategic plan. They too are employees of the library.

How do we feel, consciously, about partnering with a company such as OpenAI, which outsources the labeling of toxic content to a firm that exploits Kenyan workers so that end users benefit (Perrigo 2023)? This is the global underclass; this is invisible labor (Gray and Suri 2019). The Kenyan workers are AI's janitorial staff. As Hampton argues, "A critical analysis of techno-racial capitalism requires an expansive discussion of the global economic landscape of AI. … To ignore this landscape is a betrayal of the racially oppressed workers of the world whose labour lays the foundation for AI as well as the people whose oppression is commodified through AI systems" (2023, 126). As I mentioned earlier, I remind our students that their benefits from AI come off the backs of exploited workers.

Academic libraries need to be aware of their positionality in their relationship with AI technology, as well as their parent institution's positionality. We need to look at AI broadly and not through tunnel vision. Academic libraries and their parent institutions need to recognize and acknowledge that in using AI, we are participants in digital colonialism, which relies on exploitative labor behind the scenes and leaves carbon and water footprints on the environment (Kwet 2019). From a Black Feminist perspective, this is all connected. Mining for raw materials such as lithium (batteries), cobalt (rechargeable batteries), and copper (wiring) leads to soil erosion, deforestation, water pollution, and the displacement and destruction of communities and habitats (Furze 2023). The mining is furthering conflicts in regions such as the Democratic Republic of the Congo, where political upheaval, starvation, and even genocide are occurring. The natural environment and oppressed people are treated as "natural 'resources'" for AI (Hampton 2023, 120). Academic libraries need to ask themselves how they can contribute to imagining and organizing "for collective liberation and ecological sustainability" (Hampton 2023, 134). Libraries can engage people, including their own staff, as citizen scientists by soliciting feedback on potential implementations of AI, shaping its systems design, and providing AI literacy (Hampton 2023). Is it possible to develop an AI system that benefits humanity and protects the environment, and to use it to those ends? Currently, AI has the potential to benefit individuals with accessibility needs such as learning disabilities. At the same time, AI is widening [End Page 624] the inequities gap. Libraries can help us move toward a technological ecosystem that does not reproduce a matrix of domination (Costanza-Chock 2020; Hampton 2023).

Conclusion

During Betty Blackman's tenure as library director at Loyola Marymount University, she applied for and was awarded a $100,000 grant from the Loyola Marymount University Foundation (Wilken 2006, 21). Blackman used the grant money for the library's first online catalog; until then, the library had worked with a card catalog. Fast forward to 2023, when the William H. Hannon Library implemented a new online catalog system, Primo/Alma. In a conversation with Dr. Magaela Bethune, a faculty colleague in the Department of African American Studies, I shared this history; her response: "She is an Afrofuturist!"

Afrofuturism incorporates Black history and culture, but it is also about having foresight, curiosity, growth, and empathy and reimagining a different world—both present and future—while connecting to the past. Blackman was an Afrofuturist. She had foresight on technological changes starting with microcomputers. She observed, listened, researched, explored, and experimented. With the library's first online catalog, in addition to microcomputers, Blackman advocated for consistent and ongoing training and professional development in reskilling the staff.

In the same presentation in which Blackman asked about the role of universities and academic libraries in technological advancements, she also addressed the lack of diversity in the library field. Her presentation was intersectional in its analysis; she did not isolate the advancement of technology from oppression, or vice versa. As an Afrofuturist, Blackman challenged her contemporaries and those in the future to reimagine what any technology, including today's AI, can be. Who is included as a stakeholder? Who is intentionally left out? Do workers have a voice in how technology will redefine their job responsibilities? Blackman stresses that we need to be critically aware of who the dominant imaginaries are and what their purposes are, and to challenge them. She reminds us to think as a collective, a community. Technology should not benefit just one or even a few but, rather, all of us.

Nicole Lucero Murph

Nicole Lucero Murph (she/her) is a reference and instruction librarian at the William H. Hannon Library at Loyola Marymount University. She teaches information literacy especially for subjects in the arts, film and television, and social sciences. Murph is driven in her advocacy against injustices, which informs her role as a teaching librarian. Her research interests are history, class and power, autoethnography, artificial intelligence ethics, Black Feminism, Black Feminist Thought, and Afrofuturism.

References

Benjamin, Ruha. 2024. Imagination: A Manifesto. W. W. Norton.
brown, adrienne maree. 2017. Emergent Strategy: Shaping Change, Changing Worlds. AK Press.
California Women in Higher Education. 1986. Meeting Minutes. Loyola Marymount University Archives, Los Angeles.
Coffey, Lauren. 2024. "Arizona State Joins ChatGPT in First Higher Ed Partnership." Inside Higher Ed, January 19. https://www.insidehighered.com/news/quick-takes/2024/01/19/arizona-state-joins-chatgpt-first-higher-ed-partnership.
Collins, Patricia H. 2022. Black Feminist Thought: Knowledge, Consciousness, and the Politics of Empowerment. Routledge.
Costanza-Chock, Sasha. 2020. Design Justice: Community-Led Practices to Build the Worlds We Need. MIT Press.
Davis, Angela Y. 1981. Women, Race and Class. Vintage Books.
Furze, Leon. 2023. "Teaching AI Ethics: Environment." Teaching AI: The Series (blog), March 13. https://leonfurze.com/2023/03/13/teaching-ai-ethics-environment/.
Garcia-Navarro, Lulu. 2024. "The Interview: The Netflix Chief's Plan to Get You to Binge Even More." The New York Times, May 25. https://www.nytimes.com/2024/05/25/magazine/ted-sarandos-netflix-interview.html.
García-Peñalvo, Francisco José. 2023. "The Perception of Artificial Intelligence in Educational Contexts After the Launch of ChatGPT: Disruption or Panic?" Education in the Knowledge Society 24:1–9. https://doi.org/10.14201/eks.31279.
Gray, Mary L., and Siddharth Suri. 2019. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Harper Business.
Hampton, Lelia Marie. 2023. "Techno-racial Capitalism." In Feminist AI: Critical Perspectives on Algorithms, Data, and Intelligent Machines, edited by Jude Browne, Stephen Cave, Eleanor Drage, and Kerry McInerney. Oxford University Press.
International Federation of Library Associations and Institutions Artificial Intelligence SIG. 2023. "Developing a Library Strategic Response to Artificial Intelligence." Last modified November 20. https://www.ifla.org/g/ai/developing-a-library-strategic-response-to-artificial-intelligence/.
Kwet, Michael. 2019. "Digital Colonialism: US Empire and the New Imperialism in the Global South." Race & Class 60 (4): 3–26. https://doi.org/10.1177/0306396818823172.
Lo, Leo S. 2024a. "Evaluating AI Literacy in Academic Libraries: A Survey Study with a Focus on U.S. Employees." College & Research Libraries 85 (5): 635–668. https://doi.org/10.5860/crl.85.5.635.
Lo, Leo S. 2024b. "Transforming Academic Librarianship Through AI Reskilling: Insights from the GPT-4 Exploration Program." The Journal of Academic Librarianship 50 (3): 1–7. https://doi.org/10.1016/j.acalib.2024.102883.
Marx, Karl. 1952. Capital. In Great Books of the Western World, edited by Robert Maynard Hutchins. University of Chicago.
Murph, Nicole L. 2023. "Betty Joan Blackman: Embracing Life Outside the Safe Zone." LMU Library News (blog), July 7. https://librarynews.lmu.edu/2023/07/betty-joan-blackman-embracing-life-outside-the-safe-zone/.
Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press.
Okunlaya, Rifqah Olufunmilayo, Norris Syed Abdullah, and Rose Alinda Alias. 2022. "Artificial Intelligence (AI) Library Services Innovative Conceptual Framework for the Digital Transformation of University Education." Library Hi Tech 40 (6): 1869–92. https://doi.org/10.1108/LHT-07-2021-0242.
Padilla, Thomas. 2019. "Responsible Operations: Data Science, Machine Learning, and AI in Libraries." OCLC Research. https://doi.org/10.25333/xk7z-9g97.
Perrigo, Billy. 2023. "Exclusive: OpenAI Used Kenyan Workers on Less than $2 per Hour to Make ChatGPT Less Toxic." Time, January 18. https://time.com/6247678/openai-chatgpt-kenya-workers/.
Salvaggio, Eryk. 2024. "Challenging the Myths of Generative AI." Tech Policy Press, August 29. https://www.techpolicy.press/challenging-the-myths-of-generative-ai/.
Schelenz, Laura. 2022. "Artificial Intelligence Between Oppression and Resistance: Black Feminist Perspectives on Emerging Technologies." In Artificial Intelligence and Its Discontents: Critiques from the Social Sciences and Humanities, edited by Ariane Hanemaayer. Palgrave Macmillan.
US Department of Labor, Bureau of Labor Statistics. 2023. "Employed Persons by Detailed Occupation, Sex, Race, and Hispanic or Latino Ethnicity." Labor Force Statistics from the Current Population Survey. https://www.bls.gov/cps/data/aa2023/cpsaat11.htm.
US Equal Employment Opportunity Commission. 2024. "High Tech, Low Inclusion: Diversity in High Tech Workforce and Sector, 2014–2022." https://www.eeoc.gov/sites/default/files/2024-09/20240910_Diversity%20in%20the%20High%20Tech%20Workforce%20and%20Sector%202014-2022.pdf.
Wendling, Amy E. 2009. Karl Marx on Technology and Alienation. Palgrave Macmillan.
Wheatley, Amanda, and Sandy Hervieux. 2022. "Separating Artificial Intelligence from Science Fiction: Creating an Academic Library Workshop Series on AI Literacy." In The Rise of AI: Implications and Applications of Artificial Intelligence in Academic Libraries, edited by Sandy Hervieux and Amanda Wheatley. Association of College and Research Libraries.
Wilken, Binnie Tate. 2006. African American Librarians in the Far West: Pioneers and Trailblazers. Scarecrow Press.
