AI and the Material Conditions of Instruction

Abstract

While there is a growing literature on the subject of using generative artificial intelligence tools in college teaching in general (and for information literacy instruction in particular), one important question remains underexplored: What is the point? This article suggests an approach to addressing that question by way of considering how technochauvinism—an intellectual attitude that combines blind optimism about the ability of technology to address social problems with the erasure of the human conditions that make that technology possible—discursively shapes and is reinforced by the material conditions of instruction. This approach reveals that concerns about how artificial intelligence does and does not serve pedagogical purposes are ultimately systemic in nature and cannot simply be addressed by teaching the technology.

Keywords

artificial intelligence, technochauvinism, pedagogy, information literacy

Introduction

This essay begins with a question that has so far not been widely addressed in discussions of the use of machine learning tools and generative artificial intelligence (AI) in higher education: What is the point? Not the point of using the tools themselves—recent literature is rich in examples of reasons to use these tools, as well as reasons to avoid or curtail or carefully manage their use. No, our concern here lies in student anxieties about what AI use has already shown them about the educational process itself. What is the point of the work college students are expected to do in their classes in a world in which large language models (LLMs) exist?

From the student's point of view, it looks as though most homework could be produced by an appropriately prompted generative AI system, and the system itself appears to provide informational content and feedback on demand in a way that many find indistinguishable in value from what is provided in class. If the machine can do the work, why should the student bother? Why should anyone bother to expend effort on tasks that can be automated so easily? Instructors stand on the other side of the "what is the point?" question. For them, that query takes a different shape, neatly summed up by Rudolph et al.'s (2023) review of the LLM literature for education: Is this convenient machinery a "bullshit spewer," or is it really "the end of traditional assessments in higher education?" A cursory scan of the still-growing body of literature suggests that students fear the meaninglessness of their efforts and appreciate the convenience of AI tools; instructors fear the meaninglessness of their assessments and appreciate the potential of AI tools to create new opportunities for learning.

While most of the broader implications of this question lie far beyond the scope of information literacy and one-shot library instruction, there is a core pedagogical concern that librarians can and should grapple with here: When we ask what the point is, we are ultimately addressing a systemic concern rather than a motivational one. Why do we teach and learn in the way we do in a ChatGPT world? The answer (or so we claim hereafter) lies in resisting the effects of technochauvinism on the material conditions of instruction and, by extension, on learning and assessment. If that is the answer, then it may turn out to be the case that librarians need to rethink their information literacy work to break free of technochauvinist thinking in order to make information literacy instruction meaningful.

What Is the Point? A Story to Start With

Not long ago, one of the authors of this article was asked to provide an instruction session to an upper-level undergraduate course in scientific literacy on the role of AI both in the classroom and within the larger realm of scientific publishing. The session began with a brief overview of how LLMs function on a technical level, followed by some hands-on activities, then ended with a discussion of strengths, weaknesses, and potential moral and ethical dilemmas in various settings. Toward the end of the discussion, a student in the back of the class raised his hand and posed a rapid-fire series of questions that began with "What is the point of all of these assignments if ChatGPT can just do them?," progressed to "What is the point of a college degree if I can learn what I need to from LLM conversations?," and ended with "What is even the point of human interactions if I can carry on meaningful and productive conversations with a chatbot?"

While being asked to explain the purpose of human interactions in a live class setting was certainly an unexpected turn of events, what was even more surprising was the fact that the student's classmates were collectively nodding along in agreement with his questions. This was not an isolated case of a single student having a public existential crisis. This student was, instead, voicing the serious concerns of the whole group—and he was making very good points. What differentiates conversing with ChatGPT from text messaging with a friend you never see in person (a sort of student-imagined Turing test)? What is the issue with LLMs generating misinformation when humans perpetrate the same errors? Why put oneself thousands of dollars in debt to learn from a person when LLMs "teach" for free?

While it is far beyond the scope of this discussion to answer the student's deeper existential questions, we can probably assert with some confidence that educators have a responsibility to create meaning within pedagogical settings. As such, the more focused question becomes: What is the point of a specific lesson or course? What are meaningful methods of assessing student achievement in that regard? Backward design (the practice of designing instruction using learning outcomes as a point of departure), as popularized by Wiggins and McTighe (2005), is not a novel concept. Nevertheless, while desired learning outcomes vary widely, there is a tendency in higher education to rely extensively upon a fixed set of traditional formats for assessing outcomes. For example, while the ability to synthesize and communicate ideas in a cohesive manner is vital to success in any professional setting, the appropriate communication format is situation-dependent and is virtually never going to be a standard five-paragraph essay. Nonetheless, these types of essays remain a common form of academic assessment (Gibson 2017; Harker 2014) for idea synthesis and delivery, despite numerous contextual changes (e.g., technologies, diverse communication norms, social issues) that have rendered this format at best ineffective and at worst irrelevant.

In either case, the authors of this article echo that Scientific Literacy student in questioning the purpose of spending time on an assignment with little apparent real-life application that could be generated easily by any LLM. In much the same way, the contemporary student in a science, technology, engineering, or mathematics discipline would rightly question the need to perform logarithmic calculations by hand when calculators exist. Calculators can perform mathematical operations quite effectively and quickly. What the student, in this case, really needs to know is why and when these calculations are needed and what the answers mean within a larger context. In other words, to assuage the concerns of both students and educators: Human-designed, -implemented, and -mediated instruction is still vital for a successful educational experience and not something that can be fully outsourced to AI, though AI (like calculators) can play a productive role in the learning experience for both teachers and learners. That said, the complete redesign of learning objectives and the reimagining of assessments to incorporate emerging technologies requires considerable time and resources from instructors—and this innovation may not always be recognized or incentivized by educational infrastructures.

So, to reframe the problem yet again: What is the point, for students and instructors alike, of an education grounded in practices that are readily circumvented or corrupted by an inappropriate reliance on a technology and that only appear to meet real-world learning needs? How can information literacy instruction done in the context of such practices actually accomplish its goals? In order to address this version of the problem, we first need a clearer picture of how generative AI use is currently situated for both students and instructors in higher education contexts.

AI Use in Higher Education: A Snapshot of the Recent Literature

It is important to recognize that while there is quite a lot of recently published work about AI tools in higher education, the actual evidence concerning the long-term effects of their use is necessarily thin. This is an emerging area of study, and there is much about it that remains understudied and unknown. Part of the challenge here is a tendency to use the phrase "artificial intelligence" to cover a range of different technologies that lend themselves to very different uses; while large language models deployed in chatbot applications and image generators currently occupy the spotlight, machine learning and predictive modeling applications have uses beyond chatbot-style language processing or image creation, alongside parallel developments such as intelligent tutoring systems. This complicates the process of drawing conclusions about how these tools are used and what their effects are. It also shapes how AI usage is discussed at the institutional level (Bearman et al. 2023). For the purpose of our discussion here, we are particularly interested in tools such as ChatGPT, Midjourney, and DALL-E, as well as various AI detection and scoring applications that are commonly used by students and instructors for doing and assessing coursework, as opposed to machine learning applications used primarily for administrative analytic purposes. We will hereafter refer to those systems as "pedagogical AI," with the acknowledgment that this usage is a departure from applications of that label that usually focus on the instructor's experience rather than the student's.

At present, AI discussions in higher education are dominated by three broad types of literature: arguments about the benefits and risks of pedagogical AI usage (Bahroun et al. 2023; Crompton and Burke 2023; Grassini 2023; Kumar et al. 2020; Lin et al. 2024; Montenegro-Rueda et al. 2023; Romero et al. 2024), practical strategies for responding to student use of pedagogical AI tools and examples of how these applications can be used effectively in instructional and scholarly contexts (writing, assessment, critical thinking, editing, brainstorming, etc.) (Gimpel et al. 2023; Hashmi and Bal 2024; Houston and Corrado 2023; Imran and Almusharraf 2023; Meakin 2024), and continuing discussions of the student-focused use of machine learning in administrative analytic contexts (predicting and monitoring student performance, mental health support, etc.) (Crompton and Burke 2023). There is also a smaller, growing body of work on student perceptions and outcomes relative to pedagogical AI use (Darvishi et al. 2024; Markos et al. 2024; Sila et al. 2023; Stojanov et al. 2024; Wu and Yu 2024; Zhang 2024; Zhang et al. 2024).

Of particular interest here is the way in which the pro/con arguments, strategy recommendations, and student perception/learning outcome studies intersect to reveal complementary concerns among educators and the students they teach. Most of the recent studies of the benefits and risks of AI use selected for this essay, for example, tend to pick out the same basic set of issues, a selection of the most common of which appears in table 1.

Practical strategies for dealing with AI (either for responding when students use it or for making use of it in instruction or scholarship) tend on the whole to try to leverage beneficial uses in service to handling the risks by offering workarounds, best practices, advice, and creative ideas for new pedagogies (see, e.g., AlAfnan et al. 2023). Both the pro/con arguments and the assorted collections of practical advice frequently take it as given that AI in education is here to stay and that it is better to flow with the tide of technological disruption than to continue to resist it. This attitude is most clearly visible in the US Department of Education's policy report Artificial Intelligence and the Future of Teaching and Learning (Office of Educational Technology 2023), which boils down the trends observed in the literature into a set of recommendations meant to mitigate and manage AI disruptions and treats the use of pedagogical AI solutions as an inevitability requiring a policy response.

Discussions of machine learning tools, large language models, and so on in the context of academic librarianship are broadly similar to the studies described above, filtered through the specific needs of library services, reference services, and information literacy instruction (Aithal and Aithal 2023; Formanek 2024). AI tools present many of the same challenges for information literacy instruction as they do for other subject areas, but for those teaching information literacy and/or digital literacy, the most important recommendations for how to deal with them have focused on tool adoption and skill development, both for the librarians themselves and for the students they teach (Boehme et al., n.d.; Formanek 2024; Houston and Corrado 2023; Madunić and Sovulj 2024; Scott-Branch et al. 2023). While librarians and other instructors share an interest in preventing academic dishonesty, source evaluation and dealing with misinformation loom larger in the information literacy space.

Table 1. Potential benefits and risks in using generative artificial intelligence (AI) for instruction.

Interestingly, studies of student perceptions and learning outcomes note both that a better understanding of how AI tools like ChatGPT actually work may lead to more effective and ethical use (Holland and Ciachir 2024; Stojanov et al. 2024; Zhang 2024) and that this use could be the occasion of considerable student anxiety. Positive student attitudes toward use of a tool like ChatGPT in the research conducted so far seem to match the list of advantages typically described in pro/con arguments aimed at instructors, along with the additional benefit of immediate responsiveness. From the student's point of view, interactions with the machine learning system provide accessible help, almost instantaneous feedback, and ease of use for getting work done relatively quickly (Holland and Ciachir 2024; Sila et al. 2023; Zhang 2024). Student anxieties, however, appear to develop from problems with self-efficacy and the way in which chatbot-style AI tools in particular support a kind of technological codependency, at the far end of which is the alienation of students from the social world of instruction and the loss of student agency in the learning process (Darvishi et al. 2024; Duong et al. 2024; Holland and Ciachir 2024; Hughes 2021; Stojanov et al. 2024; Zhang et al. 2024). The evidence with regard to learning outcomes so far is inconclusive, although students themselves appear to believe that certain uses of AI applications (such as brainstorming essays in ChatGPT to get the writing process started) help them learn more effectively.

Technochauvinism and the Educational Experience

In order to understand the further implications of the "what is the point" question for information literacy instruction in higher education in light of the literature discussed above, it is helpful to turn to another way to talk about the problem: the allure of technochauvinism and the way in which it complicates the material conditions of instruction. Doing so reveals that our existentially troubled student is probably more accurately understood to be raising questions about systems rather than motivations.

According to Habgood-Coote's (2023) deployment of Broussard (2018) in his recent work on deepfakes and epistemology, technochauvinism is

an intellectual attitude, involving three tendencies: to repackage social problems as technological problems (techno-solutionism), to believe that technological systems can perform complex tasks (techno-optimism), and to ignore or underplay the importance of the designers, operators, and maintainers of technological systems (techno-fixation). … These three intellectual dispositions are not intrinsically bad; they are bad because they have bad consequences for peoples' beliefs about political problems, what technological systems can do, and the role of people within technological systems.

We see techno-solutionism at work in a number of ways in the deployment of pedagogical AI in higher education, ranging from the aggressive promotion and adoption of tools for streamlining grading or administrative tasks to automated plagiarism or AI-use detectors. The use of plagiarism and AI-use detectors upon assignment submission exemplifies the technochauvinist convention of interpreting social issues as technological issues. In this case, applications or software are designed to treat cheating as a technological problem, rather than as the deep-seated social issue of a lack of academic integrity in student work. As such, institutions often rely upon a technology designed to catch students in the act of cheating—and ultimately punish them—instead of taking preemptive measures to address the root causes of a student's actions.

To get at those root causes, we might ask: Did the student realize their work constituted cheating? Was the work actually cheating, or was it a technological false positive? Were there cultural issues or language barriers affecting the student's understanding of academic integrity in this particular setting? What were the student's motivations for cheating? The last question will likely introduce a whole other dimension of social issues, many of which are rooted in the systemic inequities of higher education in the United States (e.g., time constraints due to working multiple jobs to afford education, inadequate training in the subject matter from K–12 institutions or prerequisite courses, lack of adequate accommodations for disabilities, etc.). To further complicate the issue, Miles et al. (2022) note that a lack of familiarity with institutional academic integrity codes and policies is not just limited to students but often extends to teaching faculty and staff. Thus, in choosing to address cheating as a technological issue, institutions that rely upon technological solutions run the risk of creating a new social issue (or perhaps exacerbating an existing issue), in which students are accountable for their actions but in which faculty are not.
Techno-optimism informs every recommendation of ChatGPT and other machine learning systems as a necessary and inevitable disruptor of instructional and scholarly practice. As Broussard (2018) notes, our expectations for what technology can do are frequently inaccurate (see also Williamson's [2024] further discussion of this problem in educational contexts). We impute powers and future developments to existing AI systems that no evidence really supports. The result is a tendency to board the AI hype train and fail to look closely enough at its serious limitations (Bender et al. 2021; Williamson 2024). Techno-fixation lives in the invisibility of the human beings behind the tools and the erasure of responsibility for and transparency about LLM datasets (copyright issues, privacy issues, labor issues). While extant discussions of the pros and cons of AI use in higher education among faculty and administrators sometimes do attempt to overcome techno-fixation (it is certainly an important subject of ethical discussion at least some of the time in the sources reviewed above), the pedagogical AI space is nonetheless largely dominated by techno-solutionism and techno-optimism (see the US Department of Education Office of Educational Technology's recommendations from 2023, mentioned earlier).

If we take seriously the small set of student-focused studies mentioned earlier, student technochauvinism appears to be a slightly different phenomenon, often involving a techno-solutionist attitude shaped by a naive techno-optimism that is decidedly techno-fixated. One of the more troubling examples of this attitude is the student use of tools like ChatGPT to format references; the student, unaware of how the system actually works and either ignorant of or uncomfortable with the mechanics of citation, confidently expects the LLM to generate a correctly formatted citation and does not question the details of the system's output, even when that output is plainly incorrect to a writer with more experience. A related phenomenon is the failure of student users to interact critically with output that appears to provide new information, as when students mistakenly treat ChatGPT or Gemini as a search tool without possessing the prior knowledge necessary to assess the quality of the results the system returns (Bernhardt 2024); they trust the technology to do the job it appears to be doing correctly and do not have either the ability to recognize its failures or any reason to believe such failures are occurring. In both instances, students appear to operate from an unwarranted faith in technology coupled with the habit of expecting technology to do certain kinds of work for them.

As the examples here suggest, technochauvinism is ill-suited by nature to tell us why we should bother with education in its current form when ChatGPT exists. Indeed, it seems to be the case that the student meltdown described above happened as a consequence of technochauvinist thinking running up against the social systems and practices that techno-fixation and techno-solutionism otherwise erase or hide from view—in this case, the social systems, practices, and institutions of higher education itself, or what Williamson characterizes as "the social life of AI in education" (2024, 99). To put it another way: the student's technochauvinism accidentally forced him to confront the whole complex system of the material conditions of instruction.

Grappling with the Material Conditions of Instruction

In this context, we take "material conditions" to refer to the intersection of technology, economics, politics, and culture, in conjunction with the ways in which the forces at that intersection shape social reality. In the specific case of higher education in the United States, the material conditions of instruction include how institutions are structured and run, how bodies of pedagogical practice and knowledge production currently work, the social and economic conditions of the faculty and the students, and, yes, the technology being used. For example, an academic librarian serving science, technology, engineering, and mathematics subject areas as a liaison and instructor at a master's-level university is not perfectly free to teach anything at all where information literacy is concerned. That librarian's instructional choices are constrained by the library and the university's resources, by the requirements and subject features of the academic disciplines served, by the demographic properties of the student body, by the services available to students and instructors, by the academic calendar and other time limitations, by the standards of the profession, and so on. These conditions, importantly for our point here, also include how teaching tends to work in actual practice and how it is incentivized and managed by the institution, as well as how librarians and university instructors are constrained, relative to their expected work, to produce certain kinds of outcomes (Willenborg and Detmering's [2025] discussion of the conditions of library instructors working against misinformation is a useful example of how these constraints sometimes work; see also Ovetz's [2017] analysis of academic labor).

The educational system currently in place is one that—through adjunctification and an increased reliance on instructional tech—is slowly shaping itself away from how the professions used to generate, critique, and disseminate knowledge and toward pragmatic tools for the disciplining of labor (Hughes 2021; Ovetz 2017; Williamson 2024). Broadly speaking, recent trends in higher education appear to be moving toward what Hughes (2021) calls the "deskilling" of teaching, in favor of comparable results allegedly gained from new instructional technologies that decenter instructional authority. For library instruction and information literacy purposes, we are left with a set of conditions in which information literacy instruction seems to have turned back to narrow skill development (tool use, identifying misinformation, and so on) in service to the content delivery process, geared toward the goal of training users for the appropriate consumption of content.

Technochauvinistic attitudes are not actually distinct from these conditions; rather, they are a part of shaping the systems and practices of which said conditions are composed. Administrators and faculty who must deliver on the promises of higher education relative to student employability cannot risk falling behind the technological developments driving business, which readily incentivizes technochauvinism. As Bearman et al.'s (2023) critical review of the AI literature in higher education suggests, these attitudes are shaping and shaped by what we characterize above as a fundamentally techno-solutionist and techno-optimistic discourse. That discourse does not just shape the institutions and practices of higher education—it constructs the roles of instructors and students in ways that constrain expectations, choices, and behaviors. Because of the urgency of responding to technological change (or so current institutional thinking goes), students must acquire a specific set of necessary skills; "without the necessary skills, students will cede agency and authority over what and how they learn" (Bearman et al. 2023, 379). This apparent necessity also shapes how and what students are taught in ways that have less to do with information content and more to do with a wide variety of other factors, such as employability in a workforce also conditioned on new technologies.

Instructors constrained by material conditions shaped in part by technochauvinist discourses in higher education are therefore put in the difficult position of having to adapt their content knowledge and considerable experience with sharing that knowledge to a wildly different domain of pedagogical expertise. This may in turn drive at least some of those instructors into the same corner their students already occupy: techno-solutionism shaped by naive (or uninformed or inexpert) techno-optimism, either on their own part or on the part of the administrators who manage their employment. Faculty concerns about assessment in an AI-mediated educational experience may then become decoupled from learning, as the technology itself becomes the fulcrum on which the student experience of education turns (for information, for discipline, for institutional income, for workforce preparation, and so on).

The ultimate effect of these constraints is something like our student's meltdown. It is also the instructor's conundrum: How, with limited time and resources, is it possible to break out of the technochauvinistic bind in such a way as to create a meaningful, measurably successful educational experience that does not simply reduce instruction to tool training or abandon the student to become, in effect, a chatbot-taught autodidact doomed to be misinformed?

While it is tempting at this point in the discussion to return to the literature on best practices for AI-supplemented pedagogy for information literacy, it is not productive to do so without first assessing that literature in light of its participation in technochauvinist discourses in the academy. Put more simply: before we can make a serious effort to teach information literacy on ChatGPT's turf, we should probably take the time to decide whether we want to cede the home field advantage to the AI without questioning the conditions under which we do so. This requires more than teaching the tools (even when we teach them critically); it requires addressing the material conditions of instruction and the processes of knowledge production that shape and are shaped by the technology in question.

Some Notes on an Anti-technochauvinist Future

What does it look like, in the end, to address the material conditions of instruction as they are shaped by technochauvinism in library instruction? That is, how can library instruction begin to be anti-technochauvinist? We suggest a few important pieces of the process:

  • Know your audience. Start to review your assignments and activities through the eyes of a techno-solutionist. What are you actually prompting, and how would a student use AI to do that work? What would have to change to circumvent this response, and why? How might you draw students' attention to their own orientation toward tech solutions and make them approach it critically?

  • Undermine the most problematic aspects of students' techno-optimism by encouraging them to be critical of AI output. The most important question to ask them: How do you know that the output you are getting is accurate, and how would you check? Try something like the Google Pollution Exercise, for example (Bernhardt 2024), which illustrates in real time the shortcomings of pedagogical AI for research purposes.

  • Understand your intended outcomes relative to that techno-solutionist audience and return to the lessons learned from backward design. What is this assignment supposed to accomplish? What is the point of the work assigned, relative to what students will actually need to do going forward?

  • Lean into the "Framework for Information Literacy," taking Scholarship as Conversation as the hook on which the rest of the framework hangs. The goal is to engage students and faculty alike in a conscious and deliberate understanding of how scholarship is actually produced and how we work with it. This shifts attention away from the tech and toward the social systems in which knowledge is actually produced.

  • Select teaching options and assignments that grant students more agency over the work they do. The Open Pedagogy movement is a good place to start exploring methods for empowering students to actively participate in, and even take control of, their learning environment. See, for example, Robin DeRosa's open textbook project, Interdisciplinary Studies: A Connected Learning Approach (2016), in which faculty and students worked together to produce their own textbook.

Ultimately, the motto of anti-technochauvinistic library instruction comes from Meredith Broussard herself: "If we give up the idea that it is possible to create a machine that does all the work for us, we can design systems in which the machine does a lot of the work but meaningful human work and meaningful human interaction are prioritized" (2019). If we stick to that motto, it becomes easier to see that the point of education itself has not changed, only our attitude toward how best to reach it.

Laura M. Bernhardt

Laura M. Bernhardt (PhD, MLIS) is a research and instruction librarian in the David L. Rice Library at the University of Southern Indiana and a former philosophy professor. Her research focuses on information literacy, ethics, and aesthetics (especially the philosophy of music and the philosophy of popular culture).

Becca Neel

Becca Neel (MLS) is the digital library administrator for the Levy Library at the Icahn School of Medicine at Mount Sinai. She is also pursuing a doctoral degree in education, focusing on high-school-to-college information literacy transitions.

References

Aithal, Shubhrajyotsna, and P. S. Aithal. 2023. "Effects of AI-Based ChatGPT on Higher Education Libraries." International Journal of Management, Technology, and Social Sciences 8 (2): 95–108. https://doi.org/10.2139/ssrn.4453581.
AlAfnan, Mohammad Awad, Samira Dishari, Marina Jovic, and Koba Lomidze. 2023. "ChatGPT as an Educational Tool: Opportunities, Challenges, and Recommendations for Communication, Business Writing, and Composition Courses." Journal of Artificial Intelligence and Technology 3 (2): 60–68. https://doi.org/10.37965/jait.2023.0184.
Bahroun, Zied, Chiraz Anane, Vian Ahmed, and Andrew Zacca. 2023. "Transforming Education: A Comprehensive Review of Generative Artificial Intelligence in Educational Settings Through Bibliometric and Content Analysis." Sustainability 15 (17): 12983. https://doi.org/10.3390/su151712983.
Bearman, Margaret, Juliana Ryan, and Rola Ajjawi. 2023. "Discourses of Artificial Intelligence in Higher Education: A Critical Literature Review." Higher Education 86 (2): 369–85. https://doi.org/10.1007/s10734-022-00937-2.
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" In FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. Association for Computing Machinery. https://doi.org/10.1145/3442188.3445922.
Bernhardt, Laura M. 2024. "Digital Literacy as an Environmental Ethics of Information: A Case Study for the Age of Large Language Models." New Directions for Teaching & Learning 2024 (180). https://doi.org/10.1002/tl.20609.
Boehme, Ginny, Stefanie Hilles, Katie Gibson, and Roger Justus. n.d. "Harnessing Pandora's Box: At the Intersection of Information Literacy and AI." ACRL Framework for Information Literacy Sandbox. Accessed February 29, 2024. https://sandbox.acrl.org/library-collection/harnessing-pandoras-box-intersection-information-literacy-and-ai.
Broussard, Meredith. 2018. Artificial Unintelligence: How Computers Misunderstand the World. MIT Press.
Broussard, Meredith. 2019. "Letting Go of Technochauvinism." Public Books (blog), June 17. https://www.publicbooks.org/letting-go-of-technochauvinism/.
Crompton, Helen, and Diane Burke. 2023. "Artificial Intelligence in Higher Education: The State of the Field." International Journal of Educational Technology in Higher Education 20 (1): 22. https://doi.org/10.1186/s41239-023-00392-8.
Darvishi, Ali, Hassan Khosravi, Shazia Sadiq, Dragan Gašević, and George Siemens. 2024. "Impact of AI Assistance on Student Agency." Computers & Education 210:104967. https://doi.org/10.1016/j.compedu.2023.104967.
DeRosa, Robin. 2016. Interdisciplinary Studies: A Connected Learning Approach. Rebus Community. https://press.rebus.community/idsconnect/.
Duong, Cong Doanh, Trong Nghia Vu, Thi Viet Nga Ngo, Ngoc Diep Do, and Nhat Minh Tran. 2024. "Reduced Student Life Satisfaction and Academic Performance: Unraveling the Dark Side of ChatGPT in the Higher Education Context." International Journal of Human-Computer Interaction, online, 1–16. https://doi.org/10.1080/10447318.2024.2356361.
Formanek, Matus. 2024. "Exploring the Potential of Large Language Models and Generative Artificial Intelligence (GPT): Applications in Library and Information Science." Journal of Librarianship and Information Science. https://doi.org/10.1177/09610006241241066.
Gibson, Jonathan. 2017. "Beyond the Essay? Assessment and English Literature." In Teaching Literature, edited by Ben Knights. Palgrave Macmillan UK. https://doi.org/10.1057/978-1-137-31110-8_7.
Gimpel, Henner, Kristina Hall, Stefan Decker, et al. 2023. "Unlocking the Power of Generative AI Models and Systems Such as GPT-4 and ChatGPT for Higher Education: A Guide for Students and Lecturers." Working Paper 02-2023. Hohenheim Discussion Papers in Business, Economics and Social Sciences. Universität Hohenheim, Fakultät Wirtschafts- und Sozialwissenschaften. https://www.econstor.eu/handle/10419/270970.
Grassini, Simone. 2023. "Shaping the Future of Education: Exploring the Potential and Consequences of AI and ChatGPT in Educational Settings." Education Sciences 13 (7): 692. https://doi.org/10.3390/educsci13070692.
Habgood-Coote, Joshua. 2023. "Deepfakes and the Epistemic Apocalypse." Synthese 201 (3): 103. https://doi.org/10.1007/s11229-023-04097-3.
Harker, Michael. 2014. The Lure of Literacy: A Critical Reception of the Compulsory Composition Debate. State University of New York Press.
Hashmi, Nada, and Anjali S. Bal. 2024. "Generative AI in Higher Education and Beyond." Business Horizons 67 (5): 607–14. https://doi.org/10.1016/j.bushor.2024.05.005.
Holland, Anna, and Constantin Ciachir. 2024. "A Qualitative Study of Students' Lived Experience and Perceptions of Using ChatGPT: Immediacy, Equity and Integrity." Interactive Learning Environments, online, 1–12. https://doi.org/10.1080/10494820.2024.2350655.
Houston, Aileen B., and Edward M. Corrado. 2023. "Embracing ChatGPT: Implications of Emergent Language Models for Academia and Libraries." Technical Services Quarterly 40 (2): 76–91. https://doi.org/10.1080/07317131.2023.2187110.
Hughes, James. 2021. "The Deskilling of Teaching and the Case for Intelligent Tutoring Systems." Journal of Ethics and Emerging Technologies 31 (2): 1–16. https://doi.org/10.55613/jeet.v31i2.90.
Imran, Muhammad, and Norah Almusharraf. 2023. "Analyzing the Role of ChatGPT as a Writing Assistant at Higher Education Level: A Systematic Review of the Literature." Contemporary Educational Technology 15 (4): ep464. https://doi.org/10.30935/cedtech/13605.
Kumar, Rahul, Sarah Elaine Eaton, Michael Mindzak, and Ryan Morrison. 2020. "Academic Integrity and Artificial Intelligence: An Overview." In Handbook of Academic Integrity, edited by Sarah Elaine Eaton. Springer Nature. https://doi.org/10.1007/978-981-287-079-7_153-1.
Lin, Xi, Roy Chan, and Shyam Sharma. 2024. "The Impact of Artificial Intelligence (AI) on Global Higher Education: Opportunities and Challenges of Using ChatGPT and Generative AI." In ChatGPT and Global Higher Education: Using Artificial Intelligence in Teaching and Learning, edited by Xi Lin, Roy Y. Chan, Shyam Sharma, and Krishna Bista. STAR Scholars Press. https://doi.org/10.32674/rh27qv16.
Madunić, Jelena, and Matija Sovulj. 2024. "Application of ChatGPT in Information Literacy Instructional Design." Publications 12 (2): 11. https://doi.org/10.3390/publications12020011.
Markos, Angelos, Jim Prentzas, and Maretta Sidiropoulou. 2024. "Pre-service Teachers' Assessment of ChatGPT's Utility in Higher Education: SWOT and Content Analysis." Electronics 13 (10): 1985. https://doi.org/10.3390/electronics13101985.
Meakin, Lynsey A. 2024. "Embracing Generative AI in the Classroom Whilst Being Mindful of Academic Integrity." In Academic Integrity in the Age of Artificial Intelligence, edited by Saadia Mahmud. IGI Global. https://doi.org/10.4018/979-8-3693-0240-8.ch004.
Miles, Paula J., Martin Campbell, and Graeme D. Ruxton. 2022. "Why Students Cheat and How Understanding This Can Help Reduce the Frequency of Academic Misconduct in Higher Education: A Literature Review." Journal of Undergraduate Neuroscience Education 20 (2): A150–60. https://www.funjournal.org/volume-20-issue-2-winter-2022/miles-et-al-june-202a150-a160/.
Montenegro-Rueda, Marta, José Fernández-Cerero, José María Fernández-Batanero, and Eloy López-Meneses. 2023. "Impact of the Implementation of ChatGPT in Education: A Systematic Review." Computers 12 (8): 153. https://doi.org/10.3390/computers12080153.
Office of Educational Technology. 2023. Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations. US Department of Education. https://www.govinfo.gov/app/details/GOVPUB-ED-PURL-gpo229415#.
Ovetz, Robert. 2017. "Click to Save and Return to Course: Online Education, Adjunctification, and the Disciplining of Academic Labour." Work Organisation, Labour and Globalisation 11 (1): 48–70. https://doi.org/10.13169/workorgalaboglob.11.1.0048.
Romero, Margarida, Jonathan Reyes, and Panos Kostakos. 2024. "Generative Artificial Intelligence in Higher Education." In Creative Applications of Artificial Intelligence in Education, edited by Alex Urmeneta and Margarida Romero. Springer Nature. https://doi.org/10.1007/978-3-031-55272-4_10.
Rudolph, Jürgen, Sampson Tan, and Shannon Tan. 2023. "ChatGPT: Bullshit Spewer or the End of Traditional Assessments in Higher Education?" Journal of Applied Learning & Teaching 6 (1). https://doi.org/10.37074/jalt.2023.6.1.9.
Scott-Branch, Jamillah, Robert Laws, and Paschalia Terzi. 2023. "The Intersection of AI, Information and Digital Literacy: Harnessing ChatGPT and Other Generative Tools to Enhance Teaching and Learning." Paper presented at the 88th International Federation of Library Associations and Institutions General Conference and Assembly, Rotterdam. https://repository.ifla.org/handle/20.500.14598/2788.
Sila, Carolyna Anak, Christopher William, Melor Md Yunus, and Karmila Rafiqah M. Rafiq. 2023. "Exploring Students' Perception of Using ChatGPT in Higher Education." International Journal of Academic Research in Business and Social Sciences 13 (12): 4044–54. https://doi.org/10.6007/IJARBSS/v13-i12/20250.
Stojanov, Ana, Qian Liu, and Joyce Hwee Ling Koh. 2024. "University Students' Self-Reported Reliance on ChatGPT for Learning: A Latent Profile Analysis." Computers and Education: Artificial Intelligence 6:100243. https://doi.org/10.1016/j.caeai.2024.100243.
Wiggins, Grant P., and Jay McTighe. 2005. Understanding by Design. Association for Supervision and Curriculum Development.
Willenborg, Amber, and Robert Detmering. 2025. "'I Don't Think Librarians Can Save Us': The Material Conditions of Information Literacy Instruction in the Misinformation Age." College & Research Libraries, ahead of print. https://ir.library.louisville.edu/faculty/949/.
Williamson, Ben. 2024. "The Social Life of AI in Education." International Journal of Artificial Intelligence in Education 34 (1): 97–104. https://doi.org/10.1007/s40593-023-00342-5.
Wu, Rong, and Zhonggen Yu. 2024. "Do AI Chatbots Improve Students Learning Outcomes? Evidence from a Meta-analysis." British Journal of Educational Technology 55 (1): 10–33. https://doi.org/10.1111/bjet.13334.
Zhang, Bo. 2024. "The Influence of ChatGPT on Student Learning Outcomes in Higher Education: A Meta-analysis of the Initial Empirical Literature." In ChatGPT and Global Higher Education: Using Artificial Intelligence in Teaching and Learning, edited by Xi Lin, Roy Y. Chan, Shyam Sharma, and Krishna Bista. STAR Scholars Press. https://doi.org/10.32674/rh27qv16.
Zhang, Shunan, Xiangying Zhao, Tong Zhou, and Jang Hyun Kim. 2024. "Do You Have AI Dependency? The Roles of Academic Self-Efficacy, Academic Stress, and Performance Expectations on Problematic AI Usage Behavior." International Journal of Educational Technology in Higher Education 21 (1): 34. https://doi.org/10.1186/s41239-024-00467-0.
