The Human and Machine1
OpenAI, ChatGPT, Quillbot, Grammarly, Google, Google Docs, & humans*

Abstract

With the release of generative text and image-based tools like Midjourney and ChatGPT in 2022, discussions about artificial intelligence (AI) and its impact on design, design education, and research have moved from the periphery to the forefront. These powerful tools, often open-access beta versions, have transformed speculative dialogue into a present reality. Their sophisticated and intuitive user interfaces facilitate the speedy and proficient generation of text- and image-based content, enabling designers, educators, and learners to simultaneously discover the dangers and possibilities of generative AI technologies. To explore the unique powers of both generative AI and human cognition, the author uses autoethnography, AI writing assistants, and generative AI technology to develop a story of practice. The narrative is informed by, and ultimately supports, the scholarly literature that emphasizes the need for humans to take responsibility for the equitable and ethical use of AI. This includes initiating and guiding AI systems, critically evaluating their responses, and reformulating, editing, and verifying outputs to address factual inaccuracies, misleading information, or offensive and biased content.

Keywords

Artificial Intelligence, Meta Intelligence, Autoethnography, Practice-based Research, Uncertainty


Introduction

The COVID-19 pandemic has had a profound and lasting impact on nearly all aspects of our lives. Thanks to applications such as Zoom, it is possible to connect with people from different towns, cities, and countries. As a lecturer in the design department at Monash University, I have spent countless hours experimenting with these virtual communication tools and learning in real time how they enhance or diminish messages and experiences. Pedagogical experience during the pandemic has reinforced my understanding that the quality of interaction is entangled with several unpredictable and dynamic variables. In the context of a pandemic, this meant participants navigated learning from different time zones and locations, whilst dealing with unavoidable home-life interruptions, isolation, and unstable internet connections. Being part of this unusual collective experience reinforced the need for humans to make contextually relevant decisions in response to unique individual circumstances. Using virtual video conferencing platforms daily across personal and professional contexts equipped us with a deep, embodied understanding of the technology and its affordances in specific contexts.

Midway through 2022, it felt like we were beginning to emerge from the disruptions caused by the COVID-19 pandemic, only to enter a new period of change with the rise of generative AI technology. In August of that year, I was invited to join a small group of academics to discuss the impacts of AI on the sector and subsequently devise a position. My desire to participate in whatever capacity I could stemmed from the understanding that there's a distinction between knowing that (theory) and knowing how (practice). Understandably, I was apprehensive, as my experience in this field was limited to watching AlphaGo, a documentary that follows Google's DeepMind AI program as it competes against Lee Sedol, the world's top-ranked Go player. (Go is a strategic and complex two-player board game from ancient China, where players aim to control more territory than their opponent.) I agreed to participate, knowing that uncertainty can be a generative force for imagination, experience, and action (Akama et al., 2018, 46). Uncertainty in this context brings with it new possibilities—it does not close down what might happen into predictive untruths, but rather opens up pathways of what might be next and enables us to creatively and imaginatively inhabit new worlds (Akama et al., 2018, 3). Curiosity, combined with the desire to comprehend or keep up with the other academics, led me to follow related posts on the PhD-Design listserv, watch supporting YouTube videos, and read any articles related to AI published in The Guardian. As a way to better understand AI's implications for authorship, content creation, bias, misinformation, and human attention, I began experimenting with different levels of AI to prepare this submission.

By utilizing generative AI technologies, I aim to discover ways to incorporate them that do not constitute cheating. I've been frustrated by the media's intense focus on plagiarism in education, and seek to sense-check the limits of the system in which I teach and learn. My encounters with generative AI technology are blatantly superficial, but by incorporating my personal, emotional, and professional observations, I hope to situate these interactions in a larger context, amplifying what it means to be a human learner. Underpinning this exploration is my mindset, which is curious, prioritizes relationships, values multiple perspectives from experts and those with lived experience, learns through doing, and seeks to practice in ways that are mutually beneficial.

Using autoethnography, I engage in a process of becoming with the technologies and, in doing so, show ways of embodying change. The narrative weaves together the mutually influential relationship between personal experience, the happenings of culture, and emerging AI technology. It is a tracing of practice, a way of thinking with and through—asking questions about and acting on—these experiences and happenings. In the context of communication futures, theory is not a static body of knowledge or an autonomous set of ideas, objects, or practices. Instead, theorizing is an ongoing, fluid process that links the concrete and the abstract, thinking and acting, aesthetics and criticism (Holman Jones 2016, 229).

This single-authored piece provides an example of academic writing developed with unwavering technological assistance and generative AI technology. What remains hidden from the reader is the back-and-forth interaction between the author and the machine. These interactions typically involve the author asking the machine to comment on fluency, allowing the author to make micro-adjustments to spelling, grammar, and sentence structure. In the final manuscript, this level of interaction is not visible, as it would disrupt the reader's flow and comprehension, as well as obscure the author's intent. There are instances, however, where the dialogue is less entangled, allowing the author to italicize their prompts to the machine and highlight the machine's responses.

The Story of Practice

Although a contribution to this special issue of Visible Language—a written medium—was appealing, the thought of having to piece together a coherent written narrative made me nervous. I'm a visual communicator and design educator who finds writing challenging for reasons beyond the scope of this submission. Rather than starting a fresh document for this article, my approach is to add my thoughts on communication futures to the document I've been using for my PhD mid-candidature, as I'm hoping the two tasks will become entwined. However, Grammarly (a browser extension that gives me real-time writing suggestions) quickly interrupts, suggesting, "This document looks a bit long. Do you want Grammarly to check this document, or are you just reading? Yes, check it!" I question my approach but persevere, as Grammarly doesn't understand the context or the logic driving it. (Figure 1. Grammarly Pop-up)

I have already complicated this project before I have even started. I have dedicated time to writing a series of paragraphs on the importance of learning from uncertainty, collaboration, critical thinking, and humanity, as I thought these would be helpful at some point, but I feel like I need to start over when I hear about the free release of OpenAI's ChatGPT from my partner, who is a web developer. In the news the next day, I read about the incredible strides being made in AI, and the article links me directly to the chat.openai.com website. The landing page provides a very succinct introduction, and the user interface is intuitive, allowing me to jump straight into the playground space where I enter the prompt, "Write a paragraph about Communication Futures," and in a few seconds it answers:

Communication Futures is an exciting area of study that looks at how communication technologies are likely to evolve over time. It looks at the impact of new technologies on communication and the implications for our lives in the years to come. This could include anything from the development of virtual reality and augmented reality to the further integration of voice assistants into our lives. It also looks at the potential for increased personalization, automation, and artificial intelligence in communication and how these might shape the way we interact with each other in the future. Communication Futures is a rapidly evolving field with the potential to fundamentally change the way we communicate with each other.

Its speed leaves me astounded.

Figure 1. Screenshot of Grammarly browser extension pop-up


So I command, “Write a sentence including the words I’m gobsmacked, excited, petrified, crushed” to which it responds,

I’m gob-smacked, excited, petrified, and crushed all at the same time to be starting my first job.

The response amplifies the inadequacies of my prompt, leaving me underwhelmed and bemused. To reset my thinking, I take a deep breath—but before I can complete typing the sentence, Google suggests I add the word "breath" after "deep." I take another deep breath and continue. I already have a terrible habit of not finishing sentences; is technology helping or making my situation worse?

Feeling swept away by initial interactions with ChatGPT, I reel back my thinking and consider the other tools in use. A Google Doc is the stage for this exploration, and by design, it's equipped with autocorrect and predictive text capabilities. These intelligent features have become increasingly commonplace, offering levels of reassurance and complacency, yet discerning users understand that human proofreaders are better able to spot incorrect names, dates, or words with missing letters. For the purposes of methodological exploration, I've more recently installed the AI writing assistant Quillbot through Google Workspace Add-ons and left the free Grammarly extension enabled. When I began this journey, I thought that would be the extent of my experimentation. However, with the release of ChatGPT, a new level of assistance is available, and interactions with it feel different—more natural, conversational, and initially more human. It creates fluid and disorienting interactions, where users can easily forget that they are interacting with something entirely new. Disarming and delighting users, ChatGPT can appear to know everything and nothing in the same sentence. I ask ChatGPT to explain what makes it different to other generative AI writing assistants.

Specifically, OpenAI ChatGPT is different to other forms of AI writing assistants in several key ways:

  1. Scale: ChatGPT is one of the largest language models currently available, with 175 billion parameters. This is significantly larger than previous language models such as GPT-2, which had only 1.5 billion parameters.

  2. Unsupervised learning: ChatGPT was trained on a massive corpus of text using unsupervised learning techniques. This means that it was not specifically designed to perform any particular task, but rather learned to generate human-like text by analyzing patterns in the input data.

  3. Generative capabilities: Unlike previous writing assistants which were primarily designed to correct or suggest improvements to existing text, ChatGPT is capable of generating new text on its own. It can generate text in a wide range of styles and formats, from poetry and fiction to technical reports and news articles.

  4. Natural language processing: ChatGPT is also able to process natural language inputs and respond with natural language outputs, which allows for more fluid and intuitive interactions with the user.

It is points 3 and 4 that leave me enamored and anxious about what might unfold. I conjure up memories of being asked to design a publication before having any content and subsequently designing into the document the need for headings, introductions, and pull quotes using placeholder text—with AI text generators I could design a first draft populated with plausible-sounding content. I imagine scenarios, intentional and accidental, where the AI-generated text ends up as the message. As a communication designer, I've been exposed to countless formulaic vision statements, strategy documents, and marketing messages, and I can't resist wondering how or if this might shift with the rise of generative AI text. Apologies, I continue being distracted by wonderings...

My intention with this work is to create a rich personal narrative situated in the present moment, which is December 2022. I don't intend for AI to write this piece for me, but I do want to use it as a tool to engage in dialogue and seek constructive critique and ongoing support. I don't feel guilty about constantly interrupting the machine with menial and mundane questions, as its capacity to assist seems unlimited given that it's not constrained by time. I recognize that the way I act with the machine would challenge the patience of any human. To ask a colleague to spend time reviewing and commenting on my written work when it's so underdeveloped would be unreasonable. These tangled ideas are perhaps better discussed in a context where I can gauge a colleague's enthusiasm for the topic.

There is a politeness to my human-to-human interactions, and I admit this etiquette is missing from my relationship with the machine. I've instinctively adjusted my cultural mannerisms—I am direct with the machine and don't use the words "please" or "thank you." However, I am reminded of the joy it brought me to see the elderly participants in the short film Smart Homes for Seniors say "please" and "thank you" to smart devices like Google Assistant (Pink 2021). If the machine is learning from me and if I value these forms of etiquette, I ought to begin saying "please" and "thank you." The machine's ability to perform multiple tasks at great speed makes it easy to forget there might be a cost to my actions. I am certain there's a price—but what, to whom, and how that cost is distributed is complex. I wonder who is impacted by my interaction with the machine: those without access to the internet, super users of the technology, those from a minority language group, the staff of OpenAI, or the humans contracted by OpenAI to label data as offensive, etc. I've barely touched the surface, but I decide to switch modes.

I copy the first two paragraphs of this piece into the OpenAI ChatGPT window and ask it to comment. It replies,

The introduction for the submission to Visible Language has a strong focus on the writer’s personal experiences and motivations for writing the submission. It also touches on the writer’s approach to writing and their interest in using AI technologies. Overall, it seems like a unique and engaging way to begin the submission.

This response encourages me to continue, but I question the basis of the praise, so I decide to ask What was the basis for your feedback? Was your comment more general in nature, or did you compare my writing to texts from Visible Language? It responds,

“An error occurred. If this issue persists, please contact us through our help center at help.openai.com.”

Now that I've established the basic narrative and tone for this piece, I ask a respected colleague to read through a half-finished draft, and that night I wake up at 2 a.m. feeling embarrassed about the work I shared. At this hour, it is as though I have a bird's-eye view of my work, and what I see is the work of an amateur. The next day my colleague diligently and respectfully responds, and to my horror, they've spent their precious time marking up spelling and grammar errors in the Google Doc. They also get in touch via email, encouraging me to continue and show curiosity by suggesting I ask the machine some more questions about ethics and implications. The very next night I wake at 1:30 a.m., dreaming that my colleague had censored their feedback, omitting to say that my writing jumped erratically from thought to thought, and that rewording the same sentence multiple times does not justify a paragraph. I know my thinking would benefit from more diverse perspectives, yet I've refrained from sharing because I want the process to be reciprocal. Sending emails to colleagues during a university closedown period seems unkind. I read online that the design journal Visible Language offers coaching, and I wish I had been in this position a few months ago. I have no choice but to accept the present circumstances and recognize that if the clock were wound back, this piece would have been different, as ChatGPT wasn't publicly released back then.

A few days later, I log into the ChatGPT interface and notice a series of prompts I'd forgotten. One in particular catches my eye: "While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice." From here forward, I adopt a "Zero Trust" approach to information that is based on the same premise as Zero Trust Networking, which requires each user, device, application, or network element to be authenticated and authorized before they can access resources. Technically, this involves multi-factor authentication and the micro-segmentation of networks to prevent lateral movement.
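For readers unfamiliar with the networking side of the analogy, here is a minimal sketch in Python of the "never trust, always verify" rule: every request is authenticated with two factors and authorized against a per-segment allow-list, regardless of where it originates. The names (Request, SEGMENTS, authorize) are hypothetical, invented for illustration only; this is a sketch of the principle, not a production implementation.

# A minimal sketch of "never trust, always verify": every request is
# authenticated and authorized, no matter where it comes from.
from dataclasses import dataclass

# Micro-segmentation: each resource sits in its own segment with its own
# allow-list, so a compromised account cannot move laterally.
SEGMENTS = {
    "finance-db": {"alice"},
    "design-files": {"alice", "wendy"},
}

@dataclass
class Request:
    user: str
    password_ok: bool  # first factor
    totp_ok: bool      # second factor (multi-factor authentication)
    resource: str

def authorize(req: Request) -> bool:
    # Verify identity on every request; "inside" traffic gets no free pass.
    if not (req.password_ok and req.totp_ok):
        return False
    # Authorize only against the requested segment's allow-list.
    return req.user in SEGMENTS.get(req.resource, set())

print(authorize(Request("wendy", True, True, "design-files")))  # True
print(authorize(Request("wendy", True, True, "finance-db")))    # False: wrong segment
print(authorize(Request("alice", True, False, "finance-db")))   # False: second factor failed

The micro-segmentation is the detail that prevents lateral movement: a verified identity in one segment earns no access to any other.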

I ask OpenAI, What are the similarities between Zero Trust Networking and Zero Trust Information, to which it replies:

The main similarity between Zero Trust Networking and Zero Trust Information is that they both involve the concept of "never trust, always verify." This means that each user, device, application, or network element must be authenticated and authorized before they are allowed access to resources. Both approaches involve multi-factor authentication, the use of encryption, and the segmentation of networks to prevent lateral movement. Additionally, both rely on the use of identity and access management (IAM) solutions to manage user access.

The first sentence captures the key point but lacks detail, and I am empowered to delete the final two sentences as they are muddled and assume that all information is digital. You may have noticed that I incorporated some of OpenAI's response into my explanation of Zero Trust Networking, and as a non-technical person, I am grateful for some of the terminology. However, each time I reread the technical explanation of Zero Trust Networking I feel uneasy, as I have no affinity with this terminology, and the specificity leaves me feeling technically inept. Yet I feel differently reading the explanation on the blog Stratechery—here Ben Thompson, a business, technology, and media analyst, describes Zero Trust Information by modifying the traditional castle-and-moat analogy. A zero trust approach to information does not assume that everything inside the moat is trusted; he suggests shifting everything inside the moat, including the castle, to outside the moat (Thompson, 2020). This takes the position that all information needs to be verified before it can be trusted, and this falls within my capabilities.

Technically, what I have done is not "plagiarism," in the sense that all AI-generated content is unique and not copied directly from a particular source. This means its contribution would likely go undetected by current text-based plagiarism services like Turnitin. Given that there's presently no clear way to mark AI-generated content, questions surrounding authorship remain pressing. With AI-produced content being indiscernible to the human eye, discussions will continue to emerge on how such content might be marked with a form of invisible watermark, or how cryptography-based solutions might be adopted (Wiggers, 2022).
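As a toy illustration of what an invisible watermark could look like—emphatically not how OpenAI or any vendor actually marks text, which remains undisclosed—a short binary signature can be hidden in zero-width Unicode characters that render invisibly. The signature value and function names below are invented for the example:

# Toy invisible watermark: hide a binary signature in zero-width characters.
ZWNJ = "\u200c"  # zero-width non-joiner -> encodes bit 0
ZWJ = "\u200d"   # zero-width joiner     -> encodes bit 1
SIGNATURE = "1011"  # hypothetical per-model signature

def embed(text: str, signature: str = SIGNATURE) -> str:
    """Insert one invisible character per signature bit after the first word."""
    hidden = "".join(ZWJ if bit == "1" else ZWNJ for bit in signature)
    first_space = text.find(" ")
    if first_space == -1:
        return text + hidden
    return text[:first_space] + hidden + text[first_space:]

def detect(text: str) -> str:
    """Recover any embedded signature bits from the text."""
    return "".join("1" if ch == ZWJ else "0" for ch in text if ch in (ZWJ, ZWNJ))

stamped = embed("Communication Futures is an exciting area of study.")
print(stamped == "Communication Futures is an exciting area of study.")  # False: texts differ invisibly
print(detect(stamped))  # "1011"

A scheme this naive is fragile—copy-pasting through a tool that normalizes Unicode strips the signature—which is partly why the discussions Wiggers (2022) reports look toward cryptography-based approaches instead.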

Plagiarism is of course a key concern, but so too is the potential spread of misinformation. With AI becoming easier to use and more advanced in the future, there will be a proliferation of information. Larger quantities of information mean the potential for biased worldviews to be perpetuated, or the spread of misinformation, both of which make the roles of verification, editing, and criticality more pressing. However, with the rise and spread of more problematic information, it also becomes probable that valuable new information and understandings will surface. For example, Fan Hui, a European Go player, was the first professional Go player to be defeated 5–0 by AlphaGo. He subsequently spent many months playing AlphaGo, and his world ranking soared. Fan Hui's comments highlighted how new paradigms might emerge from human-AI collaboration:

"Unlike the way I—and all other human players—approach Go, its decisions are unencumbered by the tradition, theory, and teaching of human play. Instead, it learns the game for itself, giving it the opportunity for fresh thinking and leading to a unique 'free spirited' style, which in turn has unshackled human players from tradition and allowed us to also think differently about the game."

The potential for human-AI collaboration to unlock new possibilities and disrupt existing systems is an exciting prospect. However, to fully harness the benefits of this partnership, it is essential to approach this technology with a balanced and nuanced perspective. This includes recognizing both the strengths and limitations of AI, as well as understanding the varied impact it may have on individuals and society as a whole.

I need to be honest—as much as this human–machine relationship empowers me, it disarms me. My need to pause becomes overwhelming. My head feels dense with fleeting and scattered thoughts, and I wonder if I'm unwell. I confess that my very initial experiments, where I'd copy paragraphs I'd written into Quillbot and ask it to paraphrase the text for fluency, probably didn't help my situation. The process was fleetingly interesting—perhaps akin to experiencing Photoshop filters for the first time. However, I desperately need to get my thoughts in order now that the submission deadline is looming. I need to stop worrying about the various colored underlines that decoratively mark up my text and all the suggestions Quillbot makes in the sidebar. I take decisive action, disabling most of the technological assistance for now. It feels plausible that I've developed an attention deficit trait, a newly recognized neurological phenomenon that arises in response to hyperkinetic environments (Hallowell, 2005, 54–62). This task requires a level of focus I'm not accustomed to, so I pause and do five minutes of mindfulness. My behavior is out of the ordinary—I don't have a regular mindfulness practice, despite my desire to and all the evidence pointing to its benefits. Before getting started again, I listen to a Calming Acoustic playlist available through Spotify. I momentarily enjoy the respite from the onslaught of my thoughts and various digital notifications.

In response to my struggles with not knowing and the stress it causes, I seek order and look to those who claim to know. How are people more familiar with AI approaching human-AI collaboration for writing tasks? Noah Smith and roon, who blog about the future of work in the age of AI, describe the sandwich workflow, which is a three-step process. First, a human has a creative impulse and gives the AI a prompt, which then generates a menu of options. The human reviews and evaluates the options, reformulates, decides, edits, verifies, and adds any touches they like. In this scenario, the human is the controlling agent, and the machine is the understudy. This sounds much simpler than how I have approached collaborating, but this approach only seems applicable now that I have surfaced my intent with the practice (a brief code sketch of the workflow follows the machine's list below). I ask the OpenAI playground to propose a series of options, to which it replies:

  1. Automated summarization: Use a machine learning system to analyze and summarize the content of an academic paper into a shorter form. This summary can then be used as the starting point for a human researcher to expand upon.

  2. Automated data analysis: Use a machine learning system to analyze large datasets and identify patterns and correlations. This data can then be used by a human researcher to develop new hypotheses and theories.

  3. Automated question generation: Use a machine learning system to generate questions based on an academic paper's content. This can then be used by a human researcher to craft further questions and explore the paper's topic in more depth.

  4. Automated essay writing: Use a machine learning system to generate an initial essay based on an academic paper's content. This essay can then be used by a human researcher to further refine it and ensure accuracy.

  5. Automated reference generation: Use a machine learning system to generate a list of references based on an academic paper's content. This list can then be used by a human researcher to quickly locate and find relevant references.

There is a level of specificity to the AI's suggestions, which is useful, but the machine assumes that humans provide or know the data source.
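To make the sandwich workflow concrete, here is a minimal sketch using the OpenAI Python library as it existed in late 2022 (the pre-1.0 Completion API); the model name, prompt, and selection step are illustrative assumptions rather than a record of my actual process:

# A sketch of the "sandwich" workflow: human prompt -> machine menu -> human edit.
import openai

openai.api_key = "YOUR-API-KEY"  # placeholder

# Step 1: the human's creative impulse, expressed as a prompt.
prompt = "Write a paragraph about Communication Futures."

# Step 2: the machine generates a menu of options (n=3 completions).
response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=200,
    n=3,
    temperature=0.9,
)
options = [choice.text.strip() for choice in response.choices]

# Step 3: the human—the controlling agent—reviews, decides, and edits.
for i, option in enumerate(options, start=1):
    print(f"Option {i}:\n{option}\n")
chosen = options[int(input("Which option will you edit? ")) - 1]
# ...the human then reformulates, verifies, and adds any touches they like.

In this division of labor the machine never has the last word; the human supplies intent at the top of the sandwich and judgment at the bottom.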

I reflect on the process I am working through and wonder about the knowns and unknowns of the technology I'm interacting with. I think about what I might learn from it and contemplate what it might learn about me. I'm envious of the machine's power to systematically perform tasks and crunch through masses of data with speed and vigor, and its ability to digest and store masses of content. Our learning styles seem polar opposites—reading and auditory processing aren't my strengths. I can't fathom the thought of reading through masses of literature because I know it wouldn't be retained or retrievable unless it were contextually relevant to something I was doing. Knowing I am a visual and kinesthetic learner, I continue tinkering with the OpenAI interface, and one thing that strikes me from playing with OpenAI and ChatGPT is the confidence it projects in its delivery. The responses sound good, irrespective of whether the content is factually correct. Maybe I can learn something from its unflappable confidence, as I tend to be self-deprecating. When I ask it to offer suggestions, or improve the clarity of one of my paragraphs, it removes that uncertainty from my words, combining the sentences to: As a visual communicator and design educator, my natural writing style is informed and approachable. I agree that the edits improve fluency, but erasing my admission that I find writing challenging, and removing my ambition to cultivate a writing style that is informative and approachable, imitates a level of confidence I do not have. I contemplate whether or not I should tone down my insecurities, but I decide that self-awareness of the behavior seems more important than changing it. The messages found in Luckin et al.'s (2022) paper validate my decision to continue exposing my vulnerability. To keep the original message intact, I am quoting it directly here:

“perhaps one of the most important features of human intelligence is that it involves our relationships with ourselves: our meta-intelligence. Humans are capable of learning to plan, monitor, and regulate their own thinking and action (metacognition), our knowledge and control of our own cognitive processes. They are also able to develop a finely tuned awareness of how they feel, as well as how others feel, and how these affect their knowing and learning (meta-emotion). We are also able to develop an awareness of our interactions with the world, including our social interactions and our physical and mental abilities, as we move through different settings, interactions, and experiences (meta contextual awareness). This ability to be self-aware and meta intelligent makes humans capable of accurately judging their own efficacy, something that is not available to any AI”

Humans possess a special level of intelligence that is demonstrated through our awareness of different situations and our capacity to interact with and transition between them. We are so accustomed to this interconnectedness that often it’s ignored, or its significance remains invisible (Luckin et al. 2022, 4).

Current circumstances make it clear that communication futures will almost certainly involve human–machine interaction, and humans will need to put measures in place to counterbalance the machine’s deficits. In our desire to discover, make, and learn with the machine, I share this gentle reminder: Don’t forget to embrace what makes you human—be discerning, and practice with care and curiosity knowing that your actions affect others. Rather than human versus machine, it can be human and machine.

UNESCO also recommends a "humanistic approach" as the overarching principle for AI in education, which includes protecting human rights, equipping people with skills for sustainable development and human–machine collaboration in life, learning, and work, and fostering human values required to develop and apply AI. The key recommendations of UNESCO emphasize that the use of AI should protect students' agency and social wellbeing, as well as empower teachers in their work of facilitating knowledge co-creation, human interaction, higher-order thinking, and human values (Carvalho et al. 2022, 2).

One way to help us feel more agency with respect to AI is to adopt a capability-building approach where we help each other question and assess the integrity of the role of AI within specific contexts. "AI readiness" recognizes that such contextualization is essential due to the multiple intricacies, sensitivities, and variations between different sectors and their settings, all of which impact the application of AI. To embrace such contextualization, AI readiness needs to be an active, participatory training process that aims to empower people to be more able to leverage AI to meet their needs (Luckin et al. 2022, 1).

Engaging in a reflexive autoethnographic journey has reminded me of who I am and surfaced opportunities for who I might become. This work was not about establishing a transferable framework, but rather sharing a version of the uncertain, partial, unfinished, and shifting world we are situated in (Holman Jones 2016, 231). This work illustrates how practice-based research and autoethnography might provide an accessible entry point for learners to study the potential benefits and dangers of emerging AI technologies. Accompanying this method with a collaborative design mindset permits individuals to show up as their whole selves. For me, that's as a designer and design educator, an amateur user of generative AI interested in cultivating a better understanding of the potential future implications for humanity and the field in which I practice. My story of practice emphasizes the importance of context, motivation, curiosity, and criticality, and serves to remind us that humans have rich and varied forms of intelligence such as logic, emotional knowledge, reasoning, creativity, and critical thinking. What seems crucial is that these forms of intelligence need to be practiced and nurtured so that they aren't lost altogether. In the face of generative AI's bravado is my ability to flow through states of confidence and self-deprecation—this is a precious form of human intelligence likely to be lost if AI were at the helm. I am tempted to share the finished story of practice with ChatGPT and ask for its opinion, but I don't. Instead, I ask, "Can you write an abstract for this piece?" to which it responds:

The message you submitted was too long, please reload the conversation and submit something shorter.

Wendy Ellerton
Monash University, Department of Design
wendy.ellerton@monash.edu

Wendy Ellerton is a lecturer in the Department of Design and a PhD candidate at Monash University. Her research explores transforming design practice and education using collaborative design methodologies. She has a Master's in Type and Media from the Royal Academy of Art (KABK) in The Netherlands and has worked with a number of international type foundries. Over the last 15 years she has practiced as a communication designer at Hofstede Design and Studio Round, and as a consultant to Future Days and Maythorpe. In 2014 she founded her own practice, Polar Space, to house a diverse design practice that spans identity, publication, and furniture design, education, and research.

Wendy has lectured on type design for the Australian Graphic Design Association and conducted design workshops internationally in Florence, Milan, Beijing and Riyadh. Her teaching practice encourages learners to develop processes and mindsets that prioritize relationships, use participatory means, share power and build capability.

References

Akama, Y., Pink, S., & Sumartojo, S. (2018). Uncertainty and possibility: New approaches to future making in design anthropology. London: Bloomsbury Academic.
Carvalho, L., Martinez-Maldonado, R., Tsai, Y-S., Markauskaite, L., & De Laat, M. (2022). How can we design for learning in an AI world? Computers and Education: Artificial Intelligence, 3. https://doi.org/10.1016/j.caeai.2022.100053
Hallowell, E.M. (2005). Overloaded circuits: Why smart people underperform. Harvard Business Review, 83(1), 54–62.
Hamera, J., Madison, D.S., & Pollock, D. (2005). The SAGE handbook of performance studies. SAGE Publications.
Holman Jones, S. (2016). Living bodies of thought: The “critical” in critical autoethnography. Qualitative Inquiry, 22(4), 228–237. https://doi.org/10.1177/1077800415622509
Google Arts & Culture. (n.d.). The story of AlphaGo. https://artsandculture.google.com/story/the-story-of-alphago-barbican-centre/kQXBk0X1qEe5KA?hl=en
Luckin, R., Cukurova, M., Kent, C., & du Boulay, B. (2022). Empowering educators to be AI-ready. Computers and Education: Artificial Intelligence, 3. https://doi.org/10.1016/j.caeai.2022.100076
Pink, S. (Director). (2021). Smart homes for seniors [Film]. Australia: Emerging Technologies Research Lab, Monash University.
Thompson, B. (2020). Zero trust information. Stratechery. https://stratechery.com/2020/zero-trust-information
Williamson, B., & Eynon, R. (2020). Historical threads, missing links, and future directions in AI in education. Learning, Media and Technology, 45(3), 223–235. https://doi.org/10.1080/17439884.2020.1798995

Footnotes

* Editors, Reviewer #1, Reviewer #2, Supervisors, a trusted colleague, and my anonymous web developer partner.

1. The title puts forward a proposition suggesting a format akin to an artwork label. In this scenario, technologies are treated like mediums, and the artist decides what should be included and what is left out.
