VISAP’15 © ISAST   doi:10.1162/LEON_a_01413   LEONARDO, Vol. 51, No. 1, pp. 00–00, 2018

20/X: VISUALITY, REPRESENTATION AND EPISTEMOLOGY IN THE AGE OF INTELLIGENT SEEING MACHINES

Shannon C. McMullen, Department of Art & Design | American Studies Program, Purdue University, West Lafayette, IN 47907, U.S.A. Email: .
Fabian Winkler, Department of Art & Design, Purdue University, West Lafayette, IN 47907, U.S.A. Email: .

See for supplemental files associated with this issue.

Submitted: 27 July 2016

Abstract
To understand, critique and shape the impact of machines that can see in ways exceeding human capabilities, humans may need to learn to see like machines, to understand their abstractions and categorizations. The installation 20/X explores visuality, representation and epistemology in the age of intelligent seeing machines. The project is a collaboration between artists and biomedical researchers that brings science, technology and the arts together to create an opportunity for public dialogue around an invention that will soon permeate the designed world.

Keywords: computer vision, machine vision, curiosity cabinets, new media art installation, visual culture, image databases, neural networks, AI, seeing, looking.

Introduction
The installation 20/X explores visuality, representation and epistemology in the age of intelligent seeing algorithms embedded in everyday and specialized design objects. Recently, artists and researchers have called attention to computer or machine vision as a site for understanding emergent culture-technology relations. For example, designer and filmmaker Timo Arnall observes that, while from his perspective the technology is in its early stages, “machine vision is becoming a design material alongside metals, plastics and immaterials. It’s something we need to develop understandings and approaches to, as we begin to design, build and shape the senses of our new artificial companions” [1].
While computer vision may be becoming more ubiquitous, it is not always becoming more conspicuous. As experimental geographer and artist Trevor Paglen has insightfully pointed out, computer vision images are largely created “by-machines-for-machines,” and great numbers of them are never seen by humans [2]. The invisibility of these images may diminish our ability to recognize their social and cultural significance [3]. In order to perceive, critique and shape the impact of machines that see with abilities beyond those of human vision, humans will need to learn to see like machines: to understand their abstractions and their categorizations of things in the world. What might be at stake is suggested by the work of art historian Horst Bredekamp, who has argued that images do not merely reproduce a prior reality but actively create our reality [4]. In the context of images created by computer vision systems, Bredekamp’s work suggests that algorithms looking at the world are not passive observers of their environment but increasingly active shapers of the reality we do and will experience: one in which, for example, cars drive themselves, houses recognize occupants and smartphones become visually aware. At this moment, computer vision merits attention from artists and humanists as both a technological and a cultural invention, a way to promote interdisciplinary knowledge and to create a space for public engagement before it becomes just one more design material.

In Fall 2014, we had the opportunity to work with synthetic vision researcher Dr. Eugenio Culurciello and doctoral student Alfredo Canziani in Purdue’s School of Biomedical Engineering. Our collaboration began with an extended tour of Dr. Culurciello’s lab, in which we were able to witness seeing algorithms at work and in development.
In this one lab, machines use vision algorithms to make sense of the world in a number of ways: counting cars, tracking motion, analyzing traffic patterns and recognizing human faces [5]. While we may have understood the intended purposes for looking, comprehending how an algorithm sees, and what exactly it sees in the process of arriving at a classification, is more difficult, particularly when the algorithm is able to learn and evolve. Through our conversations with Dr. Culurciello and Canziani, we became interested in three questions: How exactly does an AI vision system see and arrive at its final classification of objects? How might image sets used for training instill unintended social and cultural biases? And what might anomalous (visually interesting but hard to classify) objects tell...