
CHAPTER 4. The Time of Technical Systems

By the end of the previous chapter, we wanted to reunite the difference between two orders of magnitude by considering them in a state of constant materialization—passing from relations to milieux and thence to systems. We now want to ask how the gap between atomic composition and phenomenal appearance can be further bridged, as opposed to their continuing to be understood as two separable realities. This also gives us the opportunity to develop the concept of relations further. By formulating both discursive and existential relations, we want to understand the dynamic within technological developments. I must make it clear that I am not criticizing a parallel reading of Heidegger and Uexküll in terms of embodiment as wrong; rather, I want to point to another direction of inquiry, one more concerned with technological progress than with the experience of embodiment or embodied reason. The development of the theory of embodiment disturbed the dominant research paradigms in AI in the 1970s. At that time, the philosopher Hubert Dreyfus, in his book What Computers Can't Do (1972), launched a fierce attack on the neglect of embodiment in AI research, an attack he later renewed in a modified version in What Computers Still Can't Do (1992). At the heart of his critique was that such research took a Cartesian approach toward perception and action; in contrast, Dreyfus proposed what is now widely known as Heideggerian AI, which takes embodiment as the foundation of action. Dreyfus's critique has influenced a generation of AI researchers, including Terry Winograd, Phil Agre, and others. To understand Dreyfus's critique and its relevance to our investigation, I briefly introduce the frame problem.
In the early days of AI, Marvin Minsky, along with others such as Herbert Simon and John McCarthy, envisaged that if we can represent the world in logical statements, all of these statements should be inferable, and the computer should be able to attain the level of human intelligence, at least in terms of common sense if not higher-level thought.1 But Dreyfus pointed out that even having millions of representations of objects and things in the world would not be enough to solve the commonsense knowledge problem. There are two reasons for this. First, it is difficult to imagine that we can include every possible context, and second, the computer is not able to construct contexts from millions of representations.2 The second reason is the more important one, because Minsky proposed a microworld view in which it would be possible to attain intelligence by limiting the domain to a set of questions. This assumption is problematic, because it misses what Heidegger calls the Vorstruktur, or hermeneutics of understanding: the microworld presupposes being-in-the-world as a whole. The knowledge base will keep on increasing without fulfilling the essence of intelligence. Dreyfus thus claimed that the context problem would regress endlessly, which means that this Cartesian approach to AI is totally wrong: To pick out two dots in a picture as eyes one must have already recognized the context as a face. To recognize this context as a face one must have distinguished its relevant features such as shape and hair from the shadows and highlights, and these, in turn, can be picked out as relevant only in a broader context, for example, a domestic situation in which the program can expect to find faces. This context too will have to be recognized by its relevant features, as social rather than, say, meteorological, so that the program selects as significant the people rather than the clouds.
But if each context can be recognized only in terms of features selected as relevant and interpreted in terms of a broader context, the AI worker is faced with a regress of contexts.3 Dreyfus's critique of the philosophical foundation of AI is very clear and convincing. By the same token, we also have reason to see the semantic web, or the ontology-driven approach, as being quite similar to the good old-fashioned AI (GOFAI) dating back to the 1950s. As the computer scientist Yorick Wilks pointed out, some have taken the initial presentation (2001) of the SW by Berners-Lee, Hendler, and Lassila to be a restatement of the GOFAI agenda in new and fashionable WWW terms. . . . This kind of planning behaviour was at the heart of GOFAI, and there has been a direct transition (quite outside the discussion of the SW...
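The regress Dreyfus describes can be caricatured in a few lines of code. The sketch below is a hypothetical illustration, not any historical AI system: it encodes the rule that each feature is recognizable only within an enclosing context, and then traces the chain of contexts that recognizing "eyes" presupposes. The particular feature names and the stopping point are assumptions made for the example.

```python
# Toy GOFAI-style context rules: to recognize a feature, the system must
# already have recognized its enclosing context. All entries here are
# hypothetical illustrations of Dreyfus's example, not a real knowledge base.
REQUIRED_CONTEXT = {
    "eyes": "face",                 # two dots count as eyes only within a face
    "face": "domestic situation",   # a face is expected within a domestic scene
    "domestic situation": "social situation",
    "social situation": "being-in-the-world",  # ...and so on, without end
}

def contexts_needed(feature: str) -> list[str]:
    """Return the chain of contexts presupposed by recognizing `feature`."""
    chain = []
    while feature in REQUIRED_CONTEXT:
        feature = REQUIRED_CONTEXT[feature]
        chain.append(feature)
    return chain

print(contexts_needed("eyes"))
# ['face', 'domestic situation', 'social situation', 'being-in-the-world']
```

The loop terminates only because the table is finite; Dreyfus's point is that any such table either hard-codes an arbitrary stopping point or regresses indefinitely, since each context is itself recognized only inside a broader one.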

