The Techno-Human Condition
The catalyst for this book is transhumanism, the belief that the future of human evolution lies through technology. To the transhumanist, this means that we should foster the development of human techno-capacities, including cognitive enhancements and life-extension technologies, up to and including uploading human consciousness into a computer. The Techno-Human Condition quickly veers away from its transhumanist origins, however, to develop a general schematic for technological evolution, which the authors argue has been bound up with human evolution since the beginning. This entanglement they call the techno-human condition.
At the heart of the analysis is a multi-level scheme that parses technological developments into three categories. Level I is the “shop floor” (p. 58), where goals are explicitly understood and tools are developed specifically to achieve them: for example, a vaccine to prevent smallpox. But Level I technologies are embedded in Level II complexities, since networks are necessary to support, sustain, and implement them: for example, delivery systems for vaccines may include political negotiations, campaigns to win community support, education about benefits and risks, and so forth. Whereas the tools inherited from the Enlightenment work well at Level I—cause-and-effect reasoning, the scientific method, etc.—at Level II they work less well, because the relations between causes and effects are more complex and may involve a large number of unintended and unforeseen consequences.
At Level III, which the authors call an “Earth system” (p. 63), networks of networks emerge, bringing about a quantum leap in complexity, with the result that systems become unpredictable, making Enlightenment tools of little help. Level III complexities cannot adequately be understood as problems to be solved; rather, they must be recognized as conditions to be managed (hence the techno-human condition). They partake of “wicked complexity,” “when a system’s makeup and dynamics are dominated by differing human values” and deep uncertainties (p. 109). The overall problem is that arguments (over transhumanism, for example) frequently make category mistakes that lump together Level II and III complexities with assumptions that should be limited to Level I, for instance that individual agency and autonomy are appropriate criteria for analyzing and evaluating technological developments. Paraphrasing Karl Marx, they quip that cause-and-effect reasoning “is the opiate of the rational elite” (p. 71).
An especially useful section explicates strategies appropriate at Levels II and III. Since results cannot be predicted, the authors underscore the importance of including diverse perspectives (any one of which will necessarily be partial), developing alternatives and options (allowing for more flexibility when unexpected developments occur), and intervening by small steps that can be quickly adjusted as developments require (which the authors call, not altogether felicitously, “muddling” [p. 183]). Going along with these strategies is a multi-level ethics, ranging from shop-floor professional ethics (such as not fudging the data), to the second-level ethic of being attentive to the institutional contexts in which technological projects are embedded, to the “macroethics” (p. 181) of Level III, which are summarized in the injunction, “That which you believe most deeply, you must distrust most strongly” (p. 187).
The book has the vices of its virtues. While the argument is clear and accessible, it downplays or ignores the large body of research on complexity theory, complex adaptive systems, and emergence, along with the insights this research yields into how to construct and study such systems using computer simulations, evolutionary programs, and genetic algorithms. Moreover, the rhetoric of “levels” works against the realization that these categories are not strung out along a linear spectrum but are recursively embedded within one another. Surely the authors realize this, but they keep being betrayed by their terminology: for example, when they write about “fuzzy boundaries,” the term makes it seem that the problem is the inability precisely to delineate boundaries rather than that feedback loops interconnect all the levels simultaneously. Nevertheless, the book makes a compelling case that debates over transhumanism must be recontextualized beyond assumptions of individual autonomy and agency...