
Chapter 3
Gaming Trust
Russell Hardin

A long-standing and substantial body of work addresses problems of cooperation under several labels, including collective action, prisoner's dilemma, and social dilemma. Much of this work has been experimental. The forms of the games in various experiments vary enormously, but most of them are prisoner's dilemmas involving two or more persons. The literature focuses on isolated interactions as well as on social contexts in which cooperative (or uncooperative) play evolves over many interactions. Many of the researchers conducting experimental work have recently shifted their focus from explaining cooperation to modeling and measuring trust (also see chapter 8, this volume). Although it would be wrong to say that the presence of cooperation implies the presence of trust, it is commonly assumed in much of this work that successful cooperation indicates some degree of trust among the players. Recent efforts to introduce extra elements into the games may serve to test not merely for cooperation but also for trust.

Such work befits the widely held view that trust and distrust are essentially rational. For example, James Coleman (1990, chapter 5) bases his account of trust on complex rational expectations. Two central elements are applied in a rational-choice account of trust: incentives for the trusted to fulfill the trust and knowledge to justify the truster's trust. The second element is the truster's knowledge of the trusted's incentives or reasons to be trustworthy. It is, of course, the knowledge of the potential truster, not that of the theorist or social scientist who observes or analyzes the trust, that is at issue. Because my supposed knowledge of you and your motivations can be mistaken, and because often your incentives might not lead you to cooperate with me (you might have competing interests that trump your trust), I typically run some risk of losing if I act cooperatively toward you.
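The risk the passage describes can be made concrete with the standard two-person prisoner's dilemma. The following sketch uses illustrative payoff numbers of my own; the chapter discusses the game abstractly and specifies no payoffs:

```python
# Illustrative prisoner's dilemma payoffs as (row player, column player).
# The specific numbers are an assumption for illustration only.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),  # the cooperator risks the "sucker" payoff
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_reply(opponent_move):
    """Return the row player's payoff-maximizing move against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

# Defection is a best reply to either move, yet mutual cooperation
# pays both players more than mutual defection -- the dilemma.
assert best_reply("cooperate") == "defect"
assert best_reply("defect") == "defect"
assert PAYOFFS[("cooperate", "cooperate")][0] > PAYOFFS[("defect", "defect")][0]
```

The structure shows why acting cooperatively toward you exposes me to loss: if my knowledge of your incentives is mistaken and you defect, I receive the worst payoff in the matrix.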
For present purposes, I assume that knowledge problems are subsumed under such risk assessments and do not analyze them separately, although they play a significant role in the discussions throughout and especially in the discussion of thick relationships in which, by implication, knowledge of potential partners in trust is rich and probably manifold.

There are, of course, many uses of a term as attractive and seemingly good as trust. Some of these uses make it extrarational in some way. For example, in some accounts, trust is held to be founded in emotions or in moral commitments or dispositions (Hardin 2002, chapter 3; evolutionary accounts of such dispositions are discussed in chapters 4 and 5 of this volume). Many of these accounts seem likely to fit some instances of trust or—perhaps more often—of trustworthiness. Little or no experimental work on trust addresses such accounts of trust, although some of the survey and interview work on trust arguably does.

Apart from directly asking people whether they trust, as in surveys, almost all efforts to measure trust involve game experiments in some way. In such experiments the rationality of trusting or distrusting is sometimes de facto assumed. In standard game theory, the players are commonly assumed to be rational, although in experimental games actual players often make quite diverse choices even when they face identical incentives. Therefore, one might suppose either that the subjects make errors or that rationality is not determinately defined in contexts of interactive choice. I think the latter conclusion is correct, and indeed there is no acceptable determinate theory that stipulates a best choice or a best set of choices of strategy in games in general (see further, Hardin forthcoming). Rather, there are many ostensible theories. In virtually all theories and accounts of trust, there is an element of expectations.
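One widely used game experiment for measuring trust is the investment or "trust" game (Berg, Dickhaut, and McCabe 1995): a truster sends some amount, the experimenter multiplies it, and the trusted decides how much to return. The sketch below is my own illustration with assumed parameters (endowment, multiplier); the chapter does not specify a particular design:

```python
def trust_game(sent, returned_fraction, endowment=10, multiplier=3):
    """Payoffs in a one-shot investment ('trust') game.

    The truster sends `sent` out of an endowment; the sent amount is
    multiplied, and the trusted returns a fraction of the multiplied pot.
    Default parameter values are illustrative assumptions.
    """
    assert 0 <= sent <= endowment and 0 <= returned_fraction <= 1
    pot = sent * multiplier
    returned = pot * returned_fraction
    truster_payoff = endowment - sent + returned
    trusted_payoff = pot - returned
    return truster_payoff, trusted_payoff

# If the trusted returns nothing, the truster loses everything sent:
assert trust_game(10, 0.0) == (0, 30)
# If half the multiplied pot comes back, both gain relative to sending nothing:
assert trust_game(10, 0.5) == (15, 15)
```

The amount sent is commonly read as a behavioral measure of trust, and the fraction returned as a measure of trustworthiness; as the passage notes, this reading de facto assumes that trusting is a rational response to the other's incentives.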
Indeed, some accounts seemingly reduce trust to nothing more than expectations, as in such claims as “I trust that it will rain today,” although the “it” that I trust has none of the features of a person whom I might trust (see, for example, Barber 1983; Gambetta 1988, 217–18; Dasgupta 1988). Typically, accounts that go further and assume the rationality of trusting are actually accounts of the trustworthiness of others insofar as their trustworthiness is grounded in incentives of some kind that depend on the truster. Hence we may say that rational accounts typically suppose that the truster assumes that the trusted will most likely prove to be trustworthy because it will be in the trusted’s interest to act cooperatively with respect to the...
