Abstract

Recent discussions of rational deliberation in science present us with two extremes: unbounded optimism and sober pessimism. Helen Longino (1990) sees rational deliberation as the foundation of scientific objectivity. Miriam Solomon (2001) thinks it is overrated. Indeed, she has recently argued (2006) that group deliberation is detrimental to empirical success because it often involves groupthink and the suppression of dissent. But we need not embrace either extreme. To determine the value of rational deliberation we need to look more closely at the practice and practitioners of science. I offer a closer look here by exploring the joint agency of small research teams. Although there are factors that contribute to the suppression of dissent in group contexts, a closer look at the literature on group dynamics suggests that there are ways to mitigate the effects of groupthink. Thus, there is reason to be cautiously optimistic about the value of rational deliberation within certain scientific contexts.

Rational deliberation either by individuals or groups is widely assumed to be a valuable tool for arriving at good decisions and, in the context of science, a valuable tool for empirical success. But a recent paper by Miriam Solomon (2006) challenges this widely held assumption. She argues that group deliberation in scientific contexts is often detrimental to the success of science. Relying on recent work by James Surowiecki (2004), Solomon suggests that aggregation of individual decisions, rather than deliberation to reach a consensus, can produce better decisions. Aggregation of individual decisions would allow for the input of every participant and hence would include dissenting opinions.

This line of reasoning is in keeping with Solomon's social empiricism (2001) which emphasizes that the rationality of the individual scientist is not what matters to empirical success and which makes a central virtue of dissent. As Solomon puts it: "What matters is not how individual scientists reason—it's not the thought that counts—but what the aggregate community of scientists does" (2005, 9). Empirical success, according to Solomon, comes from evaluation and change at the "systems" level, and change may come in the form of dissent rather than consensus. Aggregation of individual decisions is just another way in which a systems-level approach can be implemented to make room for dissent.

As Solomon herself admits, this approach to rational deliberation, and to group deliberation in particular, appears to be rather pessimistic about the value of rational deliberation.1 Contrast her view with Helen Longino's critical contextual empiricism (1990). According to Longino, rational deliberation between individuals and within groups is the foundation of scientific objectivity. Although individual scientists are biased in a variety of ways, these biases can be overcome by promoting critical dialogue between scientists. As long as the scientific community adheres to the norms of critical contextual empiricism, scientific objectivity can be achieved. Compared to Solomon, Longino appears to be positively optimistic about the role of rational deliberation in science.

There is room for a middle ground here between the naïve optimism of Longino and the bleak pessimism of Solomon. If we are going to capture the conditions for objectivity in science, as Longino attempts to do (and as I think we ought), we need to look much more closely at the way in which rational deliberation takes place in science and, in the case of group deliberation, the types of groups engaged in it. Although research groups are ubiquitous across the sciences, they vary widely in their size and organizational structure. The norms of scientific objectivity will have to be more fine-grained to reflect the variety in scientific practice and practitioners. And if we agree with Solomon, as I do, that dissent is an epistemic virtue, we need to look more closely at the context of deliberation in order to uncover the structures that inhibit and promote dissent.

I provide a closer look in this paper. My focus is on small research groups and the deliberation that takes place within them. Although Solomon is correct—there are ineliminable factors that contribute to the suppression of dissent in group contexts—the literature on group dynamics suggests that there are ways to mitigate the effects of groupthink. Thus there is reason to be more optimistic about the role of group deliberation in science.

In section I, I begin with a discussion of groupthink and the nature of social cohesion. As Janis (1972) pointed out, one of the major causes of groupthink and the resulting failure of group deliberation is social cohesion. What is social cohesion? Unfortunately, the social psychological literature is less than helpful in answering this question. I find Janis's definition of social cohesion and subsequent attempts to clarify the term wanting. I suggest that social cohesion is, in part, a function of the normative and intentional structure of joint action. This normative and intentional structure has been analyzed by a variety of action theorists over the past decade, but I will focus my attention on Michael Bratman's account of shared cooperative activity. This account provides some interesting resources with which to consider the nature of scientific teamwork. In section II, I explore the possibilities for dissent within the context of scientific teams. Viewing social cohesion as a function of the intentional and normative structure of teamwork provides us with a better understanding of why small groups such as scientific research teams are subject to the effects of groupthink. Further, I shall argue that, although there is some room for dissent in these contexts, the amount of dissent and the target of dissent are constrained by the norms governing joint agency. I will explore analogies between the interpersonal and intrapersonal in order to make this case. In section III, I discuss some of Janis's suggestions for avoiding groupthink and whether or not such suggestions could be implemented within a scientific research team. In the end, adopting less cohesive forms of joint agency may be a better way of facilitating group deliberation and making room for dissent.

I. Groupthink, Social Cohesion, and Shared Cooperative Activity

Janis's theory of groupthink has produced a great deal of interest and research on group decision-making and dynamics since its publication in 1972. Groups affected by groupthink ignore alternatives and tend to take irrational actions that dehumanize other groups. It is likely to occur, according to Janis, when groups are "highly cohesive" and when they are under considerable pressure to make a decision. Symptoms of groupthink include:

  1. The illusion of invulnerability: this creates excessive optimism that encourages taking extreme risks.

  2. Collective rationalization: members discount warnings and do not reconsider their assumptions.

  3. Belief in the inherent morality of the group: members believe in the rightness of their cause and therefore ignore the ethical or moral consequences of their decisions.

  4. Stereotyped views of out-groups: negative views of the 'enemy' make effective responses to conflict seem unnecessary.

  5. Direct pressure on dissenters: members are under pressure not to express arguments against any of the group's views.

  6. Self-censorship: doubts and deviations from the perceived group consensus are not expressed.

  7. Illusion of unanimity: the majority view and judgments are assumed to be unanimous.

  8. Self-appointed mindguards: members protect the group and the leader from information that is problematic or contradictory to their view.

Although there are a variety of factors that might cause a group to exhibit these symptoms, the main factor to which Janis appeals in his explanation of groupthink is social cohesion. The more cohesive a group is, the more likely it is to suffer from groupthink. The term, however, has remained rather obscure and for this reason attempts to measure and study group phenomena such as groupthink have been hindered (Mudrack, 1989).2 Social cohesion has been described as the "social glue" that binds groups together—the "stick-togetherness." Janis defines the concept in the following way: "members' positive valuation of the group and their motivation to continue to belong to it" (1972, 4). Those following Janis have attempted to flesh out the notion of "positive valuation" and "motivation" in terms of the notion of "attraction" and equated cohesion with other constructs such as "group spirit," "bonds of interpersonal attraction," "affective bonds," "sense of belonging," and "sense of we-ness" (Mudrack, 1989; Evans and Dion, 1991).

What is striking about these definitions of cohesion is their phenomenological character. The focus is on the "feelings" of group members. It isn't difficult to see why this might pose problems for attempts to study and measure social cohesion and its effects. How does one measure a "sense of we-ness"? It also seems to put the cart before the horse. Although there are no doubt motivational and affective elements that contribute to a group's ability to engage in joint action and remain stable over time (and perhaps these elements contribute to groupthink and so on), these feelings arise in response to certain features present either in the group or in the environment, and they develop over time as the group engages in action and deliberation. Far from providing a definition (or cause) of social cohesion, an individual's feelings of belongingness would seem to be the result of social cohesion.

I do not intend to offer a definition of social cohesion that would solve all of the experimental problems posed by current accounts. But I think we can get beyond the vague notion of "sticking together" by exploring the accounts of joint agency on offer from action theorists. Social cohesion does not arise spontaneously among a community of people. It arises in groups that act and think together. I want to suggest that social cohesion is, in part, a function of the intentional and normative structure of joint agency. Once this structure is understood we can see why individuals within certain types of groups would exhibit a sense of we-ness or belongingness and why the symptoms of groupthink might exhibit themselves in these sorts of groups. In addition to all the "feelings" one might have for one's fellow group members that may suppress dissent, there are constitutive features of group agency that will produce pressures to conform.

Let's turn, then, to consider one account of the intentional and normative structure of joint agency. Michael Bratman (1999) has provided an analysis of what he calls shared cooperative activity. Consider the following example: Sue and Kate are making a quilt together. This is something they do: it is an intentional action. And it is something that they do: that is, they are doing it together. Why can't we explain this intentional action in terms of Sue's intention to make a quilt and Kate's intention to make a quilt, perhaps adding the condition of common knowledge? Even if Sue and Kate know of each other's intentions, this does not seem to be enough to guarantee that they are making the quilt together. After all, these intentional states could be in place in a case in which Sue and Kate are working on completely different quilts. In what sense would they be making the quilt together? The need to distinguish joint actions from actions performed merely in parallel or in tandem suggests that what is necessary for joint action are individual intentions that are semantically shared in the sense that they share a common content. Sue and Kate each have an intention with the following content: I intend that we make a quilt.

Further, fulfilling this shared intention will require that they each perform various actions, and these actions will be informed by the shared intention to make the quilt together. Kate's sub-plan to make the quilt blocks in calico and Sue's sub-plan to use a butterfly stitch must mesh: neither can be such that carrying it out would prevent the other from being carried out. If Kate plans to make a twin-size quilt and Sue plans to make a queen-size quilt, these sub-plans do not mesh. Each agent in a joint action will attempt to be responsive to the intentions and actions of the other, knowing that the other is attempting to be similarly responsive.
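
To make the notion of meshing a bit more concrete, here is a minimal, purely illustrative sketch of my own in Python (it is not part of Bratman's account, and nothing in the argument depends on it). It treats each participant's partial sub-plan as a set of choices about features of the joint project and counts two sub-plans as meshing when they assign no conflicting choices to any feature they both specify; the feature names and the representation are invented for the example.

    # Toy model of "meshing sub-plans" (illustrative only, not Bratman's formalism).
    # A partial sub-plan is a dictionary mapping features of the joint project to
    # the choices an agent has settled on. Two sub-plans mesh when they make no
    # conflicting choices about any feature they both specify.

    def subplans_mesh(plan_a: dict, plan_b: dict) -> bool:
        """Return True if the two partial plans make no conflicting choices."""
        shared_features = plan_a.keys() & plan_b.keys()
        return all(plan_a[f] == plan_b[f] for f in shared_features)

    # Kate plans a twin-size quilt with calico blocks; Sue plans a queen-size quilt.
    kate = {"size": "twin", "block fabric": "calico"}
    sue = {"size": "queen", "stitch": "butterfly"}

    print(subplans_mesh(kate, sue))  # False: the size choices conflict
    print(subplans_mesh({"block fabric": "calico"}, {"stitch": "butterfly"}))  # True: no overlap, hence no conflict

The point of the sketch is simply to display the consistency constraint at work: the twin-size and queen-size choices concern the same feature of the joint project, and that is what prevents the sub-plans from meshing.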

In addition to the shared intention that we J and the mutual responsiveness of individual participants, there must be, according to Bratman, a commitment to the joint activity on the part of the participants. As Bratman puts it, "their mutual responsiveness is in the pursuit of this commitment" (1999, 95). They are mutually responsive to the intentions of other participants because they aim at fulfilling their commitment to the joint action. If Sue and Kate are not committed to making the quilt together, then the joint action will cease the moment there is any conflict between them. The commitment to joint action maintains a form of stability of agency.

A commitment to the joint action presupposes that the participants are also willing to provide mutual support. If Sue and Kate are committed to making a quilt together, then when either one needs help in fulfilling her part of the joint action, the other must be willing to provide support if possible. It would be an odd sort of collaboration, indeed no collaboration at all, if Kate refused to give Sue the pattern that she had in her possession. If one is committed to doing something together, one is committed to supporting (to the best of one's ability) the actions that comprise the joint action.3 These commitments (commitment to the joint activity and commitment to mutual support) are known to each of the participants under conditions of common knowledge.

In order to understand the dynamic nature of joint action and its stability over time, Bratman has also introduced the notion of shared values (2004, 2006). At the individual level intentions and plans involve norms of consistency, coherency, and stability. Other things being equal, a person's intentions and plans are to be, taken together, consistent with each other and with her beliefs about the world. They are partial in the sense that they will be filled out as the agent progresses, and their filling out will be governed by the constraint that they cohere with other intentions and plans the agent has. Finally, although intentions and plans are not irrevocable, they must maintain a certain amount of stability. If a person continually rethought her plans and intentions, or overturned them whenever there was any conflict or difficulty, her agency would be undermined. The stability of intentions and plans allows them to play a structuring role in our practical rationality. Prior plans and intentions structure our reasoning about means, and they do this in ways that are responsive to the requirements of consistency and coherence.

Here is where values, or what Bratman calls "self-governing policies," enter. Self-governing policies tell us which considerations are given weight and how much weight in our practical deliberations.4 A person might, for example, have a policy of discounting or bracketing desires for money in her practical reasoning; another person might have a policy of sexual abstinence, so she does not give her sexual desires weight in her deliberations. These "valuings" aid stability in agency and allow conflicts between intentions and sub-plans to be resolved.

Now consider again the shared cooperative activity of making a quilt together. There is, according to Bratman, an intentional and normative structure that organizes and informs this joint action and the individual actions of which it is comprised. This involves a shared intention with a common content, mutual responsiveness to this shared intention, a semantic interlocking of intentions (that is, the intentions are about each other's role in thought and action), a commitment to mutual support, and public accessibility of this structure. But like individual intention, shared intention must be responsive to the norms of coherency, consistency, and stability. As Bratman describes it:

My intention that we J by way of your analogous intention and meshing sub-plans imposes rational pressure on me, as time goes by, to fill in my sub-plans in ways that fit with yours as you fill in your sub-plans; and vice versa. This pressure derives from the basic rational pressure on me for means-end coherent and consistent plans, given the ways in which your intentions enter into the content of my intention

(2006, 3).

As with individual intentions and plans, shared values need to be in place to maintain stability in the face of conflict. When Sue's sub-plans conflict with Kate's, they will appeal to prior shared values in order to arbitrate and determine which course of action deserves greater weight. They might, for instance, appeal to their shared value of giving weight to certain forms of quilting, say, ones of historical significance rather than others, and this shared value will help them arbitrate disputes about stitches, colors, and so on.5 Just as values will play a role in an individual's deliberations about what she ought to do, so too a group's shared values will play a role in group deliberation.

Shared cooperative activity, then, involves a complex of individual intentional states that are interdependent, along with commitments, mutual responsiveness, and shared values. The sorts of groups identified by social psychologists as exhibiting a high level of social cohesion are precisely those groups that engage in shared cooperative activity—committees, task groups, sales teams, athletic teams, and scientific research teams. Social cohesion is, in part, a function of the ways in which intentions, values, and commitments are shared. To the extent that a group maintains stability of agency by forming shared values and being mutually responsive to the intentions and commitments of the participants, it will be more socially cohesive than those groups whose agency is less interdependent.

We can see now why participants in a highly cohesive group might experience feelings of we-ness. Shared values, intentions, and commitments will be experienced as "ours." We can also see why social cohesion is beneficial. The stability of shared cooperative activity makes it possible for groups to progress and achieve long-term goals without having to rethink every action and sub-plan. It provides for a unified agency rather than a group whose members are constantly at odds with one another. The benefits of high social cohesion are well documented in the research on group dynamics: highly cohesive groups are more productive than those groups that lack cohesiveness, and they maintain their productivity over time (Mudrack, 1989; Mullen and Cooper, 1994). High social cohesion also provides psychological benefits to group members. But it is this unified agency that often contributes to the suppression of dissent within the context of group deliberation and hence to poor decision making.

In the next section, I explore the impact of social cohesion in the context of small scientific research teams. Let me be clear that the intentional and normative structure of shared cooperative activity is just one of the factors that contributes to the suppression of dissent. There are a variety of social factors that may also contribute, but about which I will have little to say here.

II. Scientific Teamwork as a Form of Shared Cooperative Activity: Is There Room for Dissent?

Teamwork in science is ubiquitous, and it varies enormously in terms of the number of participants, the social structure (e.g., hierarchical vs. egalitarian), and the sorts of work the teams engage in (group deliberation, experimentation, data analysis). Bratman's framework allows us, however, to make some general distinctions among the types of group agency found in the sciences and their levels of social cohesion. Shared cooperative activity is exhibited in small research teams such as those found in microbiology, cognitive psychology, and neuroscience, among others. These groups engage not just in joint deliberation but in joint action. They coordinate their intentions as well as their actions. They should be, and can rather well be, distinguished using Bratman's theory from less cohesive groups such as the large collaboratives6 found in physics. These collaborative networks are made up of several different teams and often lack the responsiveness in intention and action that is found in smaller groups. We might view these collaboratives as engaging in what Bratman has called pre-packaged cooperation (1999, 106). They undertake planning that is a form of shared cooperative activity, but the work is done by individuals or small research groups within the collaborative. We can also contrast both the structure of collaboratives and that of small research teams with more competition-based structures such as those found in medicine. The medical community might be conceived as engaging in mere cooperation rather than anything like shared cooperative activity. Even in these cases, however, there is a form of joint intentional agency. Even competitors in a game of chess share the intention to play a game of chess together and must be mutually responsive to the intentions and sub-plans of their competitor. But such competitors lack a commitment to mutual support (Bratman, 1999, 107).

How does viewing scientific research teams as engaged in shared cooperative activity help us to assess the potential for groupthink and the role of dissent in these collaborative contexts?

Given the normative and intentional structure of teamwork, it is clear why there will be pressures to suppress dissent within the context of a team's deliberation. In addition to all of the biases or decision vectors at play in the deliberations of a team, there are the additional constraints brought about by the very nature of the agency in which its members are engaged. Raising alternative viewpoints or challenging the team's findings will cause the group to rethink its shared intentions, sub-plans, and shared values. It may cause participants to question the dissenter's commitment to mutual support and to the joint activity or project. Such a rethinking has the potential to produce a deterioration of the group's ability to function as a unit. Although such a rethinking is often the very thing that is needed, the normative pressure for stability, coherency, and consistency will often subvert such discussions. So viewing social cohesion in the way I suggest explains why scientific teams might be subject to the symptoms of groupthink; but need they be? Is there something about this form of agency that precludes dissent?

I suggest that dissent can be tolerated within a research team only to the extent to which it can be tolerated within an individual. Bratman's account of shared cooperative activity relies heavily on the analogies between individual rationality and group rationality. Just as individual agency is constrained by the norms of consistency, coherency, and stability, so too shared cooperative activity (and its manifestation as teamwork in science) is constrained by these norms. It will be helpful, then, to pursue the intrapersonal and interpersonal analogy with respect to the role these norms play in scientific teamwork.

First, I propose a clarification of the concept of dissent. We can identify two forms. The first form is often associated with disagreement. Call this weak dissent. It is this form of dissent that Solomon (2001), Longino (1990), and others seem to focus on in their recent discussions of deliberation in science. The second form of dissent is akin to political dissent in the sense that it involves actively pursuing an alternative theory and attempting to undermine the established theory. Copernicus wasn't simply disagreeing with Ptolemy; he was actively trying to prove that the Ptolemaic system was false. Call this strong dissent.

Consider the analogue of weak dissent in an individual's personal deliberations. There seems to be nothing within the notion of individual agency that prohibits disagreement from playing a role in the individual's own deliberations. Indeed, rational agency seems to require that individuals engage in critical deliberation with themselves in which they consider alternative points of view. In effect, rational agency requires that individuals offer themselves dissenting opinions in the course of their private deliberations. Just as weak dissent can and should play a role in the deliberations of an individual, there is nothing about the nature of teamwork that would prohibit weak dissent from being implemented in the course of the deliberations of a team. The values of the individual make it possible to consider alternative courses of action and arbitrate between them. Likewise, a team's shared values will provide the backdrop for productive group deliberation. Disagreement will provide a way of assessing whether sub-plans mesh, of rethinking shared values, and of fine-tuning the team's plans.

But the analogy between team agency and individual agency also suggests that there is a limit to the amount of weak dissent that can be tolerated within a team. Consider an individual who subjects every course of action to critical scrutiny or who begins to disagree with herself about longstanding commitments she has had. An individual who adopted a self-governing policy of giving weight to a dissenting opinion within the course of her own personal deliberations would run the risk of undermining her own agency. Such a policy would require a rethinking of prior plans, intentions, and values. Such a rethinking, if it were to occur all the time, would seriously undermine the stability of her agency, not to mention its efficiency and productivity. If every deliberative context required the representation of dissent, individual agency would be undermined. Such an agent would exhibit a "fractured self" and would find it difficult to pursue any course of action.7 Likewise, teams cannot implement weak dissent in every deliberative context; to do so would undermine the ability of the team to work together.

What of strong dissent? It is tempting to think that strong dissent cannot be tolerated at all in the context of teamwork. After all, a team is a group working together, and strong dissent would involve working against one another. But this would be too hasty, for the distinction between strong dissent and weak dissent needs to be qualified by noting that what one dissents from (the target of dissent) is as important as the strength of dissent.8 As we have seen, shared cooperative activity involves shared intentions or goals and shared values. When strong dissent challenges the "core" values on which cooperation and coordination of action takes place, then it may indeed undermine the ability of the group to act. If a participant on a research team suddenly rejects one of the core values of the group, responsiveness to counterevidence, for instance, then strong dissent cannot be tolerated. But strong dissent could be tolerated if its target is something more peripheral to the "core" values that sustain the agency of the group. Consider a group of particle physicists in which there is strong dissent regarding what to name a particular particle. We can imagine that certain group members are lobbying to gain support for their own suggested name and undermining the suggestions of other members. Surely this sort of strong dissent could be tolerated, as its target is epistemically unimportant. If, however, there were strong dissent within the group on how to design the detector they will use for experimentation, it would undermine the ability of the group to engage in its research. Given the importance of detector design in particle physics, strong dissent concerning this matter would undermine the ability of the group to work together.9 The sub-plans developed by individuals or by sub-groups within the group would not mesh.

So, although there is in principle room for dissent (both weak and strong) in the context of scientific teams, the amount of dissent and the target of dissent are limited by the nature of joint agency. Furthermore, given the normative and intentional structure of this agency we can see why research teams might suppress even a minimal amount of dissent. Is there a way of making scientific teams more open to dissent in the context of group deliberation without introducing forms of dissent that undermine group agency? In the next section, I explore Janis's own suggestions for avoiding groupthink and consider whether they could be implemented in the context of scientific research.

III. Making Room for Dissent

Janis's theory of groupthink was originally developed to understand bad decision-making in the context of governmental policy groups. Although there have been attempts to extend it to a variety of other groups (including business teams, committees, and boards), there is very little empirical work on the phenomenon in science and how it might be mitigated within the context of scientific research groups. Thus, what I have to say in this section is somewhat speculative. It points, however, to important areas in need of further empirical investigation.

Janis's solutions involve altering the structural and situational contexts of highly cohesive groups. As I mentioned, high social cohesion is often an indicator of the presence of groupthink, but this does not mean that all socially cohesive groups are necessarily subject to groupthink. There are features of the group structure and the context of group deliberation that contribute to poor decision-making in the group. Janis identifies the following structural features: insulation of the group, lack of decision-making procedures, and homogeneity of members' social and ideological background. With respect to context, Janis points out that time pressures often contribute to poor decision-making.

To mitigate the effects of these features while preserving the benefits of high social cohesion, Janis makes the following suggestions:

  1. The role of critical evaluator should be assigned to every member. Instead of representing their own narrow area of expertise, members would take responsibility for finding their own solution to the problem and then presenting it to the group (1972, 209).

  2. Introduce a "devil's advocate" into the group who would raise alternative points of view, criticisms, objections, and so on (1972, 214-215).

  3. Group leaders should routinely leave the deliberative context so that participants feel free to contribute to deliberations (1972, 218).

  4. Groups should break into sub-groups that work simultaneously on the same issue. Each sub-group can then draw on the expertise of trusted subordinates who are encouraged to give their advice freely. The subgroups then come back together and compare notes (1972, 213).

Can these suggestions be implemented within the context of scientific teamwork? Consider Janis's first suggestion that team members take on more cognitive responsibility in the sense of critically evaluating the whole project or plan rather than just offering information from their own expertise. Although this may be possible in teams where members are from similar disciplines and have similar expertise, our epistemic dependence on the expertise of others may prevent implementation of this suggestion. Team research is so vital in the sciences precisely because in many cases no one individual can be an expert in all the domains needed to research complex phenomena. Consider cancer research and treatment.10 An oncology team is made up of radiologists, chemotherapists, oncologists, and in some cases surgeons and physical therapists. Each contributes unique information to the care and treatment of patients and to their overall research into the causes and cures of cancer, information that may not be available to others in the group. The oncologist cannot be expected to become an expert in the field of chemotherapy, and although her experience in treating cancer will allow her to gain many insights into the field, she will not be able to assess the overall treatment of the patient. Sometimes epistemic dependence makes us unable to see the whole picture. Whether this suggestion can be implemented within a team, then, will depend in part on the domain of inquiry.

The suggestion that groups introduce a devil's advocate is tantamount to instituting a form of weak dissent. I have said there is nothing about the structure of team research that would prohibit the introduction of such a "gadfly," but it is not clear why the dissenting opinion of an outsider would be any less vulnerable to suppression than the dissenting opinions of group members. The team might institute a shared value in which they agree to let dissent (in the form of the devil's advocate) play a role in their group deliberations. This strategy would create a space for the dissenting opinions of the advocate within the structure of the team's agency; but, as I have argued above, specific limits would have to be set on its implementation. A devil's advocate might be beneficial during crucial deliberations, but a policy that implemented weak dissent in every deliberative context would undermine the agency of the group.

The suggestion that group leaders leave the deliberative context to allow subordinate members freer expression will work only in those teams in which there is a team leader or some hierarchical structure. In some scientific domains, researchers are on equal footing and there is no discernible leader. This suggestion seems well suited, for instance, to scientific teams composed of senior faculty and graduate students who may feel social pressures (in addition to the normative constraints involved in teamwork) to suppress their dissenting opinions.

The option of splitting the team into subgroups that work on problems simultaneously may be the most promising approach. This introduces a slightly less cohesive form of agency according to my analysis: a form of pre-packaged cooperation. The group may engage in shared cooperative activity while discussing the problem or research agenda, but then individual members might disperse into smaller groups to work on the problem separately and return later to engage in group deliberation with the original group. The subgroups might also engage in deliberation with other subgroups, forcing teams to consider the dissenting opinions of out-groups.

There is some empirical evidence to suggest that the subgroup approach will aid in fostering dissent and facilitating successful group deliberation. Consider the following example. During a space shuttle mission, ground support involves a variety of research, engineering, and support teams. These teams share an overall goal or value, perhaps the goal of a successful shuttle mission, and might be thought of as a large scientific group or community. However, each sub-team has distinct sub-goals, plans, responsibilities, resources, and authority, which lead them to approach these overall goals from different perspectives or shared values. Watts et al. (1996, 1997) studied the coordination across these functionally distinct teams during actual anomalies in shuttle operations. They described a pattern of distributed cognition that provided a way to cope with anomalies during the shuttle mission. Each sub-team developed its own assessment and response strategy and worked out the consequences of its plan for the mission. The assessment and strategy were developed in a meeting with team members and from within the team's perspective. These perspectives were then shared with other groups in a series of coordinative meetings. Preparing for a possible critique and actually confronting another group's perspective on the situation revealed inaccuracies, gaps, uncertainties, and conflicts. The process of sharing each sub-team's assessment stimulated consideration of other possibilities, constraints, and side effects.

This is just one example of how Janis's suggestion of fostering group deliberation via sub-groups might be effective in the context of science. Of course, the success of this approach depends on being able to avoid groupthink within the context of the subgroups. Groupthink doesn't just cause the suppression of dissent within a group. It also produces a situation in which the out-group is ignored and even demonized. It is not just that, within highly cohesive groups, individuals fail to listen to alternatives offered by in-group members. Individuals also fail to listen to members outside their group, and they often increase their cohesion by viewing out-groups in negative ways. The more stable and cohesive the group is in terms of its agency, the more likely it is to ignore out-groups and reinforce its own conclusions. Thus, fostering dissent between teams of researchers will work only if the teams are not already infected with groupthink.11

An institutional policy that promotes dissent between teams and within teams may be the answer. Such a suggestion is present in the work of Solomon (2001), Kitcher (1990), and others for whom the distribution of cognitive labor has been a recent focus. As Kitcher puts it:

The very factors that are frequently thought of as interfering with the rational pursuit of science—the thirst for fame and fortune, for example—might actually play a constructive role in our community epistemic projects, enabling us, as a group, to do far better than we would have done had we behaved like independent epistemically rational individuals. Or, to draw the moral a bit differently, social institutions within science might take advantage of our personal foibles to channel our efforts toward community goals rather than toward the epistemic ends that we might set for ourselves as individuals

(1990, 16).

The distribution of funding is an obvious mechanism for promoting dissent, as is the organization of conferences in which research teams are encouraged to present their different approaches to a common problem. Funding opportunities for research projects on theories that are not represented within the scientific culture might encourage pursuit of alternative lines of inquiry.

To the extent that Janis's suggestions can be implemented in the context of sciences that depend on teamwork, there will be ways to mitigate the effects of groupthink and salvage the benefits of group deliberation. Alternatively, group deliberation may function best among members working in larger groups or collaboratives, or in more loosely organized work environments. When participants are not continually required to be responsive to the intentions and actions of other participants, there may be more room for presenting alternative points of view. Lessening social cohesion will lessen the normative and intentional constraints identified by Bratman's theory and may allow joint deliberation to function more efficiently. Indeed, group deliberation among competitors rather than team members may be the most effective form. As Kitcher (1990) and others have suggested, a more competitive scientific community may, in some cases, actually produce better results. Although competition is still a form of joint intentional agency, as I have argued, it fosters a low level of cohesion that may help avoid the various detrimental effects noted by Janis. A move towards less cohesive groups in science will mean sacrificing the benefits of high social cohesion (productivity and efficiency, for instance).12 This may be a sacrifice we need to make, however, to ensure that group deliberation is a form of rational deliberation.

Conclusion

In her comments on Solomon's "Groupthink vs. The Wisdom of the Crowds" (2006), Alison Wylie writes:

Rather than reject deliberative processes in favor of aggregative techniques, the norms of epistemic rationality inspired by these processes should incorporate a detailed, empirically grounded understanding of the conditions under which group deliberation can work well, and the conditions under which it manifestly fails

(2006, 47).

Following this suggestion, I have attempted to show that group deliberation in the context of science need not be thrown out with the bathwater. Although the high level of social cohesion in scientific research teams will inhibit dissent to some degree, Janis's suggestions for avoiding groupthink may prove useful in allowing for more productive group deliberation within teams and among teams. Further, I've suggested that less cohesive forms of joint agency will eliminate the inherent normative constraints found in team research, and this may provide a more productive environment for deliberation as well.

The notion of scientific communities or groups has played a large role in discussions of scientific rationality, objectivity, and the distribution of cognitive labor (Kitcher, 1990; Longino, 1990; Solomon, 2001; Nelson, 1990), but little has been done to distinguish between various forms of groups or communities and the effects of these groups on scientific knowledge. If we are committed to a naturalized epistemology that is also normative, then we need to explore the nature of these groups more closely. I hope my own exploration has been fruitful and will encourage others to explore the nature of scientific groups in more depth.

Deborah Perron Tollefsen

Deborah Tollefsen received her Ph.D. from Ohio State University in 2002. Her research and teaching interests are in the philosophy of mind, epistemology, and action theory. Recent publications include: “Let’s Pretend! Joint action and young children” in Philosophy of the Social Sciences and “The rationality of collective guilt” in Midwest Studies in Philosophy.

Acknowledgment

A version of this paper was presented during a session on scientific collaboration at the Philosophy of Science Association meeting in Austin, TX in 2004. I am very grateful to Kent Staley, the organizer of the session, and to the audience for their helpful comments. I would also like to thank David Henderson, Sarah Miller, Alison Wylie, Alvin Goldman, and James Brown for their helpful comments.

References

Bratman, M. (1999). “Shared cooperative activity”. In Faces of Intention. Cambridge: Cambridge University Press, pp. 93–109.
——— (2004). “Shared valuing and frameworks for practical reasoning”. In S. Scheffler and M. Smith (eds.), Reason and Value: Themes from the Moral Philosophy of Joseph Raz. Oxford: Oxford University Press.
——— (2006). “Dynamics of sociality”. Midwest Studies in Philosophy, vol. 30, Shared Intentions and Collective Responsibility, pp. 1–15.
Carron, A.V. and S.R. Bray. (2002). “Team cohesion and team success in sport”. Journal of Sports Sciences 20: 119–26.
Dyaram, L. and T. J. Kamalanabhan. (2005). “Unearthed: the other side of group cohesiveness”. Journal of Social Science 10(3): 185–90.
Evans, C. R. and K. L. Dion. (1992). “Group cohesion and performance: a meta analysis”. Small Group Research 23 (2): 242–50.
Janis, I. (1972). Victims of Groupthink. Boston, MA: Houghton Mifflin.
Kitcher, P. (1990). “Division of Cognitive Labor”. The Journal of Philosophy 87(1): 5–22.
Longino, H. (1990). Science as Social Knowledge. Princeton, NJ: Princeton University Press.
Mudrack, P.E. (1989). “Group cohesiveness and productivity: a closer look”. Human Relations 42(9): 771–85.
Mullen, B. and C. Cooper. (1994). “The relationship between group cohesiveness and performance: An integration”. Psychological Bulletin 115(2): 210–27.
Nelson, L. H. (1990). Who Knows: From Quine to Feminist Empiricism. Philadelphia: Temple University Press.
Solomon, M. (2001). Social Empiricism. Cambridge, MA: MIT Press.
——— (2005). “Social epistemology of science.” Paper for Inquiry conference on developing a consensus research agenda. Rutgers University, Feb. 16–18.
——— (2006). “Groupthink vs. The Wisdom of the Crowds: The social epistemology of deliberation and dissent”. Southern Journal of Philosophy, volume 44, Supplement: 28–42.
Staley, K. (2004). The Evidence for the Top Quark. Cambridge: Cambridge University Press.
Traweek, S. (1992). Beamtimes and Lifetimes: The World of High Energy Physicists. Cambridge, MA: Harvard University Press.
Watts, J.C., Woods, D.D., and Patterson, E.S. (1996). “Functionally distributed coordination during anomaly response in space shuttle mission control.” In Human Interaction with Complex Systems. Los Alamitos, CA: IEEE Computer Society Press, pp. 68–78.
Watts, J.C., Woods, D.D. and Patterson, E.S. (1997). “A cognitive analysis of functionally distributed anomaly response in space shuttle mission control”. (CSEL Report 1997-TR-O2) The Ohio State University, Cognitive Systems Engineering Laboratory.
Wylie, A. (2006). “Socially naturalized norms of epistemic rationality: aggregation and deliberation”. Southern Journal of Philosophy, volume 44, Supplement: 43–8.

Notes

1. “When I read Longino’s work, my own work seems cynical in contrast. I do not expect scientists to be able to establish that much of a democratic community” (Solomon, 2005, 14).

2. “There is no clear operational definition of this construct and no clear strategy to measure it” (Dyaram and Kamalanabhan 2005).

3. Bratman combines these three features (mutual responsiveness, commitment to joint activity, and commitment to mutual support) in order to provide the following analysis of shared cooperative activity (SCA):

Our J-ing is a SCA only if:

(1) (a) (i) I intend that we J.

(1) (a) (ii) I intend that we J in accordance with and because of meshing subplans of (1) (a) (i) and (1) (b) (i).

(1) (b) (i) You intend that we J.

(1) (b) (ii) You intend that we J in accordance with and because of meshing subplans of (1) (a) (i) and (1) (b) (i).

(1) (c) The intentions in (1) (a) and in (1) (b) are not coerced by the other participant.

(1) (d) The intentions in (1) (a) and (1) (b) are minimally cooperatively stable. Stability of intention ensures that there is a commitment to help in relevant circumstances.

(2) It is common knowledge between us that (1). (1999, 105)

4. As Bratman puts it:

One’s intentions concerning specific activities normally pose, in light of demands for coherence, problems of how to fill in one’s associated partial plans of action with specifications of means and the like; and they constrain solutions to those problems by way of demands for consistency. The work-providing role of these intentions consists in part by posing problems and constraining solutions. In filling in one’s plans, however, one will typically need to weigh various pros and cons concerning alternative means or the like. And here one’s self-governing policies can provide a relevant background framework of commitments to treating certain considerations as having weight or other kinds of justifying significance in such deliberations. (2004, 13–14)

5. Bratman defines shared valuing in the following way:

For us to value X is, in a basic case, for us to have a shared policy to treat X as justifying in shared deliberation in which we intend to engage and by which we intend to be guided. So, in the basic case, you and I have a shared policy of treating X as justifying in relevant shared deliberation when, in a public context, we each have a policy that favors that treatment by us by way of meshing sub-plans, these policies are interlocking, and their persistence is appropriately interdependent and recognized as such. (2006, 3)

6. See Staley (2004) for an extremely interesting discussion of the collaboratives involved in the discovery of the top quark.

7. Compare Bratman’s (2004, 15) discussion of the same phenomenon in the individual.

8. Thanks to Alison Wylie for pointing this out to me.

9. See Traweek (1992) for a discussion of the role of detectors in particle physics and a fascinating account of the social structure of the field.

10. Oncologists and those working on the treatment of cancer rarely distinguish between treatment and research. Treating patients with cancer is an ongoing experimental endeavor.

11. A reviewer has suggested that there are many cases where groups that are probably guilty of groupthink manage to engage in productive deliberation with other groups. String theorists and loop quantum gravity groups, for instance, are forced to engage with one another at conferences in spite of being highly competitive. This may be. But to the extent that their deliberation is productive and they do listen to one another, they have managed to overcome the detrimental effects of groupthink. I suggest in the main text that an institutional policy that forces groups to engage with one another may work to overcome the problem. Attending and presenting material at scientific conferences may be just the sort of “policy” that is needed, and it is in place in many fields of research. I would note also that string theorists and loop quantum gravity theorists are not engaged in a shared cooperative activity. They are working separately on similar issues. These sub-groups may be identified as part of a larger group endeavor, quantum physics, which is cooperative in nature and is marked by shared attitudes and goals.

12. See Carron and Bray (2002) for a discussion of the value of cohesion in athletic teams.
