*Perspectives in Biology and Medicine*, Volume 49, Number 2, Spring 2006, pp. 305–308. DOI: 10.1353/pbm.2006.0030

*The Nature of Scientific Evidence: Statistical, Philosophical, and Empirical Considerations.* Edited by Mark L. Taper and Subhash R. Lele. Chicago: Univ. of Chicago Press, 2004. Pp. 448. $85 (cloth), $30 (paper).

The use of statistics to assess the evidential import of data is a recent development in the history of science. It was not until the 1940s that statistics became a regular part of publications in medicine. Today, every well-designed study is expected to contain some kind of statistical analysis. But the increasing importance of statistics in many scientific fields has not necessarily improved the quality of statistical practice in published articles. In medicine, despite the introduction of statistical review at the major journals, peer review has found a nearly constant rate of statistical shortcomings (60–70%) in articles published between 1951 and 1995.

That said, any book addressing general statistical issues for a broad audience is most welcome. In *The Nature of Scientific Evidence*, editors Mark Taper and Subhash Lele promise to combine "statistical, philosophical and empirical considerations" of evidence as "a summarization of data in the light of a model." They invited authors from ecology, statistics, and philosophy, and grouped their contributions into five sections: "Scientific Process," "Logics of Evidence," "Realities of Nature," "Science, Opinion, and Evidence," and "Models, Realities, and Evidence." Each chapter is followed by one or two commentaries from peers and a concluding rejoinder from the author. This format ensures controversy throughout the book.

The book begins with a useful introduction to basic statistical and evidential concepts, followed by a contribution by the ecologist Brian Maurer, who outlines two different approaches, "inductive" and "deductive," to the study of ecological processes. The former is preferable for a field of inquiry that is still in its infancy, whereas the latter is suitable for fields that have a developed theory. This is an interesting suggestion, as in philosophy of science, inductivism and deductivism refer to two mutually exclusive accounts of scientific method. Maurer's considerations suggest that these are not all that exclusive; each has its proper applications. As one of the commentaries to Maurer's paper makes clear, however, the designations "inductive" and "deductive" are somewhat misleading, as both modes of research must use deductive as well as inductive inference; the two modes might more aptly be termed "exploratory" versus "theory-driven" research. Maurer argues that frequentist (Neyman-Pearson) statistical tests are more important in theory-driven science and that Bayesian methods have a (limited) use only in exploratory science. However, some of the other authors of the book take exception to this claim, as will some readers.

In fact, the volume is living testimony to the current lack of consensus, in the foundations of statistics and in the philosophy of science, concerning the scope and adequacy of different statistical methods. One of the central disagreements concerns whether an inferential procedure should result in the assignment of a probability that the hypothesis under consideration is true (something like the betting rate an ideally rational agent would accept in a wager that the hypothesis is true). Bayesians hold that Bayes's theorem can be used to calculate such probabilities. This approach, however, requires the specification of prior probabilities (i.e., the probability of the hypothesis before the evidence is taken into account). Because these prior probabilities are basically guesses, they introduce a subjective element into scientific inference that many scientists and philosophers of science reject. "Frequentists" argue that only statistical testing in the manner of Neyman and Pearson makes scientific inference an objective affair, because the calculation of error probabilities (e.g., the probability of committing a type II error) requires no prior probabilities. In contrast to Bayesian betting rates, error probabilities are fully objective: they measure the frequency of errors in a reference class of statistical tests. A third way is the so-called "likelihood paradigm." This approach derives from Bayesianism, but it emphasizes the strength with which a given body of data supports one hypothesis relative to another, rather than the subjective probability of a hypothesis.
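The contrast between the likelihood and Bayesian approaches can be made concrete with a toy calculation. The following sketch uses a hypothetical coin example of my own (it is not drawn from the book under review): the likelihoodist reports only the ratio of support the data give two rival hypotheses, while the Bayesian must also supply a prior before Bayes's theorem yields a probability for either hypothesis.

```python
from math import comb

def likelihood(p, heads, tosses):
    """Binomial likelihood of observing `heads` in `tosses` flips
    if the coin's true heads-probability is p."""
    return comb(tosses, heads) * p**heads * (1 - p)**(tosses - heads)

# Hypothetical data: 14 heads in 20 tosses.
heads, tosses = 14, 20
L_fair   = likelihood(0.5, heads, tosses)  # H1: the coin is fair
L_biased = likelihood(0.7, heads, tosses)  # H2: the coin favors heads

# Likelihood paradigm: report only the relative support of the data.
lr = L_biased / L_fair

# Bayesian approach: Bayes's theorem additionally requires a prior;
# here a subjective 50/50 prior over the two hypotheses.
prior = 0.5
posterior_biased = (L_biased * prior) / (L_biased * prior + L_fair * (1 - prior))
```

With these numbers the likelihood ratio comes out a little above 5, so the data favor the biased-coin hypothesis; but only the Bayesian, having committed to a prior, can turn that ratio into a posterior probability that the coin is biased. Changing the prior changes the posterior while leaving the likelihood ratio untouched, which is exactly the subjective element the frequentists object to.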

In addition to the conflict between "likelihoodists" and "frequentists," the book...
