Reviews in American History 35.2 (2007) 165-175

Guns, Murder, and Probability:
How Can We Decide Which Figures to Trust?
Randolph Roth

As historians, we take pride in our openness to new ideas. That is why, I think, so many of us were enthusiastic about Michael Bellesiles's Arming America.1 Its thesis—that few Americans used, owned, or cared about guns before the mid-nineteenth century—was certainly congenial to scholars who supported gun control, because it suggested America's homicide problem was caused by an increase in gun ownership and the creation of a "gun culture." But Arming America had a more fundamental appeal for historians. It proved that they could make important discoveries, that they could confound received wisdom with bold hypotheses and careful research, and that they could play a crucial role in public life. Unfortunately, Arming America was wrong about early America.

We also take pride as historians in our skepticism. We admire critics who put the dominant school of thought on the defensive, as Robert Dykstra does in his witty essays on the "new" Western history.2 He believes that "this on-going postmodernist exercise" has done a disservice to the West by exaggerating its violence and that "[d]espite all the mythologizing, violent fatalities in the Old West tended to be rare rather than common." He blames this "misconception" about Western violence on the mathematical incompetence of its recent scholars, who commit what he calls "the fallacy of small numbers." These scholars "inflate" the West's violence by calculating homicide rates for small populations, a practice that can turn the lone homicide that occurred in Dodge City in 1880 (population 1,275) into an annual murder rate of 78.4 per 100,000 persons per year: 13 times the homicide rate in the United States today. The "new" Western scholars divide the number of homicides in a given year by the local population and multiply by 100,000, as the Federal Bureau of Investigation does:

Annual homicide rate = (number of homicides / population at risk) × 100,000
                78.4 = (1 / 1,275) × 100,000
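The FBI-style calculation can be checked in a few lines of Python; the figures (one homicide, population 1,275) come from the Dodge City example above.

```python
def annual_homicide_rate(homicides, population):
    """Homicides per 100,000 persons per year (FBI convention)."""
    return homicides / population * 100_000

# Dodge City, 1880: one homicide in a town of 1,275
rate = annual_homicide_rate(1, 1_275)
print(round(rate, 1))  # 78.4
```

As the formula makes plain, with a population this small a single death swings the rate from zero to nearly 80 per 100,000.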

What, Dykstra asks, would the homicide rate have been in Dodge City in 1880 had Henry Heck not quarreled with John "Honcho" Gill, or if Gill's bullet had missed? Zero.

Would it be possible to calculate homicide rates for small Western communities by tracking their murders over a number of years? Dykstra thinks not, because the total number of murders was still small: in Dodge City, a total of 15 from 1876 to 1885. How can a town where only 1.5 people were murdered each year be more violent than Miami, which lost 555 people in 1980, or Los Angeles, which lost 2,180?
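Pooling several years, as the paragraph above considers, simply extends the same formula to person-years. A minimal sketch, assuming (purely for illustration, since the source gives no average) that Dodge City's population held steady at its 1880 census figure of 1,275 across 1876–1885:

```python
def pooled_homicide_rate(total_homicides, avg_population, years):
    """Homicides per 100,000 person-years over a multi-year span."""
    return total_homicides / (avg_population * years) * 100_000

# 15 homicides over ten years (1876-1885); holding the population at the
# 1880 census figure of 1,275 is an assumption, not a documented average.
rate = pooled_homicide_rate(15, 1_275, 10)
print(round(rate, 1))  # 117.6
```

Under that assumption the pooled rate is still far above modern national figures, which is the point at issue between Dykstra and his critics.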

Dykstra's common-sense approach has won praise from many scholars. Many have distanced themselves from recent studies of Western violence or have dismissed them altogether. But Dykstra's skepticism is misplaced. The "new" Western scholars he criticizes, such as Clare V. McKanna, Jr. and David Peterson Del Mar,3 are right. The West was extraordinarily homicidal.

Why do we as a profession have trouble deciding which quantitative studies are sound and which are not? Why do we mistake faulty studies for good ones and vice versa? Like everyone else, including statisticians, we are not very good at judging probabilities. Humans are proficient at many things, like counting, but as psychologists have discovered time and again, we have trouble figuring odds or risks or rates on the fly. Our common sense is a hindrance. That is why we need tools to help us determine which quantitative studies are wrong and which are right. Thanks to statisticians, those tools are at our disposal. It's simply a matter of getting acquainted with them in a statistics course, keeping a few textbooks at hand, and thumbing through them occasionally for formulas that can help us answer quantitative questions.

Bellesiles claimed, for instance, using evidence from probate records, that only 15...
