In lieu of an abstract, here is a brief excerpt of the content:

  • Model Talk: Transparency, Percent Prediction, and the 2018 FiveThirtyEight Election Forecast
  • Taylor Black

Theatre Journal’s timely special issue on Post-Fact Performance, published in December 2018, brings together a number of fast-paced, volatile subjects, particularly electoral politics and digital-era communication. As such, it is no surprise that even the quickest of academic timelines requires updates. In my essay for the print journal, titled “The Numbers Don’t Lie: Performing Facts and Futures in FiveThirtyEight’s Probabilistic Forecasting,” I examined the US poll aggregator and political forecast news site FiveThirtyEight’s coverage of the 2016 US presidential election through a performance studies lens, focusing particularly on how the public experienced data-driven forecasts as predictions of the future inflected by theatrical tropes of soothsaying. One notable contributor to the sense of data journalists as fortune tellers in 2016, evidenced in FiveThirtyEight’s own reflections as well as in external analysis, was partisan-motivated over-reading of the percent probability offered by the election model. These concerns are echoed in postmortem conversations among political and social scientists and in other forecasting projects. Editor-in-chief Nate Silver and his team had difficult questions to answer going into the also-contentious 2018 midterms; perhaps most urgent among them were how to dampen expectations and whether it is possible to shift readers’ desires to favor uncertainty.

In their 2018 midterms coverage, particularly of the race for control of the House of Representatives and later the Senate, Silver and his team altered their election forecasting model to directly address some of the questions raised by their 2016 coverage. Some of these changes reflect a difference in kind, as tracking a large number of congressional races is notably different from covering a presidential race. Other changes, however, are intended to affect reader perceptions of the forecast, and these suggest a possible reopening of the question of how data journalism performs future prediction.

Even a first visual overview of the two models demonstrates a significant shift in focus: the model in 2016; the model in 2018.

The principles and overall methodology of the statistical analysis, Silver says in his methodology statement for the 2018 House forecast, are familiar from previous models. What has [End Page E-4] changed in 2018 is how readers are encouraged to approach the model, and what kinds of interpretations are made more or less accessible. Importantly, the election forecast remains probabilistic and continues to speculate on future possibilities, so the potential for reading uncertain gestures about future events as a vision of the future appears to remain. The range-based uncertainty of forecasting, as Silver states, is crucial to FiveThirtyEight’s effort “to develop probabilistic estimates that hold up well under real-world conditions.” However, the way it gestures toward future outcomes has changed substantially.

Figure 1. FiveThirtyEight’s 2016 presidential election forecast, data as of August 11, 2016.

One of the most misread features of the 2016 model was the “Now-cast,” which forecast “who would win if the election happened tomorrow” from any given point in the data. The Now-cast thus performed the most overt act of fortune telling, in that it suggested a percent outcome based strictly on available data, and it generated a confusion much lamented by Silver on Twitter during 2016: people read the Now-cast as a forecast. In 2018, this approach has been cut entirely, and the aesthetic alone has notably shifted. Rather than foregrounding a percentage, the primary visual indicator is a probability bar graph. This visualization underscores the scientific nature of data analysis rather than offering the desirable, but more subject-to-interpretation, glimpse into the future of the Now-cast’s percentage. This approach also contributes to ongoing questions in data visualization, where the aesthetics of “beautiful” data intersect with questions of how to accurately convey information, a challenge of form familiar to artistic debates in general. The 2018 model is offered in three degrees of complexity, what Silver calls “the cheeseburger menu.” Its three variations (“lite,” “classic,” and “deluxe”) vary the amount of non-polling peripheral data used (for instance, candidates’ fundraising data), further diversifying a method used in [End...


Additional Information

pp. E-4–E-7, Open Access
