Abstract

The digital humanities tends either to distant-read enormous data sets or to microanalyze the linguistic features of single works. "Big data" projects use software to visualize massive sets of publishing information spanning millions of volumes, revealing historical patterns that individual scholars could not otherwise detect. The main weakness of big data methodologies is that they cannot actually read the works they survey. The microscopic approach of text mining offers analogous benefits and drawbacks at the scale of the single text. This article finds a middle ground by applying both techniques to a single issue, the September 1918 Little Review, examining the combined use of human markup and automated statistical analysis.

Additional Information

ISSN: 2152-9272
Print ISSN: 1947-6574
Pages: 110–135
Launched on MUSE: 2014-08-13
Open Access: No