The digital humanities tends either to distant-read enormous data sets or to microanalyze the linguistic features of single works. "Big data" projects use software to visualize publishing metadata drawn from millions of volumes, revealing historical patterns that scholars could never uncover through reading alone. The main weakness of big data methodologies, however, is that they cannot actually read the works. The microscopic approach of text mining presents complementary benefits and drawbacks: it yields fine-grained linguistic evidence but confines attention to a single text. This article finds a middle ground by combining the two techniques, pairing human markup with automated statistical analysis, to read the September 1918 issue of the Little Review.