This article discusses the formidable challenges that the advent of big data poses to the digital humanities broadly and proposes some ways the Korean studies community can prepare to navigate these uncharted waters. Standard digital humanities training in data mining, text analysis, mapping, network science, and machine learning will be developed and refined over the coming years, as will research concerning the ephemeral nature of new media, web archives, and the ethics of artificial intelligence. Yet I contend that established responses to the digital transformation of the humanities, while timely and necessary, will prove inadequate for handling petabyte- and exabyte-scale born-digital sources. In the Zettabyte Era, more data is processed in real time than is contained in all of the records produced from early times to the 2010s. To make sense of the current information regime, we need critical reflection on, and comparison with, the classical internet age of the 1990s, the personal computer revolution of the 1980s, and early modern print cultures. This exercise will allow us to situate the humanities in an age of big data as at once an extension of traditional humanities research and something foreign to it.