One of the defining characteristics of current U.S. educational policy at all levels is a focus on using evidence, or data, to inform decisions about institutional and educator quality, budgetary decisions, and what and how to teach students. This approach is often viewed as a corrective to the way that teachers have made decisions in the past—on the basis of less reliable information sources such as anecdote or intuition—and is seen by advocates as a core feature of successful educational reform (Mandinach, 2012). Underlying the current push for data-driven decision-making (hereafter DDDM) is the idea of continuous improvement, which refers to systems that are designed to continually monitor organizational processes in order to identify problems and then enact corrective measures (Bhuiyan & Baghel, 2005). In education this model has been widely adopted and is often associated with large data sets that are analyzed with sophisticated algorithms to identify which states, districts, and schools are succeeding or failing according to federal and state accountability criteria (Darling-Hammond, 2010).
Yet research on data use in K-12 settings has demonstrated that the provision of data alone does not magically lead to improved teaching and learning. This is because DDDM is not simply a matter of giving educators data reports; it involves translating these data into information and actionable knowledge that administrators and teachers can apply to current and future problems (Spillane, 2012). Imagine a principal and group of teachers struggling to understand precisely what voluminous student achievement data reports mean in terms of student advising, curriculum change, and classroom teaching. Each person will necessarily interpret the data through their own unique perspectives and experiences. Their situation within a particular school or institution will also influence how they interact with data, including the social networks, cultural norms, artifacts (i.e., designed objects), policies and procedures, and practices that collectively shape how people think and act within complex organizations (Coburn & Turner, 2011; Halverson, Grigg, Prichett, & Thomas, 2007).
Such insights into the processes of sense-making as a situated phenomenon have led to a growing body of research on data use in K-12 contexts known as “practice-based research,” which focuses on how educators actually think, make decisions, and work in specific situations rather than on describing the effects of interventions or prescribing best practices (Coburn & Turner, 2012; Little, 2012). In seeking to understand the impacts of the environment on data practices, this line of inquiry emphasizes the cultural aspects of data use, where educators engage in routinized practices with colleagues while using shared language and tools to conduct their work (Spillane, 2012). Given documented challenges with the effective institution of DDDM in schools, particularly at the classroom level, such insights can be an important tool to improve interventions by ensuring that they are aligned with or responsive to the norms and practices of specific organizations, as opposed to a “top-down” approach, which is far less effective at achieving reform (Fullan, 2010; Mandinach, 2012; Spillane, Halverson & Diamond, 2001).
What does this all mean for higher education? Policymakers and post-secondary leaders are devoting considerable effort towards introducing a “culture of evidence” to higher education that is not dissimilar to the data-based accountability movement in K-12 education (Morest, 2009). This is evident in performance-based funding (Hillman, Tandberg & Gross, 2014), institutional rating systems (Kelchen, 2014), and the increasing use of data mining and analytics (Lane, 2014; Picciano, 2012). At the classroom level, some argue that the use of predictive modeling can improve teaching and learning through learning analytics, which is seen as an evidence-based way to tailor instruction to student needs and to generally improve faculty decision-making (Baepler & Murdoch, 2010; Wright, McKay, Hershock, Miller, & Tritz, 2014). Taken together, these developments indicate that higher education has entered an accountability phase not unlike that in the K-12 sector at the beginning of the 1990s.
Thus, a pressing question facing higher education is whether the lessons learned...