In lieu of an abstract, here is a brief excerpt of the content:

Reviewed by:
  • University Rankings: Theoretical Basis, Methodology, and Impacts on Global Higher Education ed. by Jung Cheol Shin, Robert Toutkoushian, and Ulrich Teichler
  • Marc Cutright
Jung Cheol Shin, Robert Toutkoushian, and Ulrich Teichler (Eds.). University Rankings: Theoretical Basis, Methodology, and Impacts on Global Higher Education. New York: Springer, 2011. 271 pp. Cloth: $159.00. ISBN: 978-94-007-1115-0.

In November of 2011, I attended a conference in Juba, South Sudan, on the future of higher education in that then-four-month-old nation. In a nation crippled by war, poverty, disease, minimal infrastructure, and low levels of literacy, the development of higher education was nonetheless at the top of many agendas. While my presentation was on strategies such as developing community colleges and technical schools, other presenters emphasized, to enthusiastic audience reception, the importance of developing "world-class universities." Some further context: Of six claimed universities, only one, the University of Juba, was fully constructed. That university had not yet opened for the academic year, due to a lack of funds—and students, faculty, buildings, and equipment. The disconnections were, to this observer, surreal.

The worldwide fascination with world-class universities is well documented (e.g., Altbach & Salmi, 2011). But critical analyses of these fascinations, both technical and political, have been fairly scattered and country-specific. University Rankings: Theoretical Basis, Methodology, and Impacts on Global Higher Education is a noble and largely successful effort to address this array of issues on a global basis. Editors Jung Cheol Shin, Robert Toutkoushian, and Ulrich Teichler write from their personal perspectives in South Korea, the United States, and Germany, respectively. But their own scholarship is not restricted to those nations, and the authors that they have recruited for this book consider the issues more broadly.

Noted in brief is the history of university rankings. They are generally acknowledged to have begun with Raymond Hughes's 1925 effort to rank graduate programs in the United States based on reputation. Similar or partial efforts were largely the domain of academics themselves until the appearance in 1983 of the U.S. News and World Report's famous, or infamous, ranking of institutions largely aimed at "consumers" of undergraduate education. The editors categorized some 33 identified ranking systems around the world (in 2009) as those maintained by the media, by quality assurance agencies, and by governments.

Often cited in the volume are some of the more widely noted limitations of many ranking systems: They tend to value quantity over quality. They often focus on input variables (e.g., entering student credentials) rather than outcomes (any measurements of value added). They emphasize research output over any other university functions or objectives. Rankings tend to make unsupported assumptions about the quality of teaching by assuming spill-over from research productivity. And rankings tend to exaggerate "statistically insignificant" differences between and among universities by insistence on pure ranking, like a race, instead of clustering.

But several authors raise more complex and less obvious concerns. Rankings are a one-size-fits-all approach to evaluation that may minimize national or cultural contexts. Rankings may drive nations and higher education systems to concentrate resources in fewer institutions, when other considerations may support greater diffusion. Rankings assume that quality is a standard with universal meanings. And rankings may blind us to other roles for universities and their support.

Editor and author Teichler's chapter on the "social contexts and systemic consequences" of rankings is based largely on a meta-analysis conducted by several master's degree students under his direction in the international higher education program at the University of Kassel (Germany). Not only were the 50 source documents drawn from different nations, but the student researchers themselves, from six nations, lent multi-national perspectives to their analysis. They identified and described nine major "endemic weaknesses of rankings," with illustrations of the concepts (pp. 63-68):

  1. Weakness of data. Rankings are driven more by available data than ideal data. We settle for what we can get.

  2. Lack of agreement on quality. "Most ranking studies are 'monotheistic' in the belief that there is a single 'god' of quality" (p. 64). Considerations of multiple concepts of quality are rare...
