
5 The Rain Man Cometh—Again

For colleges and universities October has traditionally been a tough month—growing darkness, impending rain and cold, the creeping realization that the football team won’t win that many games, and, to make matters worse, an opportunity for really bad news. It was on an October day in 1987 that the stock market experimented with free-fall, jangling the nerves of every institution whose endowment included equity holdings. Two years later, on Friday the 13th of October, the market flirted with a similar decline. For residents of the San Francisco Bay area, October is now marked as the month of the “Little Big One,” the seismographic event that shook all of our foundations. And it was October 2007 when fires scorched Southern California. In October 2008 colleges and universities were reminded again that the stock market taketh as well as giveth.

In the 1980s and 1990s, October was also the month of reckoning for higher education. October 16, 1989—Black Monday—which that year came neatly book-ended by the market plunge and the earthquake—marked the publication by U.S. News and World Report of its annual rankings of institutions. Under the soft-sell title “America’s Best Colleges” and in the breathless prose of a Sunday supplement, U.S. News again offered up a collegiate telling of who’s in, who’s out, who’s hot, and who’s not.

A True Phenom

Now the rankings are an American icon. Though, as it turned out, there were myriad schemes for comparing and judging American colleges and universities, all you have to say today is the “rankings” word or, as is often the case, the “dreaded rankings,” and everyone immediately understands you mean the U.S. News rankings. By the time the U.S. News collegiate rankings celebrated their twenty-fifth anniversary in 2007, the enterprise had spawned a veritable tribe of rankings that told Americans about the best law schools and graduate schools and medical schools and more. Not surprisingly, the rankings have become the subject of extended analyses, most of which are designed either to discredit them or to discover what they truly measure. As every right-thinking academic knows, the rankings cannot possibly measure what they propose to measure—that immeasurable quantity, academic quality.

Then, as now, there were three basic ways to attack the rankings. The easiest was to point out that the rankings were silly; the numbers were arithmetically precise but largely without meaning. U.S. News had started with a clever idea: ask college and university presidents to list what they thought were the best institutions for an undergraduate education. Even if the results of what became known as the beauty contest reflected a bias in the magazine’s choice of presidents, most knowledgeable observers were still intrigued by the outcome. Although there were some notable as well as curious omissions, few readers doubted that those at the top of the presidents’ list belonged there. It may have been gossip, but good gossip sold a lot of magazines.

The problem was that the losers in these early polls wouldn’t accept the result. With unexpected force—after all, U.S. News and World Report was a not particularly important magazine—those slighted by the poll argued that the rankings were too simplistic, too much a product of fading reputations and old-school networks. U.S. News responded with science.
Starting in 1989, the annual rankings issue included, in addition to the results of the beauty contest, a variety of statistics the editors presented as objective measures of institutional quality. At this point, things really got murky. Most measures reflected educational inputs rather than outputs. How selective was the institution? What was the average SAT/ACT score of the freshman class? What was the student/faculty ratio? How much money did the institution have to spend on undergraduate education? Many within and a few without higher education asked what had happened to that old-fashioned notion that educational quality meant good teaching, engaged faculty, and industrious students.

There were also problems with the statistics themselves. Some numbers, it turned out, counted for more than others, though the reader was never told exactly how much more or why. Some institutional resources were counted twice, first as revenue and then as expense. Other revenues did not count at all—tuitions, for...
