
Assessment versus Innovation

Cathy Davidson

Most of us think that the current emphasis on assessment is a contemporary phenomenon. In fact, the rationale for testing, grading, assessing, and evaluating in a quantified fashion goes straight back to the dawn of the assembly line and the modern office; back to the beginning of education schools and business schools. If you look at most educational institutions, corporate HR departments, and government agencies today, they have adopted forms of evaluation that bear the legacy of methods designed in the early twentieth century to make evaluating the quality of people and their work as easy as inspecting a Model T as it rolls off the assembly line. The byword of the Model T is that you can have it in any color so long as it's black. One size fits all. We're still judging as if we're trying to ensure that uniform, efficient sizing up of human achievement, accomplishment, effort, and productivity.

The world has changed in the last two decades, but evaluation methods have not. We have entered a new era of distributed, customizable knowledge, where tasks are shared and accomplishments are iterative—in the sense that others can emend the result, that improvement is continual, and that participation is the desired goal. That's how the Internet was built, and how the Firefox browser and Apache server are both sustained and maintained. Yet our prevailing methods of assessment presume that nothing has changed since Ford rolled out his first automobiles, and that the goal is exactly, precisely the Model T.

More and more, assessment is detached from the standard of excellence it is supposed to measure. Because of the growing mismatch between the ways we work and learn today and the antiquated—and increasingly rigid—forms of assessment to which we subject ourselves and others, it's time for a major rethinking. At my workplace, I am required to provide an assessment of those I supervise. That's fine. But I'm also required to rank them. Since I spend the year working hard—we all do—to improve how we work together as a collaborative team, I can think of nothing more harmful to what we accomplish together than saying Person 1 is better than Person 2. That method of assessment undermines the efficiency and excellence of the team. It is also arbitrary. If I am a truly good supervisor, working throughout the year to ensure that each person performs not only to his or her potential but to the specific requirements of his or her job, I am not trying to encourage my teammates to compete against one another but to strive, together, for excellence. If one member is not performing to full potential, it is my job to say where improvement is needed and what the path to that improvement is. It is not even relevant to note that he or she happens not to be as good as Sarah or Johnny: that is not aiming high enough. It is merely aiming relative to our small group. Such a comparison is gratuitous and arbitrary, relevant not to his or her job but to who happens to work nearby. It is destructive of the management goals that, as a supervisor, I set and aspire to throughout the year.

I recently spent time with a British scholar who noted that the new government promotion and salary guidelines require her to produce four refereed articles a year. Why four? Do two great articles count for less than four that may not be great? Is that how we measure intellectual productivity? Does one refereed book not count? This standard is harmful to the sciences, since it says that publishing four works a year matters more than the major scientific find that might result in one hugely influential and important article in due time—not four turned out on someone else's schedule. And in her field of film studies, where a book has long been deemed more important than articles, it also means the arbitrary application of another discipline's arbitrary standard to her own. It undercuts excellence in all fields.

More and more of us experience such discrepancies. The rigidity of contemporary assessment may well turn out to be a death knell. Practices are often most stringently enforced when they no longer have real utility, just before they are about to be transformed or discarded. In the meantime, many of us are stuck with assessment methods that inhibit excellence, impede creativity, and serve as the antithesis of innovation. The measure may well be simple and efficient. The tragedy is that, in many cases, we have reached a binary: assessment versus innovation.
