Reviewed by:
  • Developing Metrics for Assessing Engineering Instruction: What Gets Measured Is What Gets Improved
  • Cecilia Rios-Aguilar and Heather Metcalf
National Academy of Engineering. Developing Metrics for Assessing Engineering Instruction: What Gets Measured Is What Gets Improved. Washington, DC: The National Academies Press, 2009. 52 pp. Paper: $21.00. ISBN-10: 0-309-13782-9.

Faculty face many demands on their time and resources and must continually prioritize their efforts, particularly with an eye toward career advancement. At research-intensive institutions in particular, faculty are aware that research contributions are the most important measure in promotion and tenure decisions, while teaching and service are less valued (Baez, 2000; Serow, 2000).

In Developing Metrics for Assessing Engineering Instruction, the National Academy of Engineering (NAE) argues that an increased demand for accountability in higher education has created a need to document the quality and effectiveness of teaching and learning, particularly in engineering. The report points out that research institutions award many engineering degrees each year, complicating the balance of faculty responsibilities and making the need to encourage effective teaching in these colleges and departments even greater.

A committee of engineering educators, teaching assessment experts, and leaders in faculty professional development developed this proposed approach for evaluating effective teaching with the goal of “foster[ing] greater acceptance and rewards for faculty efforts to improve their performance of the teaching role that makes up a part of their faculty responsibility” (p. 1).

The committee’s report is divided into six sections: (a) background, (b) principles of good metrics, (c) assumptions, (d) what to measure, (e) measuring teacher performance, and (f) recommendations. The first section presents motivations for using metrics to measure teaching effectiveness in engineering, many of them aligned with academic capitalism (Slaughter & Rhoades, 2004). The authors point out that few faculty have formal training in effective teaching. Making evaluation more critical are such developments as the rapid advancement of high-bandwidth technologies, globalization, the increasingly public nature of engineering, and growing scrutiny and accountability demands on higher education, all of which have affected engineering education. The authors claim that, to be useful, the metrics must make efficient use of faculty time, have faculty buy-in, and receive continual support from external stakeholders.

The second section outlines eight principles to ensure that the proposed evaluation system will be widely acceptable and sustainable: (a) The evaluation system must mesh with institutional mission, goals, and structure; (b) Deans and department chairs should be central in its development; (c) Faculty should be integrally involved in the creation of the metrics; (d) The evaluation system should reflect the complexity of teaching; (e) Participants must reach consensus on the fundamental elements of effective teaching; (f) Teaching evaluations should include both formative feedback and summative evaluation; (g) The evaluation system must be flexible; and (h) Evaluations should use multiple data sources and methods at multiple time points.

The third section spells out the report’s underlying assumptions: (a) A well-developed and meaningful mechanism for evaluating teaching effectiveness will improve teaching and learning, (b) All faculty members are able to improve their teaching, (c) Many faculty members are intrinsically motivated to improve their teaching, and (d) Administrators will use data collected from the evaluations to make fair and accurate judgments. They note that, for the metrics and evaluation system to work, faculty must be able to trust the administration.

The fourth section details which aspects of teaching should be measured. Determining what to measure, the report argues, depends on the values held by educational institutions and the faculty within them. The report claims that discerning institutional values and applying them consistently in the evaluation process controls the subjectivity inherent in all evaluation.

Some values can be found in the “weights” assigned to faculty roles. However, the report suggests that, to better reflect the complexity of faculty work assignments, these weights can be expressed as ranges rather than fixed values. The report proposes measuring five skills: content expertise, instructional design, instructional delivery, instructional assessment, and course management. Pedagogy, while closely related to the five proposed skills, is not a suggested area for evaluation despite its significance to teaching effectiveness and excellence.

The fifth...