Abstract

For many who are not immersed in the world of the assessment practitioner, “accountability” and “assessment” are perceived as interchangeable terms. They are not.

The New England Educational Assessment Network (NEEAN) Board has been discussing the relationship between these two agendas, and NEEAN’s role in both, as a part of our strategic planning process. We believe that the NEEAN Board and the organization’s members have an important responsibility for helping to shape and inform both the accountability and the student learning assessment conversations, and the institutional actions associated with each. In this reflection, we suggest a framework for these conversations.

The accountability agenda in higher education and, more specifically, accountability for student learning outcomes is receiving a lot of attention these days. Following the Spellings Commission Report in 2006, higher education associations moved quickly to develop accountability systems that would demonstrate responsiveness to external requests for information. For example, the American Association of State Colleges and Universities and the Association of Public and Land-grant Universities designed the Voluntary System of Accountability (VSA), which was followed by the community college version, the Voluntary Framework of Accountability (VFA), and another system for private colleges and universities, the University and College Accountability Network (U-CAN). In our New England region, the Massachusetts Department of Higher Education is in the process of developing the “Vision Project.” In its own way, each of these initiatives attempts to develop methods for comparing institutional effectiveness using student learning outcome metrics and various other data.

In the wake of these and other accountability initiatives, many of us have participated in a variety of conversations about how our campuses should respond to these initiatives and what kind of investment we should make in participating. As an illustration, in a recent conversation a faculty member-cum-academic administrator reviewed the latest information on one of the accountability initiatives. As the discussion moved to some of the reservations individuals had with the particulars of this plan, this administrator sat back with a heavy sigh and said, “You know, it’s not as though I am or our institution is opposed to conducting learning assessments . . . or to being held accountable.” He then asserted that in higher education we conduct all kinds of assessment and went on to describe a curricular reform effort in which he had been engaged earlier in his faculty career: His department redesigned one of its gateway courses, changing the way it was taught and the kinds of assignments and activities required, and increasing its academic rigor.

Because this change represented a major institutional and departmental investment, the department also conducted a prerevision/postrevision assessment of student performance (using a test instrument widely accepted in the discipline) and gathered student feedback about the course experience. Interestingly, students complained about the increased workload. However, the assessment of learning showed substantial gains in student performance under the new curriculum, and these improved results were maintained over subsequent offerings of the course. In addition, the test allowed comparison with students in similar academic programs at other institutions, and those comparisons showed that the department’s students improved relative to their peers on other campuses. The results provided the evidence necessary to quell student complaints, bolster administrative and departmental support for the change, and demonstrate the value of systematic evidence of student performance.

So, what in particular makes this example of student learning assessment so memorable and compelling to this administrator (besides the fact that it provided good news to the individuals involved in the curricular reform effort)? Based on what we know about good practice in student learning assessment, we have some suggestions:

  1. It is context specific and embedded in the curriculum: The faculty in the department identified a problem, worked together to develop a potential curricular response, designed an assessment strategy, and identified a set of assessment tools appropriate for the learning objectives of the revised course and the major. In other words, this was not an “off-the-shelf” effort, or one dictated by outside forces unconnected to faculty members’ curricular priorities. Instead, it was tied to the intellectual and pedagogical investments of the faculty.

  2. It is methodologically sound: The research design is appropriate (gathering evidence from a representative sample of the students affected by the curriculum, in this case nearly 100 percent; using a pre/post design suited to the research question; and extending data collection beyond the initial implementation period to test the stability of the results over time). The data-gathering methods are sound (the assessment test selected by the faculty is closely aligned with course purposes and widely accepted by disciplinary experts, and additional information was gathered to provide multiple perspectives on the student experience).

  3. It informs action: The assessment was used to test whether an innovation had the intended consequences, and the results were used to make judgments about whether to continue the curricular practice or consider another type of action that might be more effective. In this case, the results confirmed the effectiveness of the intervention and provided the affirmation needed to support continued implementation.

  4. It breeds enthusiasm for assessing student learning: The results helped shape others’ understanding of the new course design and its value to the department and, arguably, to the campus as a whole. In this way, it demonstrated the value of evidence to inform curricular innovation and the possibilities for assessing student learning. Just as an example, all these years later, this administrator still lights up when talking about the assessment project with which he was involved.

These four characteristics capture many of the fundamental elements of effective student learning assessment practice. While each of us could offer examples of this kind of assessment effort on our own campuses, we all agree we would like to see (and would like campus leadership to encourage) more of this kind of activity at the course, program (general education and major), and institutional levels. It is, dare we say it, the kind of performance to which we think we should be held accountable.

The majority of the current accountability initiatives emphasize priorities other than this kind of assessment, focusing instead on the capacity to compare student outcomes, learning and otherwise, across higher education institutions. By necessity, these plans emphasize (1) learning objectives and performance criteria that are decontextualized (in other words, disassociated from the major, the specific mission and learning objectives of a campus, and the particular emphases of a given general education curriculum); (2) outcomes and performance standards that are basic enough to “represent” a wide range of institutions; (3) data-gathering methodologies and reporting techniques that emphasize a simplified and distilled representation of student performance to facilitate comparisons; and (4) assessment decisions and processes that are often distant from the classroom and the day-to-day involvement of faculty. Though this is not necessarily their intent, these initiatives foster assessment processes unconcerned with the realities of campus-based pedagogies, curricula, and faculty commitments. Applying the results of these accountability exercises to inform campus-based decisions and curricular design is often difficult, if not impossible.

While these accountability priorities appear to be at cross-purposes with the four characteristics of effective assessment outlined earlier, one might argue that these deviations from good assessment practice are worth the price in order to respond to calls for accountability.

But what if the problem of decontextualizing assessment isn’t just a barrier to using the results for internal improvements? What if this decontextualization isn’t possible in the first place? What if it simply doesn’t work? Some of the experiences and evidence gathered from the first wave of participants suggest that choosing to proceed “damn the torpedoes, full speed ahead” with these accountability frameworks might be neither methodologically nor substantively responsible (see, for example, AAC&U 2010; Borden and Young 2008; Haswell 2012; Hosch 2012; Stassen, Herrington, and Henderson 2011; University of Cincinnati 2011).

As we enter the next stage in developing statewide and national accountability systems, we now have the opportunity and responsibility to review the results emerging from the current efforts and from the campuses that have served as pilot implementers. These results raise some significant questions. Thankfully, there are a number of pioneers in the national associations and on partner campuses working to surmount the methodological and research design challenges associated with this effort.

It is important that we in higher education ask ourselves: what are our accountability standards? Will we be satisfied with a focus on the lowest common denominator, or on a set of generic skills defined narrowly enough to be measured and compared across a variety of institutional and disciplinary contexts? Or would we rather also consider students’ own diverse talents, motivations, and career aspirations? Will the response “at least it gives us something” (as a state higher education policymaker once said when asked about the methodological soundness of the accountability methods being proposed at that time) be acceptable to us? Will “something” be all right as a representation of our campuses’ effectiveness even if it means going along with approaches that are methodologically unsound or that misrepresent the very higher education purposes and goals they purport to represent?

As an organization committed to building an assessment community, NEEAN supports faculty and administrators in their efforts to conduct student learning assessment that directly informs their own practice and helps them clarify and emphasize what really matters for student learning and development. Through conferences, workshops, and this journal, NEEAN works to advance the very assessment practices outlined above: assessment in context, assessment that is methodologically sound, assessment that leads to and informs action (both action of affirmation and action for change), and assessment that breeds acceptance and even enthusiasm because its usefulness is clear.

We also hope to foster a community that can respond knowledgeably to accountability initiatives as they surface. As this conversation continues, let’s ensure that the first question we address is: accountable for what? We all have more work that we can do in collecting methodologically sound evidence of student performance, using that evidence to inform decision-making, curricular design, and pedagogy, and supporting and rewarding faculty who engage in these kinds of practical assessment efforts. That is an accountability agenda we enthusiastically embrace.

All of us in higher education should continue to consider how to appropriately demonstrate our effectiveness in enhancing student learning and performance. This work is difficult. If it were not, it would already have been accomplished.

Martha L. A. Stassen
NEEAN President (On Behalf of the NEEAN Executive Board)

Martha L. A. Stassen is Assistant Provost for Assessment and Educational Effectiveness at the University of Massachusetts Amherst. She also serves as president of the New England Educational Assessment Network.

References

AAC&U. 2010. “Assessing Learning Outcomes at the University of Cincinnati: Comparing Rubric Assessments to Standardized Tests.” AAC&U News. http://www.aacu.org/aacu_news/aacunews10/april10/feature.cfm (accessed April 19, 2012).
Borden, Victor M. H., and John W. Young. 2008. “Measurement Validity and Accountability for Student Learning.” New Directions for Institutional Research 2008 (S1): 19–37.
Haswell, Richard H. 2012. “Methodologically Adrift.” College Composition and Communication 63 (3): 487–91.
Hosch, Braden J. 2012. “Time on Test, Student Motivation, and Performance on the Collegiate Learning Assessment: Implications for Institutional Accountability.” Journal of Assessment and Institutional Effectiveness 2 (1): 55–76.
Stassen, Martha L. A., Anne Herrington, and Laura Henderson. 2011. “Defining Critical Thinking in Higher Education: Determining Assessment Fit.” To Improve the Academy 30:126–41.
University of Cincinnati. 2011. Cohort V Final Report. National Coalition for Electronic Portfolio Research (NCEPR). http://ncepr.org/finalreports/cohort5/UC%20Final%20Report.pdf (accessed April 19, 2012).

