Abstract

In this study, results are presented from a rigorous content analysis of responses to two open-ended questions included in the Administrators’ Survey of Assessment Culture. A sample of 302 US higher education administrators provided 566 narrative responses addressing (1) the primary reason they conducted assessment on campus, and (2) how they would characterize their campus assessment cultures. Analysis revealed two meta-themes: “Organizational Structure,” including procedures, data usage, and accountability; and “Organizational Culture,” encompassing administrators’ descriptions of rituals, artifacts, discourse, values, and change related to assessment. Implications are shared for reframing and cultivating notions of institutional cultures of assessment.

Keywords

Assessment culture, institutional assessment, administrators, culture of assessment

According to William Tierney, a professor at the University of Southern California, “An organization’s culture is reflected in what is done, how it is done, and who is involved in doing it” (Tierney 2008, 24). Culture is primarily understood by examining concepts such as how messages are communicated, what symbols are shared, and how organizational leaders approach processes, procedures, and policies. By identifying and understanding underlying cultural behaviors in higher education organizations, leaders can engage in improved decision-making about key practices, including assessment.

As costs rise for a college degree, institutions of higher education are under increased pressure to conduct assessments that meet accreditation demands, satisfy constituents, and improve practices benefiting student learning (Burke 2004; Ewell 2004; Hartle 2012). Yet, the act of conducting assessment can be met with resistance, especially by faculty (Heinerichs, Bernotsky, and Rieser Danner 2015; Maki 2010; Suskie 2010; Young, Cartwright, and Rudy 2014). To increase satisfaction with the assessment process, McCullough and Jones (2014) examined faculty perceptions of assessment at one institution. The results of their study revealed that while administrative leaders believed they were providing faculty with the appropriate resources and support to conduct assessment, faculty were not always aware of the available resources. These faculty members also reported feeling inadequately prepared to perform assessment responsibilities and were dissatisfied with assessment because they were not rewarded appropriately for their time and effort.

For many years researchers and practitioners have promoted the benefits of cultivating and sustaining institutional cultures of assessment as essential to ensuring optimum student learning in US colleges and universities (Burke 2004; Ewell 2004; Ndoye and Parker 2010; Weiner 2009). Other scholars have highlighted barriers to establishing such a culture. For instance, the impetus for conducting assessments might stem from accrediting body or state government mandates, thereby increasing skepticism regarding purpose (Astin and antonio 2013; Burke 2004; Driscoll, de Noriega, and Ramaley 2006). Accreditation—rather than the improvement of student learning—is often maintained as the primary purpose of assessment (Fuller and Skidmore 2014; Gaston 2013; Haviland 2014; Maki 2010). Further complicating consensus about ideas of cultures of assessment is a lack of scholarly foundation and understanding about what a culture of assessment actually entails and how it is maintained or augmented (Dochy et al. 2007; Haviland 2014). Within a context of competing purposes for assessment, institutional leaders often are required to change or maintain an institutional culture, yet have limited scholarship to guide them in accomplishing this task (Fuller, Henderson, and Bustamante 2015; Haviland 2014).

The purpose of this study was to examine the narrative responses to two open-ended questions on the 21-item instrument, the Administrators’ Survey of Assessment Culture (Fuller 2011). Although most of the items on this instrument were designed to elicit respondents’ perspectives of the purpose of assessment at their institutions, some researchers encourage the use of open-ended questions in order to gather contextual meaning and assist with the identification of relevant issues that might not be captured in a quantitative format (O’Cathain and Thomas 2004; Swanson, Watkins, and Marsick 1997). This study focuses on addressing an overarching research question: How do US higher education administrators describe cultures of assessment? Using classic content analysis methods, data were examined to aid in the development of a framework for further theorizing about cultures of assessment that might inform higher education leadership practices and future studies.

Conceptual Frameworks

This qualitative analysis of narrative survey data was informed by two conceptual frameworks: Maki’s (2010) definition of an institutional culture of assessment and Schein’s (2010) definition of organizational culture. Maki’s principles of inclusive commitment define a culture of assessment across many facets: engaging colleagues in sustained dialogues about student learning; basing assessment on institutional values; sustaining assessment through structural processes and resources; creating partnerships across campus boundaries; and designing an environment of continuous improvement. Maki’s principles also were considered along with scholarship from other researchers (Astin and antonio 2013; Courts and McInerney 1993; Harvey and Knight 1996).

In addition to Maki’s work, we reviewed Schein’s (2010) organizational culture framework for guidance on how to classify individual comments for cultural elements. Schein identified three levels of culture—artifacts, espoused beliefs and values, and basic underlying assumptions. The observable elements of a culture are considered artifacts. These include the physical space of an organization, the stories that are shared, the language that is used, and the rituals and ceremonies that can be observed. The other two elements of culture are less tangible and can be gleaned through how individuals describe their organization. For instance, espoused beliefs and values may be confirmed across several group experiences rather than through observation of a particular phenomenon. When these experiences are identified and have a high level of consensus among members, they can be considered basic underlying assumptions and are typically very difficult to change. When used together, Maki’s and Schein’s theories provided us with viable frameworks for guiding our identification of assumptions, beliefs, artifacts, language, norms, behaviors, and other characteristics of organizational cultures and institutional assessment practices in higher education reported by the survey respondents.

Method

The research described here reflects the analysis and interpretation of narrative responses provided by a stratified random sample of US higher education administrators who responded to two open-ended questions included in the Administrators’ Survey of Assessment Culture. In organizational research, qualitative text data solicited from open-ended survey responses can provide detailed information about a topic or serve to explain or clarify quantitative data (Sproull 1988). Moreover, the use of open-ended questions in surveys serves as an effective method of further defining the context provided by respondents and clarifying nuanced ideas that could not be fully captured within the quantitative questionnaire format (O’Cathain and Thomas 2004; Swanson, Watkins, and Marsick 1997). Narrative comments can enhance researchers’ and administrators’ understanding of the phenomena being examined (cultures of assessment), as well as identify challenges and offer opportunities for improvement of organizational polices and processes (Chambers and Chiang 2012).

Sample

The national randomized sample for this study consisted of administrators from US higher education institutions who had responsibilities for institutional research and assessment. The sample of administrator respondents was selected through a rigorous process. First, a listing of all institutions of higher learning was obtained through the Carnegie Classification system. Next, those institutions that offered associate’s, bachelor’s, and graduate degrees were selected for inclusion in the study. This sampling decision reflected our intent to focus on degree-granting institutions of higher education and is supported by prior research (Kuh and Ikenberry 2009; Kuh et al. 2014). The sample was then stratified to attract a representative group of institutional leaders from different geographic regions of the United States, from associate’s degree–granting and bachelor’s or higher degree–granting institutions, and from not-for-profit and for-profit institutions. A total of 424 administrators responded to the overall survey, with 302 providing comments for one or both of the open-ended questions. Of the respondents, 94% considered themselves to be primarily administrators and 6% primarily faculty. Females comprised 56% of the sample and males 44%. Of the respondents, 50% held doctoral degrees, 49% possessed master’s degrees, and approximately 1% had bachelor’s degrees.

Instrument and Open-Ended Items

As described above, the open-ended items analyzed for this study were included as part of a larger, predominantly quantitative, mixed-item administrator questionnaire, the Administrators’ Survey of Assessment Culture. Specific information regarding the psychometric properties of the survey can be found in Fuller and Skidmore (2014). In this article, we share only the results of the narrative responses to the two open-ended questions on the survey.

One of the quantitative questions on the survey asked respondents to complete the sentence, “______ is the primary reason assessment is conducted on our campus.” Instructions indicated that respondents should select the single most appropriate option from a list of choices that included accountability (12%), accreditation (32%), compliance with governmental mandates (1%), improving student learning (47%), tradition (0%), access to financial resources (0%), and other (7%). A qualitative follow-up question asked respondents to describe how they knew their campus’s culture was focused on the primary purpose of assessment they had selected. The final question on the survey asked, “Having nearly completed the survey, what else would you like to say about your campus’ culture of assessment?” We analyzed responses to both questions in an effort to gain a thorough understanding of how administrators viewed their institutional cultures of assessment and why they held these perceptions.

Procedures for Data Analysis and Interpretation

We used a classical content analysis approach (Berelson 1952; Krippendorff 1980; Smith 2000) to examine the large amount of narrative data collected. A rigorous process was followed that involved several steps over a three-month period. First, we unitized all narrative text data into smaller chunks (or phrases) of meaning. The final number of unitized phrases for analysis totaled 588 for the first question regarding the explanation of campus assessment culture and 504 total phrases for the second question concerning any final thoughts on the institutional culture of assessment. These were later revised to 566 and 441 total phrases due to reconsideration of the original unitizing decisions based on the content analysis process.

Each meaning phrase was placed on a small numbered card. Next, we (three researchers) met to collectively sort and code each card into seemingly related categories through open coding (Strauss and Corbin 1990). We collaboratively coded the data with the notion that “multiple minds bring multiple ways of analyzing and interpreting the data” (Saldaña 2009, 27) and with the goal of reaching a shared interpretation and understanding of the phenomena being studied. Two cycles of coding were carried out for all narrative data responses to each of the two survey questions. In the first cycle of coding, 46 initial descriptive and process codes were generated for question 1 and 41 for question 2. In the second coding cycle, we collapsed and grouped initial codes into larger categories. Also during the second cycle, we each read through the grouped codes to ensure similarity in meaning and to identify any units or first-cycle codes that might not belong. Final codes were further reviewed to determine intercoder agreement through extensive discussion and consensus-building among us as researchers. These codes were interpreted to represent larger theme categories. Patterns emerged between themes that indicated a primary focus on technical activities around assessment, as well as themes that reflected organizational culture phenomena. Although knowledge of Schein’s conceptual framework of organizational culture might have influenced our interpretation of data during the analysis process, we did not intentionally apply a priori coding, nor was a codebook developed. We attempted to conduct the analysis using a completely iterative, open-coding process. Throughout the entire analysis and interpretation process, we maintained an audit trail, kept analytic memos, and frequently checked for intercoder agreement.

Researchers’ Subjectivity

Our research team consisted of terminally degreed scholars in higher education administration, organizational behavior, assessment, and research methods. We have served as higher education faculty, researchers, and/or administrators for a combined sixty-two years. Each of us has had program-, college-, or institution-level assessment responsibilities at some point throughout our careers in higher education. At the onset of this study, we shared our belief systems and found they represent various paradigmatic perspectives. Overall, we tend to view organizational contexts and issues through cultural lenses—specifically, cultural relativism and diversity of thought. Our research paradigms include preferences for qualitative, quantitative, and mixed methods, while our belief systems are guided by constructivist, interpretivist, pragmatic, and positivist paradigms. As a research team, this mix of paradigmatic approaches allowed us to integrate methodological strengths and increased the legitimation of the design and interpretations of results (Onwuegbuzie and Johnson 2006).

Results

Several rounds of analysis and interpretation revealed two primary meta-themes: “Organizational Structure” and “Organizational Culture.” The meta-theme of Organizational Structure was defined as functions related to the technical aspects of organizational operations and reflected 71% of the narrative responses (403 comments). Themes categorized under the Structure meta-theme included processes and procedures, accountability, and data usage.

The meta-theme of Organizational Culture consisted of seminally defined elements of culture as commonly observed in scholarly literature from the social sciences of anthropology, sociology, social psychology, and management sciences. For example, themes reflected culture components such as traditions and rituals, artifacts, discourse, and values (e.g., of assessment and student learning). This meta-theme encompassed approximately 29% of the interpreted narrative (163 comments). The meta-theme of Organizational Culture also included notions of organizational change as described by several of the respondents who identified their organizational cultures as shifting or adjusting to integrate student-learning assessment into their practices.

Organizational Structure

Three themes emerged from respondents’ explanations of their institution’s primary focus for assessment: “Presence of Accountability,” “Procedures and Processes,” and “What We Do with Data.” Table 1 provides an overview of the themes categorized under the meta-theme of Organizational Structure as well as the constructed meaning, sample codes, sample quotes, and number of comments offered in each theme.

Table 1. Description of Emergent Meta-theme 1 and Subthemes: Evidence for Culture of Assessment

The theme of Presence of Accountability comprised almost half (48%) of the comments in the meta-theme of Organizational Structure, and one specific idea—national and regional accreditation—explained the institutions’ focus on assessment (118 comments). The following sample of comments illustrates that accreditation is often what ignites an institutional assessment effort:

“Discussion limited only to very few top administrators, and the reason for doing assessment is always SACS even though lip service is given to student success.”

“Without the accreditation ‘stick,’ we would have a difficult time convincing each program and unit to do assessment.”

“People usually want to know what our accreditor would think about our process and results rather than asking how useful and informative the process is.”

“Without accreditation, our assessment would be less formal and less frequent.”

“Focus on assessment waxes and wanes with each accreditation cycle.”

Other ideas included in this theme were practices focused on internal accountability requirements not aligned with an accreditation process, as well as the need for some type of external recognition of institutional effectiveness.

The second theme, Procedures and Processes, accounted for 39% of the comments in the meta-theme. There was not one dominant idea, however. Respondents pointed to several concrete activities that helped to explain their institutional focus on assessment such as student learning and program outcomes, established assessment processes, official leaders/offices/teams dedicated to assessment, and formal practices such as training for and documentation of assessment. As one respondent noted, “We are strong on philosophy, overall planning and mapping. But we get too complex at times and don’t always operationalize the plans well, so sometimes the results don’t feed back into the change process as well as I would like.”

The final theme, What We Do with Data, covering 13% of the comments in the meta-theme, described assessment as linked to the information gathered as part of the process. Some respondents pointed to data collection and tracking methods, while others highlighted how data were used and shared. As one respondent explained, “We link assessment to practice through robust executive summaries that utilize a standardized easy to read format.”

Organizational Culture

While fewer in number, comments concerning intangible elements of the assessment environment aligned with established definitions of organizational culture in general (Alvesson 2013; Cameron and Quinn 2011; Keyton 2011; Schein 2010) and higher education specifically (see Kezar 2011; Manning 2013; Tierney 2008). Five themes emerged to describe these cultural elements: “Values of Assessment,” “Discourse,” “Culture Descriptors,” “Value of Student Learning,” and “Traditions/Rituals/Symbols.” Table 2 offers details of the organizational culture themes including the constructed meaning, sample quotes, and number of comments offered in that theme.

The Values of Assessment theme comprised 26% of the narratives. Respondents perceived a range of values about assessment at their institutions that included statements that assessment was “valued,” “worthless,” “challenging,” and “important.” One respondent described a positive environment: “The campus culture of assessment is grounded in what matters—learning, teaching, civic engagement, innovation, experience, diversity.” Another respondent shared how external influences can help create a more negative environment: “But the formulaic approach pushed by accrediting agencies has been a real challenge for us, leading to some degree of skepticism and resistance.”

The Discourse theme, accounting for 22% of the narratives, covered the stated and assumed messages communicated about assessment that influenced how respondents explained their institution’s primary focus on assessment. Several respondents cited the actual language and rhetoric used by campus leaders to describe assessment efforts. For instance, one respondent explained: “Shift[ing] the focus to student learning engages faculty in a manner that using the word ‘assessment,’ which is laden with surplus meaning, does not.” Other respondents felt a more intangible message being communicated through the demonstrated actions: “The rest of the mandated (and less meaningful) ‘assessment’ is pawned off on our staff office to conduct and report—this sort of ‘assessment’ is not a culture of assessment, it is an avoidance of assessment.” The theme of Discourse was also addressed by pointing to how campus constituents were communicating about assessment: “Increasing instances of faculty & administrators talking about assessment for purpose of improving learning in multiple venues & multiple locations throughout the institution.”

Several respondents wanted to share how their assessment culture was evolving, as demonstrated in the theme of Culture Descriptors, which represented 22% of the narratives in the meta-theme. These comments were mostly positive in nature and highlighted the changes occurring at institutions regarding assessment. One respondent shared institutional efforts to influence a cultural shift: “We are undergoing a long-term effort to achieve a ‘culture’ of assessment on campus where it is recognized by everyone (not just administrators) as a way to improve student learning.” Another respondent observed how changes were permeating institutional practices: “Assessment has always been conducted towards improvement. What is different now is that across campus most have a better understanding of why and how to perform assessment after the campus made [a] deliberate commitment towards assessment.” Yet not all of the respondents held positive feelings about the future of assessment on their campuses. As one respondent explained, “Like any campus, we are fighting the campus culture to achieve the goals associated with assessment.”

Table 2. Description of Emergent Meta-theme 2 and Subthemes: Evidence of Culture of Assessment

The Value of Student Learning theme (15% of the comments related to Organizational Culture) identified several beliefs espoused on campus about student learning. Respondents described an optimistic view of student learning, saying it was valued and important as part of the assessment process. Several comments addressed how faculty support student learning, perhaps over other reasons for assessment. As one respondent explained, “Faculty here do not care for any kind of bureaucratic or accreditation efforts. But, they genuinely care about our students and will engage readily in efforts to improve learning.”

The final theme, Traditions/Rituals/Symbols, comprised 15% of the comments for Organizational Culture. Respondents described rituals, assessment tools that symbolized institutional efforts such as special software and technology, and artifacts such as student portfolios as evidence of their institutional focus on assessment. As one respondent highlighted, “We host an annual day-long Symposium that brings together faculty, staff and upper administrators to share, vet and bring attention to assessment findings and data (and effectively breaking down silos on campus).” Another respondent explained, “We have a plan-do-check-act process that involves assessment plans (plan, do) and then assessment reports (check, act) and the cycle is completed annually.”

Additional Comments on the Campus Culture of Assessment

Another open-ended question near the conclusion of the survey allowed participants to share anything else they wanted to communicate about their campus’s culture of assessment. A total of 212 respondents offered 441 narratives. Mirroring the themes identified earlier, the largest category of comments concerned organizational structure, followed by many of the same intangible aspects of organizational culture that help define it. The most frequent single theme was respondents’ message that their campus culture of assessment was improving (67 comments).

The primary difference in this set of comments was the emphasis on faculty perceptions and attitudes and how faculty can impact cultures of assessment. For instance, some respondents believed faculty accepted and valued assessment while others highlighted how faculty can challenge and question assessment practices. Many of the observations detailed negative or ambivalent attitudes exhibited by the faculty. One respondent noted:

Too many faculty members are fearful that the whole assessment movement will expose them for being ineffective. They fear a process that might expose a need to do things differently; one that might suggest they make changes. It’s not unlike the research peer review process. Most people can’t handle the criticism, so they choose not to publish.

Another respondent shared, “Mostly, our faculty want to be left alone to run their courses/departments the way they think is best.” Other issues highlighted by respondents included an abundance of part-time faculty who lacked the time or motivation to participate, the vast array of other responsibilities that distracted full-time faculty from assessment needs, and the lack of proper incentives to encourage faculty to embrace assessment and contribute to a positive assessment culture. Table 3 contains the themes identified by the content analysis of this final question as well as the constructed meaning, sample quotes, and the number of comments offered.

Table 3. Description of Themes for Additional Comments: Evidence of Culture of Assessment

Implications

Based on the findings of this study, the two primary characteristics of institutional cultures of assessment that emerged from interpretations of the narrative comments centered on either compliance or student learning. The emergence of these characteristics was not particularly surprising, considering that much of the rhetoric and focus related to cultures of assessment emphasizes notions of student learning and how to assess it, as well as the importance of meeting compliance requirements. Although elements of Maki’s (2010) principles for true cultures of assessment were evident, such as having institutional values that support assessment, most of the respondents’ narratives focused on assessment as an accreditation-driven requirement or as an externally imposed standard of institutional performance. Even when specific organizational structures and processes were mentioned, they primarily were described as part of accreditation requirements rather than as established cultural norms. Although the notion of fear-driven cultures has been applied in characterizing assessment in some institutions, a focus on compliance rather than fear was more evident in this study. Administrators’ responses reflected a sense of resignation and obligation about conducting institutional assessments rather than fear of consequences for not conducting assessments or revealing negative results.

Overall, the administrator responses were clearly more focused on structural aspects of organizational functioning around assessment than on cultural processes. Specifically, respondents made more references to procedures, job roles, policies, and data systems related to their assessment practices. Although the respondents frequently named accreditation-related artifacts such as documents or reports as evidence of assessment culture, fewer respondents described assessments related to student learning such as portfolios or displays of research posters. Related to elements of organizational culture, respondents also described language, messages, and anecdotes that reiterated espoused values of institutional leaders who emphasized the importance of assessment requirements. These espoused values were described as rhetoric by the majority of the respondents, who generally did not believe this rhetoric reflected a true focus on student learning. Less frequently mentioned were the beliefs, values, and underlying assumptions that provide the foundation for their institutions’ organizational culture or approach to assessment. When underlying elements of organizational culture were described by the respondents, they typically were articulated as the rhetoric of institutional leaders. Scholars of organizational culture would agree that underlying beliefs and assumptions are the most difficult to measure (Alvesson 2013; Cameron and Quinn 2011; Schein 2010).

Although observable features such as artifacts, processes, procedures, and language are aspects of organizational culture that might be reflected in a culture of assessment, such a culture cannot be assessed or developed without tapping into an institution’s underlying beliefs, values, and assumptions and without attending to the interactions of subcultures within the larger organizational environment (Bess and Dee 2012; Bolman and Deal 2013; Cameron and Quinn 2011; Kempner 2003; Keyton 2011; Manning 2013; Parker 2000; Tierney 1988, 2008). Intentional recognition of underlying institutional assumptions and beliefs has been posited as the primary driver of organizational change (Cameron and Quinn 2011; Schein 2010). However, in this study, even administrators who had a direct role in institutional research and assessment did not readily identify these essential aspects of organizational culture when trying to describe the primary focus of assessment at their institutions.

Recommendations for Practice

Based on the results of this research on US assessment administrators’ perceptions of cultures of assessment at their institutions, we offer some recommendations: (a) assessing and recognizing underlying assumptions around institutional assessment; (b) reexamining the language of assessment; (c) practicing consistent reflection in all assessment practices; (d) eliciting perspectives of organizational subcultures and promoting intercultural communication among these groups; and (e) promoting innovation and grassroots strategy around assessment.

First, in order to promote and establish cultures of assessment that are truly focused on student learning and program effectiveness, findings from this study indicate a need to capture more effectively the underlying assumptions and beliefs that various stakeholders, constituency groups, or subcultures (e.g., administrators, faculty, students, and others) in a higher education institution have about assessment and their understanding of why it is done. By identifying, establishing, communicating, and nurturing stakeholders’ underlying beliefs and assumptions about the elements of organizational culture and the role of assessment at that institution, purposeful changes and shifts toward positive cultures of assessment can be more fully embraced.

In order to shift the focus from reiterating the rhetoric or espoused values (e.g., messages) of institutional leaders or naming artifacts thought to represent assessment cultures, institutional leaders need to plan for and prioritize intentional cultural shifts. Researchers explain that cultural shifts can take three to seven years to be fully embraced and intertwined with organizational practices (Alvesson 2013). Other scholars have offered evidence-based steps for changing organizational culture, which include clarifying meaning, communicating changes through stories, collecting data, and strategizing for cultural change, among others (Cameron and Quinn 2011; Mills et al. 2005; Swenk 1999). Paradoxically, to enhance opportunities for success in establishing positive institutional cultures of assessment that truly are focused on student learning and institutional effectiveness, colleges and universities might benefit from assessing their own underlying cultural assumptions by using an organizational culture framework and considering the results of this study. As a resource, Jung et al. (2009) reviewed several validated instruments for exploring organizational culture. In general, processes for conducting organizational culture assessments might include conducting culture audits or organizational ethnographies and utilizing a variety of measurement tools such as the Administrators’ Survey of Assessment Culture.

Second, a review of the language used to describe assessment and cultures of assessment may be beneficial. Based on the findings of this study, it appears important to separate the requirement of accreditation from the practice of assessment, and this task can begin by ensuring that accreditation is carefully discussed in terms of required outcomes of the process rather than as the ultimate goal of all assessment. Furthermore, developing a common language around the notion of cultures of assessment that goes beyond the higher education literature to include seminal interdisciplinary scholarship on the elements of culture and organizational culture would be prudent, especially if the term culture will continue to be applied to cultures of assessment in higher education. The fields of anthropology, organizational and management sciences, cross-cultural psychology, and communications offer over fifty years of research and theory to inform the efforts of higher education leaders and scholars interested in shaping or studying institutional cultures, subcultures, and organizational change. We suggest that higher education leaders and researchers may benefit from drawing on this interdisciplinary knowledge base to expand their understanding of organizational culture as applied to assessment practices.

Third, instilling a practice of consistent reflection and questioning could prompt shifts in organizational culture. For example, conversations about major decisions discussed by individual units and institutional leadership should include the question, “What data or evidence do you have to support your position?” The consistent application of this reflective question might slowly permeate the organization and illuminate basic underlying assumptions around the practice of assessment. When individuals in [End Page 19] the organization know they will be asked this question, gathering evidence might become part of their practice, consciously or unconsciously. When the use of assessment data is tied to rewards and additional resources, these basic underlying assumptions about assessment become part of the regular practice of the institution.

Fourth, as important assessment issues and challenges continue to emerge for higher education institutions, we suggest that assessment could be promoted through enhanced communication and collaboration between subcultures in an organization. Organizational subcultures in higher education might be defined as the natural groupings of faculty and staff, academic affairs and student affairs professionals, functional departments, academic disciplines, stakeholder groups (e.g., alumni, current students), and others (Manning 2013; Parker 2000). Therefore, applying the language of some organizational culture scholars, we advocate attention to intercultural communication among campus constituencies to ensure integration across groups. This will assist in cultivating a shared institutional culture of assessment that is not solely top-down or predicated on meeting accreditation standards and accountability measures. In addition, a shared understanding of assessment processes allows diverse campus groups to build trust.

Finally, institutional leaders should plan for the support and resources needed to create conditions that allow innovative ideas about assessment to emerge. Sometimes this process might involve greater attention to and recognition of the innovative practices around student learning that already are occurring in an organization, allowing these ideas to bubble up and expand through grassroots strategies (Kezar and Lester 2009, 2011). Some scholars suggest, however, that creating conditions for innovation and improved practices requires considerable planning on the part of higher education leaders. Williams (2008) described the basic needs for managing innovation as including (1) challenging work; (2) organizational encouragement, such as risk-taking opportunities and rewards for creativity; (3) supervisory encouragement, such as clear goals and open communication; (4) work group encouragement, such as diversity of membership and willingness to challenge ideas; (5) freedom and autonomy to control ideas and take ownership; and (6) removing organizational barriers through such actions as resolving conflicts and reducing process and procedure requirements. The traditional higher education environment likely will not adapt easily to the changes necessary for a cultural shift, but through planning and long-term commitment these characteristics of an innovative [End Page 20] work environment can be implemented and can positively influence cultures of assessment.

Future Research

Although this study provided some ideas regarding what cultures of assessment look like on college campuses, additional research could be pursued to more fully understand the spectrum of influences on assessment cultures. For instance, this study focused on the perceptions of a specific set of administrators. It is equally important to collect data on faculty perceptions of institutional assessment, which can aid institutional leaders in finding common ground on which to build a cultural shift toward student learning. Additionally, more extensive data collection could reveal nuances in these and future data. Moreover, political and social influences on higher education could precipitate changes in assessment and accreditation that are latent and not yet examined in this study. We concur with Haviland (2014) that additional studies integrating assessment and organizational theory are needed and, as they continue, will yield increasingly nuanced perspectives on assessment.

Another consideration for research is how perceptions of the assessment culture may differ by institutional type. For instance, smaller institutions may be better equipped to develop a true culture of assessment because it is easier to involve all members of the campus community. In contrast, scholars have hypothesized that larger institutions may be more able to devote fiscal or human resources to assessment efforts (Ndoye and Parker 2010). Finally, conducting case study analyses on institutions with proven reputations and histories of best practices in assessment may aid in identifying high-impact tactics for establishing a culture of assessment focused on improvement of student learning. This approach could include entire institutions or specific units identified by both faculty and assessment professionals as having respected assessment cultures.

Conclusion

As the demands for accountability in higher education continue to grow and infuse into the reporting and accreditation mechanisms required by federal, state, regional, and professional accreditation bodies, the need [End Page 21] for assessment likely will increase. Successful infusion of assessment in ways that truly reflect student learning and continuous improvements will require involvement of all members of a campus community. Although the participants in this study perceived that assessment currently is propelled by external constituents, there was some evidence that the use and value of assessment, along with language and symbols to communicate the importance of assessment, potentially influence the likelihood that assessment practices will become integral to institutional values and underlying beliefs.

To uncover and enhance underlying cultural values and beliefs about strategies that most support student learning, we believe that a unity of purpose among higher education administrators and faculty is essential and indicative of true cultural characteristics. While both groups may hold varying perspectives on assessment practices, they are also dedicated to the success of students at their institutions. Using this underlying value as a starting point, collaboration among faculty members, administrators, and even students might strengthen institutional cultural identity and focus the shared commitment to student learning while also stimulating innovation in assessment approaches. As educators, we must all tap into our own desire for learning and passion for growth as a means of harnessing these essential elements in promoting effective institutional cultures of assessment. We propose that this shared endeavor is the core of meaningful assessment and deserves further exploration. Overall, further research and consideration are needed that move beyond a focus on the structural aspects of assessment. Based on the results of this study, we suggest that future researchers consider the cultural elements of higher education organizations as integral to effective institutional assessment research and practice, and acknowledge the shared educator identity of administrators and faculty, as well as the perspectives of students. [End Page 22]

Peggy C. Holzweiss

peggy c. holzweiss is an assistant professor in the Department of Educational Leadership at Sam Houston State University and teaches graduate courses in higher education administration. Dr. Holzweiss holds a doctorate in higher education administration from Texas A&M University. Her research interests center on teaching, learning, online instruction, and professional development.

Rebecca Bustamante

rebecca bustamante is an associate professor of educational leadership at Sam Houston State University. Dr. Bustamante’s scholarship focuses on organizational culture, assessing organizational cultural competence, and culturally responsive leadership preparation. She has held administrative and faculty positions in both the United States and Latin America.

Matthew B. Fuller

matthew b. fuller is an assistant professor of higher education administration and assistant dean of assessment in the College of Education at Sam Houston State University. He also serves as the principal investigator for the Survey of Assessment Culture. His research interests include assessment, legal issues, and history of higher education. Dr. Fuller has served as a faculty member and administrator in student affairs and assessment units since 1998.

References

Alvesson, Mats. 2013. Understanding Organizational Culture. 2nd ed. London: Sage.
Astin, Alexander W., and Anthony L. Antonio. 2012. Assessment for Excellence: The Philosophy and Practice of Assessment and Evaluation in Higher Education. Washington, DC: American Council on Education.
Berelson, Bernard. 1952. “Content Analysis.” In Handbook of Social Psychology, edited by Gardner Lindsey, 488–522. Cambridge, MA: Addison-Wesley.
Bess, James L., and Jay R. Dee. 2012. Understanding College and University Organization: Theories for Effective Policy and Practice. Vol. 1, The State of the System. Sterling, VA: Stylus.
Bolman, Lee G., and Terrence E. Deal. 2013. Reframing Organizations: Artistry, Choice, and Leadership. 5th ed. San Francisco, CA: Jossey-Bass.
Burke, Joseph C. 2004. “The Many Faces of Accountability.” In Achieving Accountability in Higher Education: Balancing Public, Academic, and Market Demands, edited by Joseph C. Burke, 1–24. San Francisco, CA: Jossey-Bass.
Cameron, Kim S., and Robert E. Quinn. 2011. Diagnosing and Changing Organizational Culture: Based on the Competing Values Framework. San Francisco, CA: Jossey-Bass.
Chambers, T., and C. H. Chiang. 2012. “Understanding Undergraduate Students’ Experience: A Content Analysis Using NSSE Open-Ended Comments as an Example.” Quality and Quantity 46 (4): 1113–23. doi:10.1007/s11135-011-9549-3. [End Page 23]
Courts, Patrick L., and Kathleen H. McInerney. 1993. Assessment in Higher Education: Politics, Pedagogy, and Portfolios. New York: Praeger.
Dochy, Filip, Mien Segers, David Gijbels, and Katrien Struyven. 2007. “Assessment Engineering: Breaking Down Barriers Between Teaching and Learning, and Assessment.” In Rethinking Assessment in Higher Education: Learning for the Longer Term, edited by David Boud and Nancy Falchikov, 87–100. New York: Routledge.
Driscoll, Amy, Diane C. de Noriega, and Judith Ramaley. 2006. Taking Ownership of Accreditation: Assessment Processes that Promote Institutional Improvement and Faculty Engagement. Sterling, VA: Stylus.
Ewell, Peter T. 2004. “Can Assessment Serve Accountability? It Depends on the Question.” In Achieving Accountability in Higher Education: Balancing Public, Academic, and Market Demands, edited by Joseph C. Burke, 104–24. San Francisco, CA: Jossey-Bass.
Fuller, Matthew B. 2011. “The Survey of Assessment Culture: Conceptual Framework.” http://www.shsu.edu/research/survey-of-assessment-culture/ (accessed August 22, 2011).
Fuller, Matthew B., Susan Henderson, and Rebecca Bustamante. 2015. “Assessment Leaders’ Perspectives of Institutional Culture of Assessment: A Delphi Model.” Assessment and Evaluation in Higher Education 40:331–51.
Fuller, Matthew B., and Susan Skidmore. 2014. “An Exploration of Factors Influencing Institutional Cultures of Assessment.” International Journal of Educational Research 65:9–21.
Gaston, Paul L. 2013. Higher Education Accreditation: How It’s Changing, Why It Must. Sterling, VA: Stylus.
Hartle, Terry. 2012. “Accreditation and the Public Interest: Can Accreditors Continue to Play a Central Role in Public Policy?” Planning for Higher Education 40 (3): 16–21.
Harvey, Lee, and Peter Knight. 1996. Transforming Higher Education. Ballmoor, UK: SHRE/Open University Press.
Haviland, Don. 2014. “Beyond Compliance: Using Organizational Theory to Unleash the Potential of Assessment.” Community College Journal of Research and Practice 38 (9): 755–65.
Heinerichs, Scott, R. Lorraine Bernotsky, and Loretta Rieser Danner. 2015. “Guiding Principles to Impact an Institution-Wide Assessment Initiative.” Research and Practice in Assessment 10:60–64. [End Page 24]
Jung, Tobias, Tim Scott, Huw T. O. Davies, Peter Bower, Diane Whalley, Rosalind McNally, and Russell Mannion. 2009. “Instruments for Exploring Organizational Culture: A Review of the Literature.” Public Administration Review 69 (6): 1087–96.
Kempner, Kenneth Mark. 2003. “The Search for Cultural Leaders.” Review of Higher Education 26 (3): 363–85.
Keyton, Joann. 2011. Communication and Organizational Culture: A Key to Understanding Work Experiences. 2nd ed. Thousand Oaks, CA: Sage.
Kezar, Adrianna J. 2011. “What is the Best Way to Achieve Broader Reach of Improved Practices in Higher Education?” Innovative Higher Education 36 (4): 235–47.
Kezar, Adrianna J., and Jaime Lester. 2009. “Supporting Faculty Grassroots Leadership.” Research in Higher Education 50:715–74. doi:10.1007/s11162-009-9139-6.
———. 2011. Enhancing Campus Capacity for Leadership: An Examination of Grassroots Leadership in Higher Education. Stanford, CA: Stanford University Press.
Krippendorff, Klaus. 1980. Content Analysis: An Introduction to Its Methodology. Newbury Park, CA: Sage.
Kuh, George D., and Stanley Ikenberry. 2009. More Than You Think, Less Than We Need: Learning Outcomes Assessment in American Higher Education. Urbana: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).
Kuh, George D., Natasha Jankowski, Stanley O. Ikenberry, and Jillian Kinzie. 2014. Knowing What Students Know and Can Do: The Current State of Student Learning Outcomes Assessment in US Colleges and Universities. Urbana: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).
Maki, Peggy. 2010. Assessing for Learning: Building a Sustainable Commitment Across the Institution. Sterling, VA: Stylus Publishing.
Manning, Kathleen. 2013. Organizational Theory in Higher Education. New York: Routledge.
McCullough, Christopher A., and Elizabeth Jones. 2014. “Creating a Culture of Faculty Participation in Assessment: Factors that Promote and Impede Satisfaction.” Journal of Assessment and Institutional Effectiveness 4 (1): 85–101. [End Page 25]
Mills, Michael, Pamela Bettis, Janice W. Miller, and Robert Nolan. 2005. “Experiences of Academic Unit Reorganization: Organization Identity and Identification in Organizational Change.” Review of Higher Education 28 (4): 597–619.
Ndoye, Abdou, and Michele A. Parker. 2010. “Creating and Sustaining a Culture of Assessment.” Planning for Higher Education 38 (2): 28–39.
O’Cathain, Alicia, and Kate J. Thomas. 2004. “Any Other Comments? Open Questions on Questionnaires—A Bane or a Bonus to Research?” BMC Medical Research Methodology 4 (25). http://www.biomedcentral.com/content/pdf/1471-2288-4-25.pdf.
Onwuegbuzie, Anthony J., and R. Burke Johnson. 2006. “The Validity Issue in Mixed Research.” Research in the Schools 13 (1): 48–63.
Parker, Martin. 2000. Organizational Culture and Identity. Thousand Oaks, CA: Sage.
Saldaña, Johnny. 2009. The Coding Manual for Qualitative Researchers. Thousand Oaks, CA: Sage.
Schein, Edgar H. 2010. Organizational Culture and Leadership. 4th ed. San Francisco, CA: Jossey-Bass.
Smith, Charles P. 2000. “Content Analysis and Narrative Analysis.” In Handbook of Research Methods in Social and Personality Psychology, edited by Harry T. Reis and Charles M. Judd, 313–35. Cambridge: Cambridge University Press.
Sproull, Natalie L. 1988. Handbook of Research Methods: A Guide for Practitioners and Students in the Social Sciences. 2nd ed. Lanham, MD: Scarecrow Press.
Strauss, Anselm L., and Juliet Corbin. 1990. Basics of Qualitative Research: Grounded Theory Procedures and Techniques. Newbury Park, CA: Sage.
Suskie, Linda. 2010. Assessing Student Learning: A Common Sense Guide. 2nd ed. Hoboken, NJ: Wiley.
Swanson, Barbara L., Karen E. Watkins, and Victoria J. Marsick. 1997. “Qualitative Research Methods.” In Human Resource Development Research Handbook: Linking Research and Practice, edited by Richard A. Swanson and Elwood F. Holton, 88–113. Oakland, CA: Berrett-Koehler.
Swenk, Jean. 1999. “Planning Failures: Decision Cultural Clashes.” Review of Higher Education 23 (1): 1–21. [End Page 26]
Tierney, William G. 1988. “Organizational Culture in Higher Education: Defining the Essentials.” Journal of Higher Education 59 (1): 2–21.
———. 2008. The Impact of Culture on Organizational Decision-Making: Theory and Practice in Higher Education. Sterling, VA: Stylus.
Weiner, Wendy F. 2009. “Establishing a Culture of Assessment.” Academe 95:28–32.
Williams, Chuck. 2008. Management: Student Edition. Mason, OH: South-Western.
Young, Candace C., Debra K. Cartwright, and Michael Rudy. 2014. “To Resist, Acquiesce, or Internalize: Departmental Responsiveness to Demands for Outcomes Assessment.” Journal of Political Science Education 10:3–22. doi:10.1080/15512169.2013.862502. [End Page 27]
