Abstract

For several decades, policymakers have been concerned about increasing the efficiency and effectiveness of postsecondary institutions. In recent years, performance funding—which directly connects state funding to an institution’s performance on indicators such as student persistence, credit accrual, and college completion—has become a particularly attractive way of pursuing better college outcomes. But even as states have made an enormous investment in performance funding, troubling questions have been raised about whether performance funding has the effects intended and whether it also produces substantial negative side effects in the form of restrictions in access for underrepresented students and weakening of academic standards. This paper addresses these troubling questions by drawing on data richer than heretofore available. In addition to drawing on the existing body of research on performance funding, it reports data from a study of the implementation of performance funding in three leading states (Indiana, Ohio, and Tennessee) and its impacts on three universities and three community colleges in each state.

Keywords

performance funding, performance-based funding, outcomes-based funding, higher education accountability, educational accountability, public accountability, performance management, performance-based management, quality assurance, higher education policy, college quality


For several decades, policymakers have been concerned about increasing the efficiency and effectiveness of postsecondary institutions. In recent years, performance funding—which directly connects state funding to an institution’s performance on indicators such as student persistence, credit accrual, and college completion—has become a particularly attractive way of pursuing better college outcomes (Burke 2002; Burke and Associates 2005; Complete College America 2013; Dougherty and Natow 2015; Harnisch 2011; Lumina Foundation 2009; Jones 2013; Reindl and Jones 2012; Reindl and Reyna 2011; Zumeta and Kinne 2011). As of November 2015, thirty-three states have implemented performance funding programs, with several more states planning to start one within the next few years (Dougherty and Natow 2015; National Conference of State Legislatures 2015). But even as states have made an enormous investment in performance funding, troubling questions have been raised about whether performance funding has the effects intended and whether it also produces substantial negative side effects in the form of restrictions in access for underrepresented students and weakening of academic standards (Dougherty and Reddy 2013).

This paper addresses these troubling questions by drawing on data richer than heretofore available. In addition to drawing on the existing body of research on performance funding, it reports data from a study of the implementation of performance funding in three leading states (Indiana, Ohio, and Tennessee) and its impacts on three universities and three community colleges in each state (Dougherty et al. 2014b; Dougherty et al., forthcoming).

Conceptualizing the Nature and Process of Performance Funding

The goal of performance funding is to improve college and university performance, especially with regard to student outcomes such as persistence, completion of developmental (remedial) education and key college-level courses, accrual of course credits, degree completion, transfer, and job placement. These outcomes often constitute the indicators that performance funding programs use to allocate higher education appropriations.

Two kinds of performance funding programs can be usefully distinguished (Dougherty and Natow 2015; Dougherty and Reddy 2013; Snyder 2011, 2015). Performance funding 1.0 (PF 1.0) takes the form of a bonus, over and above regular state funding for higher education. The typical size of this bonus is between 1 and 5 percent of state funding (Burke 2002; Dougherty and Reddy 2013). Tennessee established its PF 1.0 program in 1979 (the first in the nation), and it exists to this day. Ohio did so in 1995 and 1997 (with the introduction of the Performance and Success Challenges), and Indiana in 2007 (Dougherty and Natow 2015; Dougherty and Reddy 2013). Performance funding 2.0 (PF 2.0) programs differ from PF 1.0 in that performance funding no longer takes the form of a bonus but rather is part and parcel of the regular state base funding for higher education. Often as well, the proportion of state appropriations for higher education tied to performance metrics can be much higher, as high as 80 to 90 percent in Ohio and Tennessee. Indiana and Ohio established PF 2.0 programs in 2009, followed by Tennessee in 2010 (Dougherty and Natow 2015; Dougherty and Reddy 2013).1

To understand how performance funding has operated, we draw on various research literatures. These include research on performance funding (see Burke 2002; Burke and Associates 2005; Dougherty and Reddy 2013), performance management in government (see Heinrich and Marschke 2010; Moynihan 2008), organizational learning (see Argyris and Schön 1996; Dowd and Tong 2007; Witham and Bensimon 2012), implementation theory and principal-agent theory (see Honig 2006; Lane and Kivisto 2008), and organizational change theory in higher education (see Kezar 2012).

Performance funding policies embody “theories of action” (Argyris and Schön 1996) involving causal sequences by which desired outcomes will be produced. These sequences typically involve specific “policy instruments” or “mechanisms that translate substantive policy goals into concrete actions” (McDonnell and Elmore 1987, 134). The theory of action typically laid out by advocates of performance funding is that performance funding will stimulate institutional changes in academic and student-service policies, programs, and practices that in turn will result in improved student outcomes. Typically, policymakers do not specify particular institutional changes (Dougherty et al. 2014a). The main policy instrument considered by performance funding advocates is providing financial incentives that mimic the profits for businesses (Dougherty et al. 2014a; also see Burke and Associates 2005, 304; Dougherty and Reddy 2013; Massy 2011). Applied to higher education institutions, this financial incentives theory of action—which is akin to resource-dependence theory (Pfeffer and Salancik 1978)—holds that the institutions are revenue maximizers and will make a strong effort to improve their performance if the amount of funding involved is significant enough (Burke 2002, 266–72; Dougherty et al. 2014a). This policy instrument also flows from principal-agent theory, which stresses that there is often a misalignment between the interests of principals and their agents (Lane and Kivisto 2008). Monetary incentives flowing from the principals (the state) therefore become a device to bring the interests of the agents (college officials) into better alignment with those of the principals.

Despite the emphasis on financial incentives, advocates of performance funding programs have also considered other policy instruments. One is providing information to college officials and faculty about the goals and intended methods of performance funding as a means to catalyze institutional change; the aim is to persuade colleges of the importance of improved student outcomes (Dougherty et al. 2014a; Dougherty and Reddy 2013; Massy 2011; Reddy et al. 2014; see also Anderson 2014; Ewell 1999; Rutschow et al. 2011). The idea is that once college and university personnel are convinced that a goal is socially valued and legitimate, they will modify their behavior. This instrument parallels the soft side of coercive isomorphism, which may manifest itself as pressure from governmental mandates and societal expectations (DiMaggio and Powell 1991).

Another instrument takes the form of making colleges aware of their student outcomes, particularly in comparison with other colleges. The aim is to mobilize feelings of pride and status striving (Burke and Associates 2005; Dougherty et al. 2014a; Dougherty and Reddy 2013; see also Baldwin et al. 2011; Dowd and Tong 2007; Witham and Bensimon 2012).

Advocates of performance funding have given little attention to another important policy instrument: building up the capacity of colleges to respond to the demands of performance funding, particularly through effective organizational learning in which they examine areas of substandard performance, devise new ways to improve that performance, and evaluate the effectiveness of those methods (Reddy et al. 2014; see also Jenkins 2011; Kerrigan 2010; Kezar 2005; McDonnell and Elmore 1987; Witham and Bensimon 2012). However, we examine the degree to which states have actually used this instrument as part of their performance funding programs, because capacity building has been a major feature of several recent high profile, foundation-sponsored initiatives to improve community college performance, including Achieving the Dream and Completion by Design (Nodine, Venezia, and Bracco 2011; Rutschow et al. 2011). Both programs have featured offering colleges “coaches” who work with senior administrators and institutional researchers to improve their analysis of student outcomes and decide on institutional changes to improve outcomes.

Changes in colleges’ revenues from the state, in their awareness of the state’s priorities and of their performance in relation to those priorities, and in their organizational capacities can be termed the immediate impacts of performance funding. To be effective, these impacts must in turn stimulate intermediate institutional changes involving changes to institutional policies, programs, and practices that will presumably lead to the ultimate impacts policymakers seek, such as more graduates or increased rates of job placement (Dougherty and Reddy 2013).

We also need to consider the unintended impacts of and frequent obstacles to performance funding (Dougherty and Reddy 2013; Lahr et al. 2014; Pheatt et al. 2014; see also Heinrich and Marschke 2010; Moynihan 2008). Unintended impacts are results that are not intended by the policy creators but that arise as side effects of policy initiatives (Merton 1976). In the case of performance funding, they may include lowering academic standards for enrolled students or narrowing institutional missions to focus on areas rewarded by performance funding (Dougherty and Reddy 2013). Such impacts may arise when public agencies—whether in education, workforce training, health care, or social services—encounter difficulties in easily realizing the intended impacts of performance accountability by using legitimate means and instead resort to less legitimate means, such as lowering service delivery standards or restricting the intake of harder-to-serve clients (Forsythe 2001; Grizzle 2002; Heinrich and Marschke 2010; Moynihan 2008; Radin 2006; Rothstein 2008a, 2008b; also see Merton 1968, 1976; Mica, Peisert, and Winczorek 2012). The obstacles are characteristics of the performance funding program or of the target higher education institutions that impede the ability of institutions to effectively respond to the demands of the performance funding program using legitimate methods. They can take such forms as colleges’ lack of organizational capacity to adequately understand their performance problems and develop feasible and effective solutions (Dougherty and Reddy 2013).

Research Questions

The analysis in this paper is organized around six main research questions: First, what policy instruments have states used as a part of their performance funding (PF) programs to influence the behavior of institutions? What have been the immediate impacts of those instruments? Second, what deliberative processes have colleges used to determine how to respond to performance funding? Third, how have colleges altered their academic and student services policies, programs, and practices in ways that relate to performance funding goals? Fourth, what have the impacts of performance funding programs been on student outcomes? Fifth, have there been obstacles to securing the impacts intended by PF advocates? Finally, have there been unintended outcomes of PF?

Research Methods

To answer these questions, we analyzed the performance funding experiences of three states (Indiana, Ohio, and Tennessee) and within each state, three community colleges and three public universities. For data triangulation, we conducted numerous interviews in each of the three states with a diverse range of individuals involved with performance funding. We also analyzed available documentary data, including public agency reports, newspaper articles, institutional websites, and academic research studies (books, journal articles, and doctoral dissertations).

Why Indiana, Ohio, and Tennessee? These three states are leaders in performance funding—particularly PF 2.0—but otherwise differ substantially in the histories of their performance funding programs and in their political and socioeconomic structures, as table 1 shows.

In terms of policy history, Tennessee established a performance funding 1.0 program in 1979, the first state to do so. Ohio first adopted it much later, in 1995, and Indiana later still, in 2007. In 2009, Indiana and Ohio adopted new PF 2.0 programs, and Tennessee followed in 2010 (Dougherty and Natow 2015; Dougherty and Reddy 2013). The Ohio and Tennessee PF 2.0 programs tie a much larger proportion of state appropriations for higher education to performance indicators than Indiana does: 80 to 90 percent as compared with 6 percent in Indiana.

The states also differ in the degree of centralization of their public governance systems for higher education. All but one of Indiana’s community college campuses operate under a single governing board (Ivy Tech), and its university campuses operate under five governing boards.2 At the other extreme, in Ohio, all twenty-three of the community colleges and all thirteen of the university main campuses have their own governing boards (McGuiness 2003).

Table 1. Programmatic, Political, Social, and Economic Characteristics of the Case Study States

The states also vary significantly in political culture and structures (Gray, Hanson, and Kousser 2012). Tennessee and Indiana are above average in the conservatism of their electorates, whereas Ohio is very near the national average (Erikson, Wright, and McIver 2006). The three states also differ in the characteristics of their political institutions, with Ohio’s governor having more institutional power and its legislature a higher degree of legislative professionalism than Indiana’s or Tennessee’s (Ferguson 2013; Hamm and Moncrief 2013). Moreover, Ohio and Tennessee tend to have greater political party competition than Indiana (Holbrook and La Raja 2013).

Finally, the states differ considerably in their social characteristics: population, income, and education. Ohio’s population is substantially larger, wealthier, and better educated than those of Indiana and Tennessee, as shown in table 1.

Which Colleges and Universities?

This study examines the experiences of eighteen public higher education institutions with performance funding: nine community colleges and nine universities. The community colleges and universities differ in their expected capacity to respond effectively to performance funding. Using data from the 2011 Integrated Postsecondary Education Data System (IPEDS) survey and other sources, we measured expected organizational capacity based on college resources (IPEDS data on revenues per full-time equivalent student), data-analytic capacity (ratings by two experts in each state), and number of at-risk students (IPEDS data on percentage of students receiving Pell Grants and percentage of minority students). We rated all the community colleges in each state as being in the top, middle, or bottom third on each of these three dimensions, summed the ratings, and picked one college in each state from each third. We labeled these colleges as having high, medium, or low capacity. We also rated all the public universities in each state along the same dimensions and, using the same capacity measure as for the community colleges, selected one university that was high and one that was low in expected capacity to respond to performance funding. We labeled these universities high 1 and low, respectively. For comparison, we also selected a third university in each state that was also high capacity but not a research-intensive institution. We labeled it high 2.
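
To illustrate the mechanics of this summed-tertile index, here is a minimal sketch in Python. The variable names, values, and scoring details are illustrative assumptions for exposition only; they are not the study’s actual code or data.

```python
import pandas as pd

# Hypothetical institution-level data for one state (illustrative values only).
colleges = pd.DataFrame({
    "college": ["A", "B", "C", "D", "E", "F"],
    "revenue_per_fte": [11200, 9800, 14500, 8700, 10100, 13000],  # IPEDS-style revenues per FTE student
    "ir_capacity_rating": [2, 1, 3, 1, 2, 3],                     # expert ratings of data-analytic capacity
    "pct_pell": [0.52, 0.61, 0.38, 0.66, 0.55, 0.41],             # share of students receiving Pell Grants
    "pct_minority": [0.30, 0.44, 0.22, 0.51, 0.35, 0.25],         # share of minority students
})

# Combine the two at-risk indicators; higher shares imply lower expected capacity.
colleges["at_risk"] = (colleges["pct_pell"] + colleges["pct_minority"]) / 2

def tertile_score(series: pd.Series, higher_is_better: bool = True) -> pd.Series:
    """Score each college 1 (bottom third), 2 (middle third), or 3 (top third)."""
    ranks = pd.qcut(series.rank(method="first"), 3, labels=[1, 2, 3]).astype(int)
    return ranks if higher_is_better else 4 - ranks

# Sum tertile scores across the three dimensions; the at-risk share is reverse-scored.
colleges["capacity_index"] = (
    tertile_score(colleges["revenue_per_fte"])
    + tertile_score(colleges["ir_capacity_rating"])
    + tertile_score(colleges["at_risk"], higher_is_better=False)
)

# Group colleges into high, medium, or low capacity by thirds of the summed index.
colleges["capacity_group"] = pd.qcut(
    colleges["capacity_index"].rank(method="first"), 3, labels=["low", "medium", "high"]
)
print(colleges[["college", "capacity_index", "capacity_group"]])
```

The key point is simply that each dimension is reduced to a tertile score, the at-risk share is reverse-scored so that higher values indicate lower expected capacity, and the three scores are summed before institutions are grouped into thirds.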

Data Collection and Analysis

We interviewed 261 state officials, state-level political actors, and institutional administrators and faculty at the eighteen institutions (see table 2). We also drew on documentary sources such as public agency reports, newspaper articles, and academic research studies (books, journal articles, and doctoral dissertations) to supplement our findings. At the state level, we interviewed higher education commission officials, gubernatorial advisors, legislators and members of their staff, business leaders, and researchers and consultants. The institutional respondents included senior administrators (the president and the vice presidents reporting to the president), deans and other middle-level academic administrators, nonacademic middle-level administrators such as the director of institutional research, chairs of different departments representing a range of disciplines and degrees of exposure to outside accountability demands, and the chair of the faculty senate. We relied on the department chairs and the chair of the faculty senate to illuminate the range of faculty opinion.

The interviews were semistructured and lasted approximately one to two hours. Although we used a standard protocol, we adapted it to each interviewee and to material that emerged during an interview. Moreover, after conducting a cross-case analysis of our initial community college interviews, we added several questions to the interview protocol we used for our remaining community college and university interviews to better pinpoint certain processes and impacts. All institutions and interviewees were promised confidentiality, and we have masked their identities.

The interviews were transcribed and coded using the Atlas.ti qualitative data analysis software system. We also coded documentary materials if they were in a format that allowed importing them into Atlas. Our coding scheme began with an initial list of thematic “start” codes drawn from our conceptual framework; as data collection and analysis proceeded, we added new codes and modified existing ones when we discovered unexpected patterns in our data during our periodic cross-case analyses of the interviews. To analyze the data, we ran queries in Atlas based on our key coding categories. Using this output, we created analytic tables comparing how different interviewees at different kinds of institutions perceived the implementation and operation of performance funding.

Table 2. Categories of Interviewees

POLICY INSTRUMENTS AND THEIR IMMEDIATE IMPACTS

We begin by describing the four policy instruments that could be used for performance funding: financial incentives; disseminating information about the goals and methods of performance funding; communicating to colleges how they are doing on the state performance funding metrics; and building up institutional capacity to respond to performance funding. We analyze how these instruments were used in our three states and what immediate impacts they had on institutions. Our documentary analysis and interviews with campus personnel yield substantial evidence that the first three instruments are all operating and having substantial impact in our three states. Although the financial incentives seemed to have the most impact, the two informational instruments clearly had important impacts of their own. Little evidence indicates, however, that these states used capacity building as a significant policy instrument or that it had much impact (for a full analysis, see Dougherty et al., forthcoming; Reddy et al. 2014).

Financial Incentives

We find evidence that college leaders are following the money and that college personnel further down the institutional hierarchy (such as faculty and mid-level administrators) are aware that student outcomes now impact their institution’s bottom line. To be sure, of our 141 institutional respondents who felt comfortable assessing the size of annual budget variations, two-thirds indicated that their state’s performance funding program had little to no impact on their college’s budget.3 However, most of our institutional respondents also reported that the financial incentives attached to performance funding were having a substantial impact on campus efforts to improve student outcomes. Of the 124 institutional respondents answering this question,4 half (61) rated the impact as high. A mid-level administrator at a university in Tennessee put it this way:

I think it does have a big impact. And I think it establishes sort of officially that this is the business that we’re in, and we always should have been in this business. But now we’re going to be funded, and anybody who wants to do anything creative, new, expanding whatever, they are going to have to sort of justify it by the funding that comes with these numbers. So yeah, I mean, I think it’s a sea change, at least for us on this campus.

Disseminating Information on PF Goals and Methods

Disseminating information as to what the state priorities are and just how performance funding is intended to function can further help to align the motivations of policymakers and campus personnel (see Anderson 2014). State actors and institutional personnel in all three states testified to extensive efforts on the part of state higher education officials to communicate the goals and methods of their performance funding programs to local college personnel, either directly from the state or indirectly through senior college administrators. However, we also received many responses indicating that awareness of the programs was quite uneven within institutions. Nearly one-fifth (38 of 222) of our respondents stated that they had not received any communication—direct or indirect—from the state on the goals and methods of performance funding. Those reports tended to be concentrated among faculty and middle-level administrators (for similar findings on Washington State, see Jenkins et al. 2012). The main explanations for this lack of awareness involved competing demands on faculty time and attention, lack of faculty involvement in decision-making situations where performance funding was relevant, administrative decisions to hold back information when they felt it was not relevant to faculty, and communications breakdowns. In the end, however, of the 123 institutional respondents who rated the impact of the dissemination of information about program goals and methods on college efforts to improve student outcomes, 46 percent did so as high and 27 percent as medium. For example, a dean at an Indiana community college said this:

They’re really letting people know, “This is a serious issue.” And again, like I said, it’s not all being driven by the fact that it’s money involved, but there’s an awful lot of “It’s the right thing to do. This is a serious problem for the country; we need to see what we can do to solve that problem.”

Disseminating Information on Institutional Performance

Our data indicate that state efforts to mold institutional action through provision of information about how the institutions were doing on the state metrics were spottier and had less impact than their efforts to disseminate information about state goals. More than a third (79 of 221) of our institutional respondents said there was no communication, direct or indirect, from the state. Moreover, a large proportion did not respond when we asked them what impact state communication may have had on institutional efforts to improve student outcomes. Still, the impact of information about institutional performance could be considerable. Of the 101 who responded, 51 percent rated the impact as high and 27 percent as medium. A senior administrator of an Ohio university described the ability of performance funding programs to induce status-competition between institutions:

I’d say the financial impact was completely overshadowed by these other features about this university’s reputation and where it really wanted to focus and maintain its status, relative to the other public institutions in the state as well as some of the private schools with whom we know we compete for similar students.

Building Up Organizational Capacity

We find little evidence that building organizational capacity—to collect and analyze data on student outcomes, devise and fund interventions to improve them, and evaluate those interventions—was an important policy instrument in implementing performance funding. To be sure, the state officials we interviewed did mention some efforts to build up the capacity of colleges, such as Ohio’s building of a state data infrastructure that would make it easier for colleges to analyze data and Tennessee hosting two-day College Completion Academies at which participating institutions could learn about institutional practices to improve student outcomes (Dougherty et al. 2014a). Still, among the 173 institutional respondents who rated the extent of state effort to build up institutional capacity, 95 percent rated it as low or nonexistent. A mid-level Tennessee university administrator observed:

I just think the state is saying, “It’s up to you to find efficiencies, and it’s up to you to do what you need to do to increase outcomes. And if you do a good job, we’re going to give you more money.” But they didn’t [give] any kind of seed money to start any of these new things.

This weak state effort to build up the capacity of colleges to collect and analyze data on student outcomes, determine effective ways to improve them, pay the cost of those interventions, and evaluate their effectiveness is important. It contributes to one of the obstacles colleges encounter in trying to respond to performance funding: inadequate organizational capacity. We return to this point later.

We have no reason to believe that Indiana, Ohio, and Tennessee are unusual in their lack of sustained attention to capacity building. Little evidence indicates that other states with performance funding programs are devoting much attention to it either. We regard this lack of attention as a central problem with performance funding programs as they now exist.

ORGANIZATIONAL LEARNING IN RESPONSE TO PERFORMANCE FUNDING

In our interviews, we asked respondents what kind of deliberative process their colleges used to consider how to respond to the pressure from the state performance funding program for improved student outcomes (Dougherty et al., forthcoming; Jones et al. 2015). We discovered that the colleges relied both on their established bureaucratic processes and on special purpose deliberative structures to investigate and make decisions about policies and practices that would improve performance funding outcomes. The established bureaucratic “general administrative structures” have a long-standing place in the administrative hierarchy, typically existed before performance funding was implemented, and would most likely continue if performance funding were to end. They take such forms as a designated position, such as vice president for student effectiveness, or regularly constituted groups, such as a president’s or dean’s council. A dean at a Tennessee community college listed a variety of general purpose deliberative structures used to respond to performance funding:

There’s a vice president’s council which makes some decisions and then we have a learning council which is more the academic deans and the directors of financial aid and admissions . . . all those folks who are the support for the academic side of the house. And so, yes, we come together and we talk about what performance funding indicators . . . what we want those to be, what we think we can reach, how much we want to put into this particular indicator and how much we want to put into that one. And then we, as deans, take it back to our departments for conversations and get inputs from our departments.

However, we also found that colleges frequently used more informal and temporary organizational structures to monitor and improve their performance on state funding metrics. These “special purpose deliberative structures” have been set up for a specific goal, are often newer, are not part of the main bureaucratic administrative structure, and are not intended to be permanent. They take such forms as strategic planning committees, accreditation self-study task forces, or college committees to coordinate an institution’s response to external initiatives such as the Achieving the Dream and Completion by Design initiatives of the Lumina and Gates Foundations, which work with colleges to improve student outcomes. For example, in Indiana, special purpose structures arose in response to community colleges’ involvement with the Achieving the Dream (ATD) initiative and then became devices for responding to performance funding. A senior administrator at an Indiana community college noted how its ATD committee became the college’s vehicle for deliberation on how to respond to performance funding:

Once we joined Achieving the Dream . . . we convened panels of faculty and staff from the various regions to address individual issues like student orientation, individual academic plans, and these groups of faculty and staff came up with several proposals. . . . We have not to my knowledge had any meetings specifically for performance funding. We do have meetings on a regular basis though on, again, the Achieving the Dream goals. But this kind of similar, like I say, the performance funding has just kind of fallen [into a] one-to-one relationship with our Achieving the Dream efforts.

INSTITUTIONAL CHANGES IN KEEPING WITH THE AIMS OF PERFORMANCE FUNDING

In this section we examine how universities and community colleges in all three states altered their academic and student services policies, programs, and practices following the advent of performance funding in ways that relate to achieving the goals of performance funding. A major theme is the difficulty in disentangling the impact of performance funding from other factors that operated concurrently (for the full analysis, see Dougherty et al., forthcoming; Natow et al. 2014).

Determining the Impact of Performance Funding

In our interviews, we asked our institutional respondents what changes their institutions made in response to performance funding. However, many of our respondents found it difficult to answer this question in any simple way. They noted that performance funding has been but one of several concurrent external influences that seek to improve higher education institutional outcomes. States have recommended or even legislatively mandated such institutional changes as lowering the number of credits required for degrees, enhancing course articulation and transfer, and reforming developmental (remedial) education. Institutions are also influenced by accreditors, foundations, and other nonprofit associations—such as the Gates and Lumina foundations and Complete College America—that fund or otherwise advocate for particular reforms. In light of all of these concurrent influences, it is difficult to differentiate the impact of performance funding from that of other external influences (for a similar finding on Washington State, see Jenkins et al. 2012). For example, when asked about programmatic changes in response to performance funding, a senior administrator at a Tennessee university said this:

I think part of the challenge with your question is that the things that I’m walking through [with you] are not just simply because of the new [performance funding] formula or the old formula. They are the result of policy directives from the board. They are the results of questions from regional and professional accrediting entities. They are the result of public pressures. So it’s not just simply the formula, it’s a national mood and a national conversation around the importance of completion.

On the whole, there is reason to believe that the coincidence of performance funding with other policy initiatives to improve student outcomes has produced synergy rather than interference. Institutional responses to a given external initiative were often quite useful in responding to performance funding demands as well. Colleges frequently used special purpose deliberative structures developed to respond to accreditation demands or initiatives such as Achieving the Dream to also craft their responses to performance funding.

Changes in Academic Policies, Programs, and Practices

The two most common campus-level academic changes following performance funding adoption have been to alter developmental (remedial) education and to change course articulation and transfer. Other common changes include modifications to tuition and financial aid policies, registration and graduation procedures, and student services departments (Natow et al. 2014).

Developmental Education

Respondents at ten of our eighteen institutions—particularly at community colleges but also at some universities—reported making changes in developmental education (also known as remedial education). Changes to developmental education involved both curricular and instructional changes. One community college in our sample restructured its developmental education through preterm remediation, in which students could enroll in remedial classes during the summer before their first fall term. In other instances, developmental education students were enrolled in developmental courses at the same time as college-level courses. In Indiana, this corequisite model is a statewide mandate for community colleges separate from the performance funding program (Ivy Tech Community College 2014).

Performance funding provided an incentive for this insofar as developmental education success was a performance indicator for community colleges in Ohio and Tennessee. At the same time, in all three states, developmental education reform was mandated or incentivized by state legislation or other state or private initiatives separate from performance funding (Boatman 2012; Ivy Tech Community College 2014; Quint et al. 2013). Thus, although the developmental education reforms in these states are certainly consistent with the goals of performance funding, other forces were influential as well. It is difficult to know the extent to which performance funding influenced these changes.

Course Articulation and Transfer

Another common academic change, which was reported at eight of our eighteen institutions, was to improve course articulation and transfer, particularly between community colleges and universities. Performance funding certainly played a role because transfer numbers are a performance funding metric in Ohio and Tennessee. The performance-based funding formulas in those two states reward colleges for students transferring out to another institution with twelve or more credits (Ohio Board of Regents 2013; Tennessee Higher Education Commission 2011a, 2011b). But other influences are also at work. The Complete College Tennessee Act that revamped the higher education funding formula also mandated other efforts to improve transfer between community colleges and universities (State of Tennessee 2010).

Changes in Student-Services Policies, Programs, and Practices

The two most common campus-level student services changes after performance funding was adopted have been to advising and counseling services and to tutoring and supplemental instruction (for other changes, see Natow et al. 2014).

Advising and Counseling

All eighteen of our institutions made changes in advising and counseling. Such changes included adding more academic advisors or counselors, creating online advising systems, asking faculty members to play a greater role in student advising, and using early alert or early warning systems that notify advisors when students are in danger of dropping out. Institutions saw these changes as helping improve institutional performance on performance funding metrics for credit accrual and degree completion. However, some of these institutional responses were also seen as driven by state mandates independent of performance funding.

Tutoring and Supplemental Instruction

Next to advising, the most frequent student services changes involved tutoring and supplemental instruction. Respondents at thirteen of our eighteen institutions reported such changes. Tutoring changes included creating new tutoring centers, providing online tutoring, and requiring faculty to meet personally with students.

STUDENT OUTCOMES

Given the rather extensive changes institutions have made in response to performance funding, the question is whether this has resulted in a significant improvement in student outcomes. As it happens, we have no research definitively establishing that.

To be sure, we do have evidence that graduation numbers in Indiana, Ohio, and Tennessee have risen faster than enrollment in the years since the introduction of the performance funding 2.0 programs in those states (see Dougherty et al., forthcoming; Postsecondary Analytics 2013). However, this by no means settles the issue. Even if student outcomes improve after the introduction of performance funding, the improvements could be influenced by many other factors, such as growing enrollments (which alone could produce rising graduation numbers), modifications to state tuition and financial aid policies, and other efforts to improve student outcomes (such as recent state initiatives to improve counseling and advising, developmental education, and transfer between institutions). Hence, it is important to conduct multivariate statistical analyses that strive to control for the many other factors that might account for improvements in student outcomes.

Most of these multivariate analyses focus on graduation from public four-year colleges, though some also consider graduation from community colleges and retention in both two-year and four-year colleges. The studies compare states with and without performance funding using a variety of multivariate statistical techniques (such as difference-in-differences or hierarchical linear modeling) and controlling for a variety of institutional characteristics (such as median test scores, student income and racial composition, and institutional spending on instruction), state policies (such as average tuition for two-year and four-year colleges, state financial aid per student, and state appropriations per student), and state socioeconomic characteristics and conditions (such as population size and state unemployment rate) (Dougherty and Reddy 2013, table A2; Dougherty et al., forthcoming).

Four-Year College Graduation

Most of these studies focus on baccalaureate completions at public four-year colleges, analyzing either graduation rates or number of degrees awarded. The predominant finding is that performance funding does not have a significant impact on four-year graduation for institutions and states (Hillman, Tandberg, and Gross 2014; Larocca and Carr 2012; Rutherford and Rabovsky 2014; Sanford and Hunter 2011; Shin 2010; Shin and Milton 2004; Tandberg and Hillman 2014; Umbricht, Fernandez, and Ortagus 2015). For example, using a difference-in-differences design with state and year fixed effects to compare states with and without performance funding, David Tandberg and Nicholas Hillman (2014) examine the impact of performance funding on number of baccalaureate degrees awarded by public four-year colleges. They control for various higher education system characteristics (including percentage of students enrolled in the public four-year sector, in-state tuition at public two-year and four-year colleges, state aid per public FTE, and state appropriations per public FTE) and various state-level socioeconomic characteristics (including population size, poverty rate, unemployment rate, and gross state product per capita). Comparing states with and without performance funding for four-year colleges, the authors find no average impact of performance funding on changes between 1990 and 2010 in the number of baccalaureate degrees awarded by states with performance funding. As a robustness check, they do comparisons involving lagged and nonlagged effects and three different comparison groups of states without performance funding: all states, states contiguous to performance funding states, and states with coordinating-planning boards (the type most common among performance funding states).
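
To make the modeling approach concrete, the generic form of such a difference-in-differences specification can be written as follows. This is an illustrative sketch of the standard setup, not the exact equation estimated by Tandberg and Hillman (2014) or the other studies cited above.

\[
Y_{st} = \alpha_s + \tau_t + \beta \, PF_{st} + X_{st}'\gamma + \varepsilon_{st}
\]

Here Y_{st} is the outcome for state s in year t (for example, the number of baccalaureate degrees awarded), PF_{st} is an indicator equal to 1 when a performance funding program is in effect, X_{st} is the vector of higher education and socioeconomic controls, alpha_s and tau_t are state and year fixed effects, and beta is the estimated average effect of performance funding. Lagged specifications replace PF_{st} with PF_{s,t-k} to allow effects to appear only k years after adoption.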

Although the multivariate analyses of four-year graduation do not find that performance funding on average has an impact, there is an interesting finding. Tandberg and Hillman (2014) find that performance funding had a positive impact on bachelor’s degree production beginning seven years after the performance funding programs were established in the few states that had programs lasting that long. They note that this suggests that performance funding programs may need some time before they produce effects. Programs are sometimes phased in over time. Institutions need time to react to performance funding demands and make necessary changes. And enough time needs to pass to see students through to graduation, which often comes five or six years after college entrance (Tandberg and Hillman 2014; see also Dougherty et al., forthcoming).

Two-Year College Graduation

Two multivariate studies have been conducted on the impact of performance funding on student completions at community colleges (Hillman, Tandberg, and Fryar 2015; Tandberg, Hillman, and Barakat 2014). The authors find a significant impact on completion of short-term certificates but no impact, on average, on completion of long-term certificates or associate degrees. The latter finding has some interesting wrinkles, however.

Using a difference-in-differences fixed effects analysis comparing institutions in states with performance funding and those in various combinations of states without performance funding for community colleges (all states and neighboring states),5 two recent studies find that performance funding has no impact, on average, on associate degree completion (Hillman, Tandberg, and Fryar 2015; Tandberg, Hillman, and Barakat 2014). The control variables included higher education characteristics and state or local socioeconomic characteristics.6 However, despite finding no average effect, both studies did find more localized impacts of interest. Tandberg and his colleagues (2014) find that—across six separate equations—four states evidence a significant positive impact of performance funding on associate’s degree completion, although they also find evidence of a negative impact in six states, mixed impacts in three states, and no impact in six states. Moreover, Hillman and his colleagues (2015) find that performance funding for community colleges in Washington had a delayed impact on associate’s degree completion beginning four years after the program was established in 2007. They also find a positive impact of Washington’s Student Achievement Initiative on short-term certificate awards (less than one year) in comparisons of Washington with three combinations of states. However, performance funding had a negative impact on the awarding of long-term certificates.

Retention at Four-Year and Two-Year Colleges

A few multivariate studies have also been conducted of retention rates, and almost without exception they find no impact of performance funding. The exception is Roger Larocca and Douglas Carr (2012), who find that two-year colleges in states with performance funding had higher one-year retention rates than their counterparts in states without performance funding. However, Hillman, Tandberg, and Fryar (2015) find no impact of performance funding on community college retention in Washington. Four other studies also found no effect of performance funding on retention in public four-year colleges (Huang 2010; Larocca and Carr 2012; Rutherford and Rabovsky 2014; Sanford and Hunter 2011).

In sum, the multivariate studies conducted to date largely fail to find evidence that performance funding improves retention and graduation. However, several interesting findings of more localized effects involve delayed effects on four-year college graduation, impacts on short-term community college certificates, and, in some states, impacts on community college associate’s degrees.

These multivariate studies primarily examined PF 1.0 programs, which do not tie much state funding to performance indicators. Although PF 2.0 programs have become much more common, only a few existed before 2007 (see Dougherty and Natow 2015). Hence, only a few PF 2.0 programs are captured by the existing studies of performance funding impacts through 2010, and they are captured very early in their development. We have only three studies that examine performance funding 2.0 programs in any depth (Hillman, Tandberg, and Gross 2014; Hillman, Tandberg, and Fryar 2015; Umbricht, Fernandez, and Ortagus 2015). Nonetheless, it is instructive that all three find that performance funding 2.0 programs do not have a significant impact on student outcomes. For example, Hillman and his colleagues (2015) examine the impacts of performance funding in Indiana, Ohio, and Tennessee using a difference-in-differences analysis, controlling for the local unemployment rate and the following institutional characteristics: enrollment, proportion of students who are white, proportion part-time, tuition level, operating revenues, and revenues from the state. In eleven of twelve models (four for each state), they find that performance funding had no multiyear average positive impact on graduation numbers.7

Performance Funding Outcomes Outside Higher Education

Studies of the impact of performance accountability programs in other policy areas besides higher education have arrived at mixed results. Studies of the federal No Child Left Behind program and of similar state accountability programs in Florida and Texas have found evidence of significant impacts on student achievement, though these impacts are not uniform across subjects and grades (Dee and Jacob 2011; Deming et al. 2013; Rouse et al. 2007). On the other hand, a study of the impact of the Schoolwide Performance Bonus Program in New York City found no impact on student achievement (Marsh et al. 2011). Similarly, studies of the performance standards attached to the Job Training Partnership Act (JTPA) programs have also yielded mixed findings. They find that JTPA did lead training centers to produce the intended results in terms of immediate employment and short-term earnings improvement. However, those immediate results are very weakly correlated with earnings and employment eighteen and thirty months after completing training (Cragg 1997; Heckman, Heinrich, and Smith 2011).

If performance funding for higher education so far has had less impact than performance accountability in other policy areas, it could be simply because, until recently, it has not been tied to that much state funding. More pronounced impacts could emerge if states follow the lead of Tennessee and Ohio in tying much larger portions of state funding for higher education to performance metrics, though we do not yet have definitive data on what impact those programs have had (Dougherty et al., forthcoming). However, the lack of impact of performance funding for higher education so far could also be testimony to the substantial obstacles it encounters to its effective operation. Could the lack of impact stem from obstacles institutions and campus personnel encounter in responding effectively to performance funding (Dougherty and Reddy 2013; Hillman, Tandberg, and Gross 2014; Tandberg, Hillman, and Barakat 2014)? If so, what forms do such obstacles take? We now turn to analyzing the obstacles that higher education institutions encounter in responding to the demands of performance funding programs.

OBSTACLES TO EFFECTIVELY RESPONDING TO PERFORMANCE FUNDING

Consistent with previous research (Dougherty and Reddy 2013), we find that institutions in our three states encounter several persistent obstacles that hinder their efforts to perform well on the state metrics. Our respondents perceived improvement in student outcomes as primarily inhibited by the demographic and academic composition of their student bodies (in the case of community colleges and broad-access public universities), inappropriate performance funding metrics, and insufficient institutional capacity. Other obstacles mentioned less often included institutional resistance, inadequate state funding of higher education, insufficient institutional knowledge of performance funding, instability in performance funding indicators and measures, and insufficient state funding of performance funding (for our full analysis, see Dougherty et al., forthcoming; Pheatt et al. 2014).

Student Composition

With regard to student composition, sixty-three of our respondents at sixteen of the eighteen institutions stated that the most difficult obstacle to responding to the funding formula is that open-access institutions enroll many at-risk students who face social and economic challenges that make it difficult for them to persist and graduate and thus to contribute to good institutional results on state performance metrics. When asked about specific ways student composition hinders institutional performance, twenty respondents at ten institutions (mostly community colleges) pointed to student academic preparation. Their institutions, they reported, take in many students who are not well prepared academically and therefore less likely to do well on the state metrics, particularly graduation. An Ohio community college dean noted this:

I think our student population comes in incredibly unprepared and without the foundations skills, without what would be considered college level reading, writing and comprehension. So quite honestly . . . they just don’t have the skills—whether it be that they never learned how to study in high school, whether it be they got passed through high school—but they just don’t know how to attack college and the level of work that’s required in a college class.

Similarly, seventeen respondents at nine institutions (again mostly community colleges) pointed to the fact that a good number of their students come in without a desire for a degree, which also makes it less likely they will graduate. In fact, among college entrants surveyed in their first year as part of the 2003–2004 Beginning Postsecondary Students survey, 16 percent of two-year entrants but only 6 percent of four-year entrants stated that they did not intend to receive a certificate or degree (Berkner and Choy 2008, 7–8). From a high-level community college administrator in Tennessee, we heard this:

I think all of our sister institutions that are community colleges will be experiencing something very similar. . . . The students that come to community college may not all be intending to earn an associate’s degree. They may be coming to upgrade some of their skills as incumbent workers. There may be some students that are coming back to re-tool in certain areas. So a completion agenda may not always be first and foremost for a community college student the same way it would be for a four-year university student.

Although it is clear that these sentiments are heartfelt on the part of our community college respondents, they could have a self-serving element. The great stress on student composition as an obstacle could verge on “blaming the victim” and allow institutions to escape from having to examine how their policies and programs might be contributing to poor student outcomes (Kezar et al. 2008; Witham and Bensimon 2012). On the other hand, it would be unfair to the broad-access two-year and four-year colleges to argue that they do not face obstacles greater than those that selective resource-rich four-year institutions face.

Inappropriate Metrics

In good part because of the differences between institutions in student composition and organizational mission, many of our respondents (sixty-one respondents at seventeen institutions) also stated that institutional responsiveness to performance funding was often hindered by a poor match between performance funding metrics and institutional missions and capacities. Respondents at community colleges often perceived the state performance funding programs as being unfair insofar as they held them to the same graduation expectations as four-year institutions. These respondents argued that many students at community colleges do not intend to get a degree, unlike students at four-year institutions, or will have difficulty doing so in a timely fashion given their poorer academic preparation and more difficult life circumstances. As a senior community college administrator in Indiana noted,

The state [is] not understanding the mission of the community college, as compared to four-year universities. And they evaluate us on the same plane, or they try to. For example, people in a community college have a different mission. They may be married, they may be working, and they may be laid off. . . . It could be all of those things in life that can screw you up. . . . We should not be judged the same.

Meanwhile, respondents at high-capacity universities, particularly in Indiana, were frustrated because they felt their institutions had little room to improve. They felt there was a ceiling effect in that institutions already doing well had little room to make big jumps in student outcomes.

Inadequate Organizational Capacity

Finally, many of our respondents (forty-two respondents at fourteen institutions) pointed to their institutions’ lack of organizational capacity. The most frequently reported deficiency involved too little institutional research (IR) capacity. A Tennessee community college dean noted, “Any time you talk about implementing any programs or additional assessment . . . anything of that nature . . . [it] requires resources. And our IR department is woefully understaffed.” This underscores the importance of state support for the development of IR capacity. But as we note in our discussion of policy instruments, capacity building of this sort is something that the states have not paid much attention to (Dougherty et al., forthcoming; Reddy et al. 2014).

Tennessee had considerably fewer respondents mentioning obstacles than Indiana and Ohio did. This may in part be because Tennessee has had the longest history of performance funding, so more of the kinks may have been worked out, and college respondents may have become more comfortable with performance funding. Also, our data suggest that—in good part because of a long history of extensive consultation between the state higher education coordinating board and institutional officials (Dougherty and Natow 2015)—Tennessee college administrators and faculty were more aware of the performance funding policy in their state, and understood it better, than their counterparts in Indiana and Ohio. This would lessen reports of insufficient knowledge as an obstacle (see Reddy et al. 2014).

The presence of reported obstacles to institutions being able to respond effectively to performance funding pressures raises the specter that they may resort to illegitimate methods to succeed (Dougherty and Reddy 2013; Moynihan 2008). The sociologist Robert Merton identified this conjunction of high societal pressure to succeed but structural constraints on being able to do so legitimately—a condition he termed “anomie,” following the lead of Emile Durkheim—as a major source of deviance (Merton 1968, 1976). Do we see the organizational equivalent in the case of higher education institutions exposed to strong pressure to perform well by performance funding programs but also facing significant obstacles to doing so? That is the subject of our next section.

UNINTENDED IMPACTS OF PERFORMANCE FUNDING

Besides its intended impacts, performance funding can also generate unintended impacts not desired by policy framers.8 Our respondents reported numerous undesired impacts, actual and potential, particularly weakening of academic standards and restrictions in college admissions of less-prepared students who might not do as well on performance measures. These negative unintended impacts have been reported as well in Dougherty and Reddy’s review of the literature on performance funding in higher education (2013). Moreover, similar impacts—involving deterioration in service delivery quality and adverse risk selection (or “cream skimming”)—appear in analyses of the use of performance accountability in K–12 education (Rothstein 2008a, 2008b), social welfare programs (Wells and Johnson 2001), workforce training programs (Heckman et al. 2011; Rothstein 2008b), health care (Lake, Kvam, and Gold 2005; Rothstein 2008b; Stecher and Kirby 2004), and public services generally (Grizzle 2002; Heinrich and Marschke 2010; Moynihan 2008).

We classified instances as actual or observed when the interviewee reported that an impact had occurred or that concrete steps had been taken toward producing it (for example, the college had already taken specific steps to change admission practices in ways that restrict access for certain kinds of students). We classified unintended impacts as potential when the respondent noted the possibility of a certain impact occurring but it had not yet occurred and no clear steps had yet been taken toward producing it.

The unintended impacts most commonly mentioned were restrictions in admissions to college and weakening of academic standards. Others included compliance costs, reduced institutional cooperation, lower staff morale, reduced emphasis on missions not rewarded by performance funding, and weaker faculty voice in academic governance (see Dougherty et al., forthcoming; Lahr et al. 2014).

These unintended impacts may bear an important connection to the obstacles analyzed earlier. When institutions cannot succeed through legitimate means because they encounter major obstacles, they may resort to illegitimate ones to realize socially expected goals (see Merton 1968, 1976; Mica, Peisert, and Winczorek 2012).

Admission Restriction

Sixty-seven interviewees at five of nine community colleges and five of nine universities reported that restriction of admissions was an actual or potential unintended impact of performance funding. Forty-one described a potential impact; twenty-six reported an impact that had already occurred. All but one report of an actual impact came from university respondents.

Restriction of admission could improve institutional performance on performance funding metrics by lessening the proportion of students who are less prepared academically and otherwise less likely to graduate. For example, a senior administrator from an Indiana four-year institution said that because of the pressure [End Page 163] from performance funding, the institution is less likely to offer admission to “weaker” students “because if they are weaker . . . there is a chance they will bring down your performance numbers.” This might make organizational sense, but it is a troubling development at the societal level. Community colleges and broad-access four-year colleges have historically been committed to expanding higher education opportunity for less advantaged students. It is all the more troubling if they begin to back away from this mission at a time of great concern about increasing inequality in access to higher education (Karen and Dougherty 2005; Mettler 2014).

According to our respondents, restriction of admission of students who are less likely to graduate could occur through a variety of means, such as higher admission requirements, selective recruitment, and shifting institutional financial aid toward better-prepared students (see also Lambert 2015; Umbricht, Fernandez, and Ortagus 2015).

Higher Admissions Requirements

Clearly, colleges can restrict admission of less-prepared students by requiring higher standardized test scores and grade point averages or by accepting fewer conditionally admitted students. A mid-level nonacademic administrator at an Ohio university noted,

Instead of a graduation rate of 80 percent, we really need to bump that up so that we have a higher graduation rate. And some of that is being achieved by [changing] the type of student that we bring in. . . . So by raising our average ACT score of our incoming class by one point, the question is, “Can we anticipate then higher course completions, higher number of degrees awarded?” . . . So yes, there’s a deliberate approach being made by our enrollment management office.

Selective Recruitment

To maximize the likelihood of enrolling students who are more likely to graduate, institutions have increased, or might increase, their efforts to attract better-prepared students, including suburban, out-of-state, and international students. At the same time, respondents discussed how their institutions are deemphasizing, or might deemphasize, recruitment of students from high schools with many less well-prepared students. A senior administrator at a four-year institution in Ohio observed,

There’s a recognition [as has been brought up in some discussions] of the fact . . . that the more we focus on suburban kids with high GPAs and high ACT scores, the less we’re able to serve . . . an urban population that tends to be from poorer school districts. . . . I mean there’s a tension between continuing to recruit a very diverse student population and being an urban-serving institution and being an institution that has high performing students who are successful in getting a degree.

(quoted in Lahr et al. 2014)

As it happens, a news article in the Dayton Daily News (Lambert 2015) reported that a number of Ohio universities are increasing their efforts to recruit students from suburban high schools. A senior administrator at an Ohio public university is quoted as stating, “We are telling our recruiters to expand the variety of schools they go to. If you’re in Dayton, maybe not go to just Dayton Public, but also to Beavercreek and Centerville” (quoted in Lambert 2015).

Shifting the Focus of Financial Aid

Admissions can also be affected by shifting the focus of a college’s financial aid funds from assisting needy students to attracting better-prepared ones through so-called merit aid. A senior administrator at an Ohio community college explained how performance funding could encourage the college to offer scholarships to higher performing students who are more likely to complete:

My theory is that we’re going to be raising the bar for who we give some of our scholarships to. As I told the president, if it was my business I would be looking for ways to attract people that I thought were very likely to complete. And along with that, I would be looking for what are the tendencies or what are the attributes for those that tend to be non-completers. [End Page 164] Now I think that raises some ethical questions because we are an open-access institution, and so we still need to offer that access, but I think we also need to tweak and, again, encourage more completions as opposed to just numbers of enrollment.

Weakening Academic Standards

Fifty-five respondents at eight of nine community colleges and five of nine universities noted that performance funding could result, or had resulted, in colleges lowering their academic standards in order to maintain their retention and graduation rates. Two-thirds of these reports involved potential impacts; one-third involved impacts that respondents stated had occurred. Our respondents observed that academic standards are or could be weakened principally by lessening academic demands in class or by reducing degree requirements.

Lessening Class Demands

A senior campus administrator at an Indiana community college worried that the push for completions, the most heavily weighted metric within the Indiana performance-based funding formula, would force faculty and institutions to move students through to graduation without regard for whether academic standards are maintained: “It’s putting faculty in a position of the easiest way out is to lower the standards and get people through. And so it’s something that’s of great concern I think.” Similarly, a faculty member at an Ohio university discussed a feeling of “pressure” not to fail students by inflating grades:

Well, in an effort to promote student success, there is a substantial pressure to minimize the failure rates of the students in some of these undergraduate courses. And of course that would translate into inflation of grades in order to make sure that the students are passing all of these courses and so forth. So I as a faculty member have a concern as to the watering down of our course materials as well as the quality of our majors, the programs.

Calling attention to courses with low completion rates can lead faculty to decrease their academic demands (and therefore to grade more easily) to achieve higher rates of course completion.

Reducing Degree Requirements

Several respondents noted that their institutions have recently changed degree requirements to ensure that students receive their degrees as soon as possible. Although removing unnecessary barriers to graduation may often be a good change, the focus on rapid credential attainment can also affect learning negatively. Degree requirements can be weakened by reducing the number of credits required to complete a degree and by having students take easier courses. In Tennessee, a college dean cited watering down of academic demands to achieve higher completion numbers as a potential unintended impact of performance funding:

The push is to get students to graduate, or at least the message that we get is [that] students have to graduate. There’s concern among faculty [that] that’s going to become the overriding goal and they’re going to be forced to water down the curriculum, which does not sit well with faculty on any level. . . . A number of the programs have [a] very set curriculum, and there seems to be a push to change that just so that you can get students to be able to graduate. In other words, to substitute courses that aren’t necessarily in the curriculum and that doesn’t always sit well [with faculty].

Many of our reports of unintended impacts involved potential impacts, that is, forecasts of what might happen, particularly if performance funding demands become more intense. These reports could simply be testimony more to our respondents’ fears than to their understanding of processes actually unfolding. Still, half of the impacts mentioned were ones we classified as observed: reports not of possible impacts but of ones that had occurred. Furthermore, we have to keep in mind that our interviews occurred before Indiana, Tennessee, and especially Ohio had fully phased in their performance funding programs. Hence, we have to wonder how many of the potential unintended impacts mentioned might in time become [End Page 165] actual. Finally, even if we conclude that the potential unintended impacts will mostly remain only potential, they still testify to a widespread disquiet among higher education administrators and faculty that needs to be addressed by the advocates of performance funding.

The total number of reported unintended impacts varies across our three states: Tennessee reported the fewest, Ohio the most, and Indiana fell in between. Again, a possible explanation for why Tennessee has the fewest reports is that, of the three states, it has had the longest history with performance funding. This may have allowed institutions more time to become used to performance funding and given the state more time to devise solutions to unintended impacts that emerged. In addition, the high number of mentions in Ohio may be due in part to the fact that its program was extensively revised during our interviews there. The program may thus have weighed heavily on the minds of faculty and administrators, contributing to the higher number of unintended impacts reported.

SUMMARY AND CONCLUSIONS

We have analyzed the implementation and impacts of performance funding through the lens of three states regarded by many as leaders in that movement: Indiana, Ohio, and Tennessee. Based on extensive interviews with state officials and with staff of eighteen colleges and universities in those three states, we describe the policy instruments those states use to implement performance funding, the deliberative processes colleges use to devise their responses to performance funding, the impact of performance funding on institutional policies and programs and eventually on student outcomes, the obstacles institutions encountered in responding to performance funding demands, and the unintended impacts that ensued.

With regard to policy instruments, we find that states clearly deployed three: financial incentives, dissemination of information on the goals and intended methods of performance funding, and communication to institutions about their performance on the state metrics. Our respondents reported that these three instruments had a significant impact on institutional efforts to improve student outcomes. Although it is clear that the financial incentives were the most important policy instrument, it is also clear that the two informational policy instruments exerted an impact that supplemented and amplified the financial incentive. However, we saw little evidence of another possible instrument playing a significant role: building up the capacity of institutions to respond effectively to performance funding. For example, little evidence indicated any state efforts to enhance the capacity of institutions to collect and analyze data on student outcomes, to determine what might be the most effective solutions to improving those outcomes, to finance the implementation of those solutions, or to evaluate the effectiveness of those interventions. This absence contributes to an important obstacle encountered by colleges in responding to performance funding demands: insufficient organizational capacity.

In responding to performance funding, institutions drew on both general-purpose deliberative structures rooted in their bureaucracies and more evanescent special-purpose deliberative structures. The latter often arose to address other initiatives the colleges were responding to—such as accreditation association demands—but they also played a major role in institutional responses to performance funding.

Performance funding clearly spurred institutions to change their institutional policies and programs in order to improve student outcomes. However, many of our respondents found it difficult to gauge the relative importance of performance funding, given that it has been only one of several concurrent initiatives that states, accrediting associations, and policy groups have undertaken to improve student outcomes. Still, it appears that this joint influence produced synergy rather than interference, with responses to other external initiatives also facilitating college responses to performance funding. The two most commonly made campus-level academic changes following performance funding adoption have been to alter developmental (remedial) education and improve course articulation and transfer between community colleges and universities. [End Page 166] Meanwhile, the two most common student services changes have been to revamp advising and counseling services and to change tutoring and supplemental instruction.

Even if student outcomes improve after performance funding is introduced, these improvements could be tied to many other factors, such as rising enrollments; changes in state tuition and financial aid policies; initiatives by state governments, national policy groups, and accrediting associations to improve student outcomes; and institutional decisions to admit fewer at-risk students who are less likely to graduate. In Indiana, Ohio, and Tennessee, graduation numbers have increased at a greater rate than enrollments since the advent of their PF 2.0 programs. However, we cannot conclude that performance funding in these three states is producing these better student outcomes, because these figures do not control for a host of other possible causes. This caution is strongly reinforced by the fact that multivariate analyses of performance funding programs largely fail to find evidence that performance funding improves graduation or retention, although there is evidence of some interesting localized impacts. These multivariate studies, however, primarily examined PF 1.0 programs. We need more multivariate analyses of the more intensive PF 2.0 programs in states such as Ohio and Tennessee before we can reach definitive conclusions about PF 2.0.

If the impact of performance funding on student outcomes is limited, it may be attributable in part to obstacles that institutions encounter in responding to PF demands. We find that institutions in our three states encounter several persistent obstacles. Our respondents most often pointed to the presence of many at-risk students (particularly in the case of community colleges and broad-access public universities), inappropriate performance funding metrics that did not align well with institutional missions and characteristics, and inadequate institutional capacity.

Our interviewees also frequently reported performance funding impacts not publicly intended by those who designed the policies. These negative unintended impacts are similar to those reported by studies of performance accountability in other public services (Grizzle 2002; Heckman et al. 2011; Heinrich and Marschke 2010; Moynihan 2008; Rothstein 2008a, 2008b). The most commonly mentioned unintended impacts were restrictions in admissions to college and weakening of academic standards. These impacts may be rooted in the obstacles colleges encounter in responding to performance funding: colleges may resort to socially harmful actions because those actions allow them to meet the external demands placed on their organizations when socially legitimate means prove inadequate (see Merton 1968, 1976).

Our findings have a number of implications for research. Clearly, we need more multivariate studies of the impact of performance funding. We do not have enough studies of PF 2.0 programs, particularly ones that have been operating for a number of years, are fully phased in, and involve a large share of state funding for higher education, as in Tennessee and Ohio. We also need more studies that examine PF impacts on two-year college outcomes. This multivariate research should examine not just whether a state has performance funding but also the features of that program: for example, how long it has been in place, what proportion of total institutional funding it affects, which particular performance metrics drive funding allocations, and what other state programs affecting student outcomes (such as initiatives to revamp developmental education or improve transfer pathways) are operating alongside PF. In doing this, researchers should keep in mind that features of a state’s performance funding program can vary significantly over time (see Dougherty and Natow 2015). Finally, new studies should examine PF impacts not just on student outcomes but also on intermediate institutional processes that may produce improvements in student outcomes, such as institutional changes in developmental education, student advising, or institutional research.

Our findings also have important implications for policymaking. To reduce unintended impacts of performance funding, policymakers need to protect academic standards and reduce the temptation to restrict admission of at-risk students. To protect academic standards, [End Page 167] states and institutions can assess student learning, collect data on changes in degree requirements and course grade distributions, and survey faculty members to find out whether they are feeling pressure to weaken academic standards. To reduce restriction of student admissions, states should provide incentives for admitting and graduating at-risk students and compare only institutions with similar missions and student composition (Dougherty and Reddy 2013; Dougherty et al., forthcoming; Jenkins and Shulock 2013; Shulock and Jenkins 2011). These efforts would be strengthened by parallel efforts to overcome the obstacles that keep institutions from responding effectively to performance funding and that tempt them to resort to illegitimate methods to succeed. States should help colleges with many at-risk students better meet the needs of those students, create performance indicators and measures that better align with institutional missions, and act strongly to improve the capacity of colleges to engage in organizational learning (for more, see Dougherty et al., forthcoming).

This is a particularly important time to reflect on performance funding for higher education. It is now operating in over thirty states, with more in prospect, and it comes with great expectations that it will significantly improve student outcomes. It has seized the attention of college administrators and faculty and spurred—along with other policy initiatives—sizable changes in college academic and student-support policies, programs, and practices. At the same time, we do not yet have conclusive evidence that performance funding does indeed improve student outcomes in any significant way. Moreover, we have evidence that it may produce troubling unintended impacts such as a weakening of academic standards and restrictions in the admission of less prepared and less advantaged students at a time of rising inequality in higher education. Clearly, performance funding deserves close attention both from policymakers and from researchers.

Kevin J. Dougherty

Kevin J. Dougherty is associate professor of education policy and senior research associate at the Community College Research Center at Teachers College, Columbia University.

Sosanya M. Jones

Sosanya M. Jones is assistant professor of qualitative research methods and higher education at Southern Illinois University at Carbondale.

Hana Lahr

Hana Lahr is a doctoral student in education policy and research associate at the Community College Research Center at Teachers College, Columbia University.

Rebecca S. Natow

Rebecca S. Natow is senior research associate at the Community College Research Center at Teachers College, Columbia University.

Lara Pheatt

Lara Pheatt is a doctoral student in politics and education and research associate at the Community College Research Center at Teachers College, Columbia University.

Vikash Reddy

Vikash Reddy is a doctoral student in politics and education and research associate at the Community College Research Center at Teachers College, Columbia University.

Kevin Dougherty at dougherty@tc.edu, Teachers College, Columbia University, Box 11, 525 W. 120th St., New York, NY 10027
Sosanya M. Jones at smjones@siu.edu
Rebecca S. Natow at rebeccanatow@yahoo.com
Lara Pheatt at lep2148@tc.columbia.edu
Vikash Reddy at vtr2107@tc.columbia.edu.

We wish to thank Lumina Foundation for its support for this research. The views expressed in this report are those of its authors and do not necessarily represent the views of Lumina Foundation, its officers or employees. We also wish to thank Ronald Abrams, Steven Brint, Charles Clotfelter, Kevin Corcoran, Russ Deaton, Alicia Dowd, William Doyle, Nicholas Hillman, Davis Jenkins, Alison Kadlec, Marcus Kolb, Vanessa Morest, John Muffo, Richard Petrick, Jeffrey Stanley, Susan Shelton, David Tandberg, Sean Tierney, and William Zumeta for their comments on earlier papers and reports that have fed into this paper. Any remaining errors are our own.

REFERENCES

Anderson, James E. 2014. Public Policymaking, 8th ed. Boston, Mass.: Wadsworth.
Argyris, Chris, and Donald A. Schön. 1996. Organizational Learning II: Theory, Methods, and Practice. Reading, Mass.: Addison-Wesley.
Baldwin, Christopher, Estela M. Bensimon, Alicia C. Dowd, and Lisa Kleiman. 2011. “Measuring Student Success.” New Directions for Community Colleges 153(Spring): 75–88.
Berkner, Lutz, and Susan Choy. 2008. Descriptive Summary of 2003–04 Beginning Postsecondary Students: Three Years Later. NCES 2008–174. Washington, D.C.: National Center for Education Statistics.
Boatman, Angela. 2012. “Evaluating Institutional Efforts to Streamline Postsecondary Remediation: The Causal Effects of the Tennessee Developmental Course Redesign Initiative on Early Student Academic Success.” NCPR working paper. New York: National Center for Postsecondary Research. Accessed February 23, 2016. http://www.postsecondaryresearch.org/i/a/document/22651_BoatmanTNFINAL.pdf.
Burke, Joseph C., ed. 2002. Funding Public Colleges and Universities: Popularity, Problems, and Prospects. Albany: State University of New York Press.
Burke, Joseph C., and Associates, eds. 2005. Achieving Accountability in Higher Education: Balancing Public, Academic, and Market Demands. San Francisco, Calif.: Jossey-Bass.
Complete College America. 2013. “The Game Changers: Are States Implementing the Best Reforms to Get More College Graduates?” Washington, D.C.: Complete College America.
Cragg, Michael. 1997. “Performance Incentives in the Public Sector: Evidence from the Job Training Partnership Act.” Journal of Law, Economics, and Organization 13(1): 147–68.
Dee, Thomas, and Brian Jacob. 2011. “The Impact of No Child Left Behind on Student Achievement.” Journal of Policy Analysis and Management 30(3): 418–46.
Deming, David J., Sarah Cohodes, Jennifer Jennings, and Christopher Jencks. 2013. “School Accountability, Postsecondary Attainment, and Earnings.” NBER working paper no. 19444. Cambridge, Mass.: National Bureau of Economic Research.
DiMaggio, Paul J., and Walter W. Powell. 1991. “The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields.” In The New Institutionalism in Organizational Analysis, edited by W. W. Powell and [End Page 168] P. J. DiMaggio. Chicago: University of Chicago Press.
Dougherty, Kevin J., Sosanya M. Jones, Hana Lahr, Rebecca S. Natow, Lara Pheatt, and Vikash Reddy. 2014a. “Envisioning Performance Funding Impacts: The Espoused Theories of Action for State Higher Education Performance Funding in Three States.” CCRC working paper no. 63. New York: Columbia University. Accessed February 23, 2016. http://ccrc.tc.columbia.edu/publications/envisioning-performance-funding-impacts.html.
———. 2014b. “Performance Funding for Higher Education: Forms, Origins, Impacts, and Futures.” Annals of the American Academy of Political and Social Science 655(1): 163–84.
———. Forthcoming. Performance Funding for Higher Education. Baltimore, Md.: Johns Hopkins University Press.
Dougherty, Kevin J., and Rebecca S. Natow. 2015. The Politics of Performance Funding for Higher Education: Origins, Discontinuations, and Transformations. Baltimore, Md.: Johns Hopkins University Press.
Dougherty, Kevin J., and Vikash Reddy. 2013. Performance Funding for Higher Education: What Are the Mechanisms? What Are the Impacts? ASHE Higher Education Report. San Francisco, Calif.: Jossey-Bass.
Dowd, Alicia C., and Vincent P. Tong. 2007. “Accountability, Assessment, and the Scholarship of ‘Best Practice.’” In Higher Education: Handbook of Theory and Research, vol. 22, edited by J. C. Smart. Dordrecht: Springer.
Erikson, Robert S., Gerald C. Wright, and John P. McIver. 2006. “Public Opinion in the States: A Quarter Century of Change and Stability.” In Public Opinion in State Politics, edited by Jeffrey E. Cohen. Stanford, Calif.: Stanford University Press.
Ewell, Peter T. 1999. “Linking Performance Measures to Resource Allocation: Exploring Unmapped Terrain.” Quality in Higher Education 5(3): 191–209.
Ferguson, Margaret. 2013. “Governors and the Executive Branch.” In Politics in the American States, 10th ed., edited by Virginia Gray, Russell L. Hanson, and Thad Kousser. Washington, D.C.: CQ Press.
Forsythe, Dall W., ed. 2001. Quicker, Better, Cheaper? Managing Performance in American Government. Albany, N.Y.: Rockefeller Institute Press.
Gray, Virginia, Russell Hanson, and Thad Kousser, eds. 2012. Politics in the American States: A Comparative Analysis, 10th ed. Washington, D.C.: CQ Press.
Grizzle, Gloria A. 2002. “Performance Measurement and Dysfunction: The Dark Side of Quantifying Work.” Public Performance and Management Review 25(4): 363–69.
Hamm, Keith E., and Gary F. Moncrief. 2013. “Legislative Politics in the States.” In Politics in the American States, 10th ed., edited by Virginia Gray, Russell Hanson, and Thad Kousser. Washington, D.C.: CQ Press.
Harnisch, Thomas L. 2011. “Performance-Based Funding: A Re-Emerging Strategy in Public Higher Education Financing.” A Higher Education Policy Brief. Washington, D.C.: American Association of State Colleges and Universities.
Heckman, James J., Carolyn J. Heinrich, Pascal Courty, Gerald Marschke, and Jeffrey Smith. 2011. The Performance of Performance Standards. Kalamazoo, Mich.: W. E. Upjohn Institute.
Heckman, James J., Carolyn J. Heinrich, and Jeffrey Smith. 2011. “Do Short-Run Performance Measures Predict Long-Run Impacts?” In The Performance of Performance Standards, edited by James J. Heckman et al. Kalamazoo, Mich.: W. E. Upjohn Institute.
Heinrich, Carolyn J., and Gerald Marschke. 2010. “Incentives and Their Dynamics in Public Sector Performance Management Systems.” Journal of Policy Analysis and Management 29(1): 183–208.
Hillman, Nicholas W., Alisa H. Fryar, David A. Tandberg, and Valerie Crespin-Trujillo. 2015. “Evaluating the Efficacy of Performance Funding in Three States: Tennessee, Ohio, and Indiana.” Unpublished paper. University of Wisconsin, Madison.
Hillman, Nicholas W., David A. Tandberg, and Alisa H. Fryar. 2015. “Evaluating the Impacts of ‘New’ Performance Funding in Higher Education.” Educational Evaluation and Policy Analysis. doi: 10.3102/0162373714560224.
Hillman, Nicholas W., David A. Tandberg, and Jacob P. K. Gross. 2014. “Performance Funding in Higher Education: Do Financial Incentives Impact College Completions?” Journal of Higher Education 85(6): 826–57.
Holbrook, Thomas M., and Raymond J. La Raja. 2013. “Parties and Elections.” In Politics in the American States, 10th ed., edited by Virginia Gray, Russell L. Hanson, and Thad Kousser. Washington, D.C.: CQ Press. [End Page 169]
Honig, Meredith I. 2006. “Complexity and Policy Implementation: Challenges and Opportunities for the Field.” In New Directions in Education Policy Implementation: Confronting Complexity, edited by Meredith I. Honig. Albany: State University of New York Press.
Huang, Y. 2010. “Performance Funding and Retention Rates.” Unpublished paper. Michigan State University.
Ivy Tech Community College. 2014. “The Co-Requisite Initiative: An Initial Assessment of Its Impact at Ivy Tech Community College—Central Indiana Region.” Presentation, March 25. Indianapolis, Ind.: Ivy Tech Community College. Accessed February 23, 2016. https://s3.amazonaws.com/jngi_pub/gce14/Co-Requisite+Initiative.pdf.
Jenkins, Davis. 2011. “Redesigning Community Colleges for Completion: Lessons from Research on High-Performance Organizations.” CCRC working paper no. 24. New York: Columbia University.
Jenkins, Davis, and Nancy Shulock. 2013. “Metrics, Dollars, and Systems Change: Learning from Washington’s Student Achievement Initiative to Design Effective Postsecondary Performance Funding Policies.” State Policy Brief. New York: Columbia University, Teachers College, Community College Research Center.
Jenkins, Davis, John Wachen, Colleen Moore, and Nancy Shulock. 2012. “Washington State Student Achievement Initiative Policy Study: Final Report.” New York: Columbia University, Teachers College, Community College Research Center.
Jones, Dennis P. 2013. “Outcomes-Based Funding: The Wave of Implementation.” Indianapolis, Ind.: Complete College America. Accessed February 23, 2016. http://www.completecollege.org/pdfs/Outcomes-Based-Funding-Report-Final.pdf.
Jones, Sosanya M., Kevin J. Dougherty, Hana Lahr, Rebecca S. Natow, Lara Pheatt, and Vikash Reddy. 2015. “Organizational Learning by Colleges Responding to Performance Funding: Deliberative Structures and Their Challenges.” CCRC working paper no. 79. New York: Columbia University.
Karen, David, and Kevin J. Dougherty. 2005. “Necessary But Not Sufficient: Higher Education as a Strategy of Social Mobility.” In Higher Education and the Color Line, edited by Gary Orfield, Patricia Marin, and Catherine Horn. Cambridge, Mass.: Harvard Education Press.
Kerrigan, Monica R. 2010. “Data-Driven Decision Making in Community Colleges: New Technical Requirements for Institutional Organizations.” EdD diss., Columbia University, Teachers College, New York.
Kezar, Adrianna. 2005. “What Campuses Need to Know About Organizational Learning and the Learning Organization.” New Directions for Higher Education 131(Autumn): 7–22.
———. 2012. “Organizational Change in a Global, Postmodern World.” In The Organization of Higher Education: Managing Colleges for a New Era, edited by M. Bastedo. Baltimore, Md.: Johns Hopkins University Press.
Kezar, Adrianna, William J. Glenn, Jaime Lester, and Jonathan Nakamoto. 2008. “Examining Organizational Contextual Features That Affect Implementation of Equity Initiatives.” Journal of Higher Education 79(2): 125–59.
Lahr, Hana, Lara Pheatt, Kevin J. Dougherty, Sosanya M. Jones, Rebecca S. Natow, and Vikash Reddy. 2014. “Unintended Impacts of Performance Funding on Community Colleges and Universities in Three States.” CCRC working paper no. 78. New York: Columbia University.
Lake, Tim, Chris Kvam, and Marsha Gold. 2005. “Literature Review: Using Quality Information for Health Care Decisions and Quality Improvement.” Cambridge, Mass.: Mathematica Policy Research.
Lambert, Lance. 2015. “State Funding Pushes Up College Standards: Ohio’s New Funding Formula Puts a Premium on ‘College-Ready’ High School Graduates.” Dayton Daily News, August 22.
Lane, Jason E., and Jussi A. Kivisto. 2008. “Interests, Information, and Incentives in Higher Education: Principal-Agent Theory and Its Potential Applications to the Study of Higher Education Governance.” In Higher Education: Handbook of Theory and Research, edited by J. C. Smart. New York: Springer.
Larocca, Roger, and Douglas Carr. 2012. “Higher Education Performance Funding: Identifying Impacts of Formula Characteristics on Graduation and Retention Rates.” Paper presented to the Western Social Science Association Annual Conference. Oakland, Mich.: Oakland University.
Lumina Foundation. 2009. “Four Steps to Finishing First: An Agenda for Increasing College Productivity [End Page 170] to Create a Better-Educated Society.” Indianapolis, Ind.: Lumina Foundation. Accessed February 23, 2016. http://www.luminafoundation.org/publications/Four_Steps_to_Finishing_First_in_Higher_Education.pdf.
Marsh, Julie A., Matthew G. Springer, Daniel F. McCaffrey, Kun Yuan, Scott Epstein, Julia Koppich, Nidhi Kalra, Catherine DiMartino, and Art Peng. 2011. A Big Apple for Educators: New York City’s Experiment with Schoolwide Performance Bonuses. Santa Monica, Calif.: RAND Corp.
Massy, William F. 2011. “Managerial and Political Strategies for Handling Accountability.” In Accountability in Higher Education, edited by B. Stensaker and L. Harvey. New York: Routledge.
McDonnell, Lorraine M., and Richard F. Elmore. 1987. “Getting the Job Done: Alternative Policy Instruments.” Educational Evaluation and Policy Analysis 9(2): 133–52.
McGuinness, Aims C., Jr. 2003. “Models of Postsecondary Education Coordination and Governance in the States.” StateNote Report. Denver, Colo.: Education Commission of the States.
Merton, Robert K. 1968. Social Theory and Social Structure, revised and enlarged ed. New York: Free Press.
———. 1976. Sociological Ambivalence and Other Essays. New York: Free Press.
Mettler, Suzanne. 2014. Degrees of Inequality: How the Politics of Higher Education Sabotaged the American Dream. New York: Basic Books.
Mica, Adrianna, Arkadiusz Peisert, and Jan Winczorek, eds. 2012. Sociology and the Unintended. New York: Peter Lang.
Moynihan, Daniel P. 2008. The Dynamics of Performance Management: Constructing Information and Reform. Washington, D.C.: Georgetown University Press.
National Conference of State Legislatures. 2015. “Performance-Based Funding for Higher Education.” Accessed December 16, 2015. http://www.ncsl.org/research/education/performance-funding.aspx.
Natow, Rebecca S., Lara Pheatt, Kevin J. Dougherty, Sosanya M. Jones, Hana Lahr, and Vikash Reddy. 2014. “Institutional Changes to Organizational Policies, Practices, and Programs Following the Adoption of State-Level Performance Funding Policies.” CCRC working paper no. 76. New York: Columbia University.
Nodine, Thad, Andrea Venezia, and Kathy Bracco. 2011. “Changing Course: A Guide to Increasing Student Completion in Community Colleges.” San Francisco: WestEd. Accessed February 23, 2016. http://knowledgecenter.completionbydesign.org/sites/default/files/changing_course_V1_fb_10032011.pdf.
Ohio Board of Regents. 2013. “Draft State Share of Instruction FY2014 with FY2013 Actuals.” Columbus: Ohio Board of Regents.
Pfeffer, Jeffrey, and Gerald Salancik. 1978. The External Control of Organizations. New York: Harper & Row.
Pheatt, Lara, Hana Lahr, Kevin J. Dougherty, Sosanya M. Jones, Rebecca S. Natow, and Vikash Reddy. 2014. “Obstacles to the Effective Implementation of Performance Funding: A Multi-State Cross-Case Analysis.” CCRC working paper no. 77. New York: Columbia University.
Postsecondary Analytics. 2013. What’s Working? Outcomes-Based Funding in Tennessee. Washington, D.C.: HCM Associates.
Quint, Janet C., Shanna S. Jaggars, D. Crystal Byndloss, and Asya Magazinnik. 2013. Bringing Developmental Education to Scale: Lessons from the Developmental Education Initiative. New York: MDRC. Accessed February 23, 2016. http://www.mdrc.org/sites/default/files/Bringing%20Developmental%20Education%20to%20Scale%20FR.pdf.
Radin, Beryl A. 2006. Challenging the Performance Movement: Accountability, Complexity, and Democratic Values. Washington, D.C.: Georgetown University Press.
Reddy, Vikash, Hana Lahr, Kevin J. Dougherty, Sosanya M. Jones, Rebecca S. Natow, and Lara Pheatt. 2014. “Policy Instruments in Service of Performance Funding: A Study of Performance Funding in Three States.” CCRC working paper no. 75. New York: Columbia University.
Reindl, Travis, and Dennis P. Jones. 2012. “Raising the Bar: Strategies for Increasing Postsecondary Educational Attainment with Limited Resources.” Presentation to the NGA National Summit on State Government Redesign. Washington, D.C. (December 5, 2012).
Reindl, Travis, and Ryan Reyna. 2011. “From Information to Action: Revamping Higher Education Accountability Systems.” Washington, D.C.: National Governors Association. Accessed February 23, 2016. http://www.nga.org/files/live/sites/NGA/files/pdf/1107C2Calif.CTIONGUIDE.PDF. [End Page 171]
Rothstein, Richard. 2008a. Grading Education: Getting Accountability Right. New York: Teachers College Press.
———. 2008b. “Holding Accountability to Account: How Scholarship and Experience in Other Fields Inform Exploration of Performance Incentives in Education.” Working paper no. 2008–04. Washington, D.C.: Economic Policy Institute. Accessed February 23, 2016. http://www.epi.org/publication/wp_accountability/.
Rouse, Cecilia E., Jane Hannaway, Dan Goldhaber, and David Figlio. 2007. “Feeling the Florida Heat? How Low-Performing Schools Respond to Voucher and Accountability Pressure.” Washington, D.C.: Urban Institute.
Rutherford, Amanda, and Thomas Rabovsky. 2014. “Evaluating Impacts of Performance Funding Policies on Student Outcomes in Higher Education.” The Annals of the American Academy of Political and Social Science 655(1): 185–206.
Rutschow, Elizabeth Z., Lashawn Richburg-Hayes, Thomas Brock, Genevieve Orr, Oscar Cerna, Dan Cullinan, Monica R. Kerrigan, Davis Jenkins, Susan Gooden, and Kasey Martin. 2011. Turning the Tide: Five Years of Achieving the Dream in Community Colleges. New York: MDRC.
Sanford, Thomas, and James M. Hunter. 2011. “Impact of Performance Funding on Retention and Graduation Rates.” Educational Policy Analysis Archives 19(33): 1–30.
Shin, Jung-Cheol. 2010. “Impacts of Performance-Based Accountability on Institutional Performance in the U.S.” Higher Education 60(1): 47–68.
Shin, Jung-Cheol, and Sande Milton. 2004. “The Effects of Performance Budgeting and Funding Programs on Graduation Rate in Public Four-Year Colleges and Universities.” Education Policy Analysis Archives 12(22): 1–26.
Shulock, Nancy, and Davis Jenkins. 2011. “Performance Incentives to Improve Community College Completion: Learning From Washington State’s Student Achievement Initiative.” A State Policy Brief. New York: Columbia University, Teachers College, Community College Research Center. Accessed February 23, 2016. http://ccrc.tc.columbia.edu/publications/performance-incentives-college-completion.html.
Snyder, Martha J. 2011. “Role of Performance Funding in Higher Education’s Reform Agenda: A Glance at Some State Trends.” Presentation given at the Annual Legislative Institute on Higher Education, National Conference of State Legislatures. Denver, Colo. (October 2011).
———. 2015. “Driving Better Outcomes: Typology and Principles to Inform Outcomes-Based Funding Models.” Washington, D.C.: HCM Strategists.
State of Tennessee. 2010. “Complete College Tennessee Act of 2010.” Tenn. Stat. 2010. Nashville: Tennessee Higher Education Commission. Accessed December 16, 2015. http://www.tn.gov/thec/topic/complete-college-tn-act.
Stecher, Brian, and Sheila N. Kirby, eds. 2004. Organizational Improvement and Accountability: Lessons for Education from Other Sectors. Santa Monica, Calif.: RAND Corp.
Tandberg, David A., and Nicholas W. Hillman. 2014. “State Higher Education Performance Funding: Data, Outcomes, and Causal Relationships.” Journal of Education Finance 39(3): 222–43.
Tandberg, David A., Nicholas W. Hillman, and Mohamed Barakat. 2014. “State Higher Education Performance Funding for Community Colleges: Diverse Effects and Policy Implications.” Teachers College Record 116(12): 1–31.
Tennessee Higher Education Commission. 2011a. “Outcomes Formula Technical Details.” Presentation. Nashville (May 17, 2011). Accessed December 16, 2015. http://slideplayer.com/slide/4021430/.
———. 2011b. “Outcomes Based Funding Formula.” Accessed December 16, 2015. http://tn.gov/thec/article/2010-2015-funding-formula.
Umbricht, Mark R., Frank Fernandez, and Justin C. Ortagus. 2015. “An Examination of the (Un)Intended Consequences of Performance Funding in Higher Education.” Educational Policy: 1–31. doi: 10.1177/0895904815614398.
U.S. Census Bureau. 2012. Statistical Abstract of the United States, 2012. Washington, D.C.: Government Printing Office. Accessed February 23, 2016. https://www.census.gov/library/publications/2011/compendia/statab/131ed.html.
Wells, Susan J., and Michelle Johnson-Motoyama. 2001. “Selecting Outcome Measures for Child Welfare Settings: Lessons for Use in Performance Management.” Children and Youth Services Review 23(2): 169–99.
Witham, Keith A., and Estela M. Bensimon. 2012. “Creating a Culture of Inquiry Around Equity and [End Page 172] Student Success.” In Creating Campus Cultures: Fostering Success Among Racially Diverse Student Populations, edited by Samuel D. Museus and Uma M. Jayakumar. New York: Routledge.
Zumeta, William, and Alicia Kinne. 2011. “Accountability Policies: Directions Old and New.” In The States and Public Higher Education Policy: Affordability, Access, and Accountability, 2nd ed., edited by Donald E. Heller. Baltimore, Md.: Johns Hopkins University Press. [End Page 173]

Footnotes

1. Unlike the other two states, Tennessee did not discontinue its earlier PF 1.0 program. It now operates both types of programs.

2. The Ivy Tech system in Indiana operates as a single community college, with the separate campuses reporting to a Central Office. Only one public two-year college—Vincennes University—is not part of the Ivy Tech system.

3. Several factors militated against a big financial impact: the use of three-year rolling averages rather than annual statistics; hold-harmless provisions in the first few years of the programs that limited their impact; the declining state share of total institutional revenues and concomitant rise in the tuition share of revenues; and—in Indiana and in Ohio for community colleges until recently—the small proportion of state funding driven by performance indicators (for more detail, see Reddy et al. 2014).

4. This represented 56 percent of our institutional respondents. This number was kept down in good part by the fact that we did not begin asking this question until after our first round of interviews in Ohio and Tennessee.

5. Tandberg, Hillman, and Barakat (2014) also include states with state coordinating or planning boards as a comparison group.

6. For the Tandberg and colleagues (2014) study, the higher education system control variables included the percentage of students enrolled in the community college sector, in-state tuition at public two-year and four-year colleges, state aid per public FTE, and state appropriations per public FTE; the socioeconomic controls included state population size, poverty rate, and unemployment rate. For the Hillman and colleagues (2015) study, the higher education institution controls included percentage enrolled part-time, percentage white, percentage of revenues from state appropriations, tuition and fees, and federal and state grant aid per FTE, whereas the socioeconomic control variables were the size of the county labor force and the county unemployment rate.
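To make the general structure of such analyses concrete, a stylized difference-in-differences specification with institution and year fixed effects (offered purely as an illustration of the form these panel studies typically estimate, not as the exact model used in either study) might be written as

$$
Y_{it} = \beta \, PF_{s(i)t} + \gamma' X_{it} + \delta' Z_{s(i)t} + \alpha_i + \tau_t + \varepsilon_{it},
$$

where $Y_{it}$ is a student outcome (for example, degrees awarded) at institution $i$ in year $t$; $PF_{s(i)t}$ indicates whether institution $i$'s state had performance funding in force in year $t$; $X_{it}$ and $Z_{s(i)t}$ are institution-level and state-level controls of the kind listed above; $\alpha_i$ and $\tau_t$ are institution and year fixed effects; and $\beta$ is the estimated effect of performance funding.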

7. When the authors examine performance on outcomes year by year, significant impacts begin appearing two to three years after the state PF 2.0 programs were established, particularly in Indiana. This raises the possibility that performance funding may have lagged effects.

8. However, we should add that those outcomes—though unintended by policy designers—may actually be intended by institutional actors. They may be quite happy to make their institutions more selective, even if this is not the intent of the state performance funding program. We wish to thank Dr. Tiffany Jones of the Southern Education Foundation for her recommendation that we clarify what is unintended and intended in the impacts of performance funding.
