Johns Hopkins University Press
Abstract

In 2010, Tennessee enacted the Complete College Tennessee Act (CCTA), which increased the proportion of state performance-based funding from 5.45% to 85% and added a 40% funding premium for progression and degree completions by adult students and low-income students. We collect data from 2001-02 to 2014-15 and use difference-in-differences estimation to compare a variety of counterfactual scenarios and examine the impacts of the CCTA. In response to the CCTA, community colleges in Tennessee saw, on average, a 10% to 18% decline in the number of adult students and a 17% to 31% increase in the number of low-income students. Additionally, the CCTA produced no changes in associate degree completions yet significant increases in short-term certificates (192% to 249%) and medium-term certificates (129% to 144%).

Keywords

higher education finance, performance funding, community college, policy analysis, resource dependence

Due to national concerns over college completion rates and the increasing price of higher education, a growing number of states have implemented performance funding (PF). PF is defined as a higher education policy that directly ties state appropriations to the student outcome metrics of public colleges, such as retention rates, degrees awarded, and credits completed (Burke & Minassians, 2003). In the 1990s, numerous states adopted PF, also known as performance-based funding (PBF) or outcomes-based funding (OBF), yet many of these policy adoptions were discontinued due to economic downturns, funding instability, and political resistance (Dougherty, Natow, Hare, Jones, & Vega, 2011).

The renewed interest in PF in higher education has been fueled by political pressures and contentions that public colleges have failed to retain and graduate students, suggesting that reforms and oversight are necessary to increase accountability and ensure that public tax dollars are not being wasted. As of 2018, 29 states were operating a PF policy, with 4 more states developing a policy (Li, 2018), although some sources suggest as many as 46 states operating or considering a PF policy (Gándara & Rutherford, 2018). Some states allocate additional funding for completions by underrepresented groups, such as students of color, Pell Grant recipients, first-generation students, and adult students.

Established in 1979, Tennessee has the longest-running PF policy in the United States (Banta, Rudolph, Dyke, & Fisher, 1996). Tennessee made significant changes and adopted a new PF policy when the state passed the Complete College Tennessee Act (CCTA) in 2010. First, the CCTA increased PF from 5.45% to 85% of total higher education funding in the state (Tennessee Higher Education Commission, 2015a), which represents one of the strongest commitments to PF among all participating states (National Conference of State Legislatures, 2015). Tennessee's PF formula allocated appropriations for the total number of completions of associate degrees, short-term certificates (requiring less than one year of study; less than 24 credit hours), and medium-term certificates (requiring between one and two years of study; 24 credit hours or more). Second, the CCTA introduced a 40% premium for adult students (aged 25 and older) and Pell-eligible students who meet retention and credential completion outcomes (Tennessee Higher Education Commission, 2015a).

Scholars have shown that the adoption of PF may lead to various unintended consequences, such as restricting admissions for students who are more likely to struggle academically (Kelchen & Stedrak, 2016; Umbricht, Fernandez, & Ortagus, 2017). As open-access institutions, community colleges are not able to restrict student admissions, but they may change their behavior in other strategic ways to optimize the likelihood of improving outcomes and receiving performance-based state appropriations. Dougherty and Reddy (2011) noted that community colleges may limit their enrollment numbers of disadvantaged students by restricting outreach efforts to high schools with high numbers of low-income students and limiting course offerings for developmental or adult education. Given these dynamics and the major changes introduced by the Complete College Tennessee Act, we posed the following research questions:

  1. Did the Complete College Tennessee Act influence the enrollment of adult students and the enrollment of low-income students?

  2. Did the Complete College Tennessee Act influence the number of completions of associate degrees, short-term certificates, and medium-term certificates?

Literature Review

Policy Context: Performance Funding in Tennessee

Between 1979 and 2011, Tennessee went through eight different PF policy revisions (Sanford & Hunter, 2011). Across those revisions, the proportion of institutions' budgets derived from their performance rose from 2% to 5%, then to 5.45%, and ultimately to 85%, as bonus funding changed to base funding and various student outcome metrics were modified, added, and discontinued (Banta et al., 1996; Callahan et al., 2017; Sanford & Hunter, 2011). Analyzing data from 1995-2009, Sanford and Hunter (2011) concluded that the 1997 addition of retention rates and six-year graduation rates as metrics did not create any changes in these outcomes at four-year colleges. The authors suggested that 5.45% of the funding allocation was not enough to catalyze the institutional changes necessary to improve retention and completion.

In 2010, Tennessee enacted a significant policy change by passing the Complete College Tennessee Act (CCTA). According to the Tennessee Higher Education Commission (2015b), the outcomes-based formula introduced by the CCTA rewards institutions "for the production of outcomes that further the educational attainment and productivity goals of the state Master Plan" (p. 1). This Act shifted the 5.45% outcomes-based funding level to 85% outcomes-based funding of the state's unrestricted appropriations, with the remaining 15% for operations and maintenance (Sanford & Hunter, 2011). Tennessee's model, which remained active from 2010-2015, represented a radically new approach to funding public institutions due to its substantial commitment to PF (Ness, Deupree, & Gándara, 2015). Tennessee's model was implemented over a three-year period to ensure funding stability (Callahan et al., 2017). The model applied to allocations of higher education appropriations starting in 2011-12 (Dougherty & Reddy, 2013). Ten percent of the outcomes-based portion of state appropriations is based on course completions, 65% is based on progression and degree completion metrics, and the remaining 25% is tied to factors associated with institutional mission (Tennessee Higher Education Commission, 2015a).

There are two unique features of the 2010-2015 Tennessee PF model that directly motivated our study. First, the CCTA provides a 40% premium as part of the base appropriations for the number of adult students (aged 25 years or older) and Pell Grant recipients who reach any of the following milestone metrics: completing 12, 24, or 36 semester credit hours, completing an associate degree, or completing a certificate (Ness et al., 2015; Tennessee Higher Education Commission, 2015a). These premiums are designed to encourage colleges to retain historically disadvantaged students who may be at greater risk of dropping out by providing concentrated resources. Second, a defining feature of Tennessee's latest formula is that it differentiates between the number of completions of associate degrees, certificates requiring one to two years (medium-term certificates), and certificates requiring less than one year (short-term certificates), and equally rewards completions at all three levels of credentials. There are additional outcomes incentivized in the funding formula, such as the number of students who transfer after earning 12 credits and the number of students taking dual enrollment courses. For the present study, we focus on the number and proportion of adult and low-income students and the completion of certificates and associate degrees.

Tennessee revised its PF formula again for 2015-2020 (Tennessee Higher Education Commission, 2015c), including the elimination of outcomes for short- and medium-term certificates, so we purposefully excluded data following 2014-15.

Performance Funding Impacts

Existing research on PF has examined two broad areas: impacts on institutional behaviors (typically qualitative work) and impacts on retention and degree completion (multivariate quantitative analyses) (Dougherty et al., 2016b). Dougherty et al. (2014) found that PF has contributed to greater data usage and improvements in academic and student support services. PF is also related to colleges' efforts to increase their orientation and first-year programming (Natow et al., 2014), streamline course articulation and transfer (Natow et al., 2014), and increase student advising and tutoring services (Dougherty et al., 2014, 2016a). At community colleges, PF has been one factor in drawing more focus to student success and colleges engaging in new practices to facilitate students' transitions from developmental education to credit-bearing courses (Jenkins, Wachen, Moore, & Shulock, 2012). Nevertheless, researchers still caution that campus changes may not necessarily be attributed to PF and instead may have still occurred in the absence of the policy (Dougherty et al., 2016a).

Despite these institutional responses, research suggests that PF has yet to produce positive impacts on two-year degree completions when taken in aggregate. Using a national dataset, Li and Kennedy (2018) found that stronger PF policies (such as those in Ohio) resulted in a decrease in associate degrees. Another national study found that PF did not increase the number of associate degree completions, and that when individual state effects were analyzed, PF policies resulted in lower completions in some states yet higher completions in other states (Tandberg, Hillman, & Barakat, 2014). In a study of Washington state, Hillman, Tandberg, and Fryar (2015) found that community colleges improved neither retention rates nor associate degree completions. A recent study found that Tennessee community colleges conferred fewer associate degrees than community colleges in other PF states (Hillman, Hicklin Fryar, & Crespín-Trujillo, 2018).

Prior research also suggests that PF increases the number of short-term certificates awarded (Hillman et al., 2015; Li & Kennedy, 2018). In a recent study, Hillman et al. (2018) found that certificates increased in Tennessee after the CCTA, although the authors did not distinguish between short- and medium-term certificates or consider the influence of funding premiums on underserved student enrollment. Increased emphasis on short-term certificates is particularly problematic because research has shown that the average wage increase for students who earn a short-term certificate is typically lower when compared to those who earn a longer-term certificate or an associate degree (Dadgar & Trimble, 2015; Jepsen, Troske, & Coomes, 2014; Marcotte, Bailey, Borkoski, & Kienzl, 2005). Labor market returns for short-term certificate holders are nearly identical to labor market returns for individuals who only obtain a high school diploma (e.g., Dadgar & Trimble, 2015). A recent study offers further insight regarding the importance of differentiating between certificates, finding that colleges respond to stronger PF policies by increasing short-term certificates but not medium-term certificates (Li & Kennedy, 2018).

Unintended Consequences and Weighted Metrics

PF has also been connected to several unintended consequences, such as the restriction of access for historically underrepresented students (Kelchen & Stedrak, 2016; Umbricht et al., 2017; Zumeta & Li, 2016). One of the easiest ways for a college to boost its retention and completion numbers is to recruit and admit the types of students who are most likely to graduate, which may lead to restricting entry among less academically prepared students (measured by high school GPA and standardized test scores). The students who may no longer meet admissions criteria due to greater institutional selectivity are more likely to be from racial/ethnic minority groups or low-income families (Gladieux & Swail, 1999). In the presence of PF, institutions have been found to restrict admissions, which disproportionately affects minority students (Lahr et al., 2014). Umbricht et al. (2017) examined the impact of PF in Indiana and found that four-year institutions subject to PF experienced decreases in acceptance rates, increases in students' average incoming ACT score, and decreases in the proportion of minority and low-income students. Kelchen and Stedrak (2016) found that two-year and four-year institutions subject to PF received less Pell Grant revenue, suggesting that institutions strategically recruited higher-income students.

On the other hand, PF may cause institutions to be more inclined to enroll historically disadvantaged students. Li and Zumeta (2016) found that in Ohio and Pennsylvania, policymakers suggested that four-year institutions could find ways to enroll the "most financially profitable student" (p. 16), specifically, students who were adult students, receiving Pell Grants, and/or from underrepresented racial/ethnic backgrounds. Focusing on a community college district in Texas, McKinney and Hagedorn (2017) reported that the average amount of performance-based funds accrued varied based on a community college's student characteristics. Asian students earned significantly more funds for their college relative to African American, Hispanic, and white students, younger students (19 years or younger) earned more relative to older students (20 to 24, and 25 or above), and Pell Grant recipients earned more than non-Pell Grant recipients.

Policymakers, including those in Tennessee, have reacted to unintended consequences of increased selectivity by incorporating weighted metrics (premiums) for certain student demographics shown to experience achievement gaps. A recent national study of four-year institutions has shown that premiums in PF policies are associated with increases in the proportion of low-income and Hispanic students and declines in the proportion of Black students (Gándara & Rutherford, 2018). In a different study of four-year institutions, premiums had no effect on the proportion of low-income students nor adult students age 25 or older, yet premiums increased the proportion of Black students and decreased the proportion of students age 24 and younger (Kelchen, 2018). Existing research on premiums focuses exclusively on four-year institutions and utilizes national data, which masks variation in policy designs across individual states.

No research exists on whether the inclusion of PF weighted metrics for community colleges counteracts the potential tendency of colleges to restrict admissions. As open-access institutions, community colleges are unable to restrict admissions in the same way as selective institutions. Yet in the context of PF, the financial rewards associated with improved completion rates may incentivize community colleges to diminish practices associated with their traditional mission of serving disadvantaged students, leading to a decreased emphasis on recruitment in impoverished areas and the provision of developmental education. Conversely, the inclusion of weighted metrics for disadvantaged student subpopulations may incentivize colleges to seek out students with characteristics that earn a premium in a PF policy. Thus, the purpose of this study is to explore whether community colleges have responded to the incentives outlined within the CCTA by increasing credential completions and the enrollment of low-income and adult students.

Conceptual Framework

The conceptual framework of this study is guided by the concept of a theory of action (Argyris & Schon, 1996) rooted in a resource dependence perspective (Pfeffer & Salancik, 1978). A theory of action is akin to a policy instrument to the extent that both mechanisms are designed to turn a policy goal into a concrete action (Dougherty & Reddy, 2013; McDonnell & Elmore, 1987). For states that implement PF, the policy goal is that colleges will improve their performance due to the provision of material incentives (Burke, 2002; Dougherty & Hong, 2006; Dougherty & Reddy, 2013). A theory of action associated with material incentives contends that colleges will seek to maximize their revenue by making a concentrated effort to improve their performance as long as the amount of the material incentive (e.g., state appropriations) is considered worthwhile (Burke, 2002). Because this particular theory of action is rooted in a resource dependence perspective (Pfeffer & Salancik, 1978), colleges' behaviors are strongly influenced by the degree to which they rely on resources from the external environment.

External resource providers can influence organizational behavior when the supplied resource is critical and not easily obtained from another source (Emerson, 1962). For public community colleges that rely heavily on state appropriations, a theory of action rooted in a resource dependence perspective suggests that PF may cause institutions to alter their behavior to ensure that they maintain or increase their share of a critical funding source (Harnisch, 2011). Rabovsky (2012) also noted that shifts in the state's resource allocation methods would lead to institutions adopting new strategies to improve institutional performance according to the funding criteria. However, performance management literature warns that the quickest, cheapest, and simplest responses to PF are more likely to occur than the intended results following the implementation of performance-based incentives (Andrews & Moynihan, 2002; Hillman, 2016).

Community colleges are more dependent on public funding than four-year institutions and may be especially susceptible to engaging in actions expected under a resource dependence perspective. Four-year institutions, particularly research universities, are positioned to secure funding from federal research grants, generate revenue through auxiliaries (e.g., housing, parking, dining services) and athletics, and recruit more out-of-state and international students who pay higher tuition (Hearn, 2006). State and local appropriations may account for less than 30% of four-year institutions' total revenue, but community colleges receive much of their revenue from public sources, with an average of 48% of their total revenue coming from state and local appropriations (American Association of Community Colleges, 2016).

Community colleges in Tennessee are uniquely positioned to respond to CCTA's design since they offer shorter-term programs that are more narrowly tailored to the vocational or transfer needs of their student populations. However, those same colleges may face capacity constraints that force them to focus more on their immediate responses to PF as opposed to longer-term institutional changes (Dougherty & Reddy, 2013). Due to these dynamics, we hypothesize that Tennessee community colleges will respond to the CCTA by (1) increasing the number and proportion of adult and low-income students and (2) increasing the number of short- and medium-term certificates awarded. However, we also anticipate that institutional efforts will be concentrated specifically on strategies to increase public funds in the short-term, leading to no observable increases in associate degrees.

Data

We constructed a panel dataset using college- and county-level data across academic years 2001-02 to 2014-15. Data were collected from the Integrated Postsecondary Education Data System (IPEDS), the College Scorecard, and the Census Bureau. IPEDS is an annual survey of U.S. colleges and universities that provide federal financial aid to their students and reports variables related to each institution's characteristics, financial information, enrollment, and graduation rates. The College Scorecard reports institutional-level data on college costs, financial aid and student loan debt, and student body characteristics. To be included in our sample, community and technical colleges must have been reported as a "degree-granting" institution in IPEDS with a "public" control (excluding private non-profit and for-profit colleges). Colleges were characterized as having a level of instruction of "two to four years" (excluding four years, and less than two years), with the highest degree offered being "associate degree" (as opposed to bachelor's, master's, or doctorate degrees). We excluded colleges that opened or closed during our years of observation, and our control groups remain stable across time.

Outcome Variables

We analyzed the total number and proportion of two student subpopulations (adult and low-income students). Credential completions by students within these two subpopulations earn institutions a 40% premium, according to Tennessee's performance funding formula. Specifically, we examined the following outcomes:

  • Each college's student subpopulation classified as adult, defined as the fall enrollment of students aged 25 years and older:

    • The number of adult students (logged)

    • The percent of adult students

  • Each college's student subpopulation classified as low-income, defined in our study as the percent of full-time, first-time degree-seeking college students receiving federal grant aid:

    • The number of low-income students (logged)

    • The percent of low-income students

We examined the number and percent of students to capture raw changes and the relative share of the identified student subpopulations. In IPEDS, the federal grant aid variable consists of the following sources of aid: Title IV Pell Grants, Supplemental Educational Opportunity Grants (SEOG), need- and merit-based educational assistance funds, and training vouchers allocated from other federal agencies (e.g., Veterans Administration, Department of Labor). Although the IPEDS variable for Pell Grant aid may have been a more accurate representation of low-income students, it was not reported before 2008-09. Thus, as a proxy for low-income students, we used the percent of full-time, first-time degree-seeking undergraduate students awarded federal grants; this variable was highly correlated with the percent of full-time, first-time degree-seeking undergraduate students awarded Pell Grants for the available years of data (R = 0.73).

As noted earlier, Tennessee's funding formula also allocates appropriations based on the raw count of each of the following levels of credentials:

  • Associate degrees

  • Certificates requiring fewer than 24 credit hours (short-term certificates)

  • Certificates requiring 24 or more credit hours (medium-term certificates)

We analyzed each of the aforementioned credentials as outcome variables (logged) and deliberately chose to examine the raw number of credentials awarded rather than a per-FTE measure because Tennessee's funding formula specifically incentivizes total credential numbers. To control for changes in completions due to enrollment numbers, we added a logged measure of total enrollment as an institutional control variable in our models. The CCTA funding formula has another metric—the combined number of associate degrees and long-term certificates per 100 FTE—but the long-term certificates per 100 FTE variable had a significant amount of missing data in IPEDS and could not be included as an outcome.

CCTA Policy Variable

The policy treatment in this study was the adoption of the CCTA. As described in the literature review, the CCTA was enacted in 2010, with 2011-12 being the first year that funding allocations were based on student outcomes specified in the CCTA funding formula. We coded a policy dummy variable equal to 1 for each Tennessee college in the 2010 adoption year and all subsequent years. To ensure robust findings and account for any delayed policy effects of the CCTA, we also used a one-year lag of the policy treatment variable. The observed impacts of a policy may not emerge until the first year of funding or later. This may be especially true for the total number of associate degrees conferred given that the average length of time to completion for full-time students is 3.3 years (Shapiro et al., 2016). On the other hand, the anticipation of an external change, such as a policy shock, may incentivize an institutional response well before the actual year the policy is adopted—this phenomenon has been described as an anticipatory change (Husig & Mann, 2010). Additionally, we utilized a variable capturing the duration of the CCTA in years, coded as 1 in 2010, 2 in 2011, and so forth. The inclusion of this variable accounts for the potential of a cumulative effect of the CCTA and allows for a gradual linear effect as indicated by the average annual change following CCTA policy adoption.
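The coding of these three policy variables can be sketched as follows. This is an illustrative example using hypothetical column names (year, ccta, ccta_lag, ccta_years), not the authors' actual dataset fields:

```python
import pandas as pd

# Illustrative slice of a college-year panel around the 2010 CCTA adoption.
panel = pd.DataFrame({"year": range(2007, 2015)})

# Adoption dummy: 1 in 2010 and every year after, for treated colleges.
panel["ccta"] = (panel["year"] >= 2010).astype(int)

# One-year lag of the treatment, allowing for delayed policy effects.
panel["ccta_lag"] = (panel["year"] >= 2011).astype(int)

# Duration in years: 1 in 2010, 2 in 2011, and so forth (0 before adoption).
panel["ccta_years"] = (panel["year"] - 2009).clip(lower=0)

print(panel.to_string(index=False))
```

The duration counter lets the model pick up a gradual, cumulative policy effect rather than a single level shift.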

Control Groups

We collected data on PF from the National Conference of State Legislatures, Kelchen (in press), Li and Kennedy (2018), and directly from funding model websites. To ensure the robustness of our findings, we incorporated multiple control groups to compare Tennessee community colleges against various counterfactuals. We created three non-PF control groups, which included community colleges within states that did not have a PF policy for two-year colleges during our sample years of 2001-2014. The first control group consisted of 99 two-year colleges in non-PF states belonging to the same higher education compact as Tennessee, the Southern Regional Education Board (SREB). These states are Alabama, Delaware, Georgia, Kentucky, Maryland, Mississippi, and West Virginia. Colleges within states that belong to the same higher education compact may be more similar to one another based on unobservable factors. The second control group consisted of 77 colleges in states that share a border with Tennessee (Alabama, Georgia, and Mississippi) and had not adopted a PF policy within the time period analyzed. The third control group consisted of a national sample of 345 community colleges in states without PF.

For models analyzing the student subpopulation outcomes, we constructed three additional control groups drawn from states that operated PF at some point during the years of this study. The fourth group consisted of 269 colleges in states whose PF policies did not include any premiums for underserved students. We hypothesize that the number of adult and low-income students at Tennessee colleges may be higher compared to colleges in states with PF but without premiums. The fifth control group consisted of 217 colleges in PF states that incorporated any type of premium except adult students, including for low-income, first-generation, or racial/ethnic minority students. The sixth and final control group encompassed 115 colleges that had PF premiums for any population except low-income students. The counterfactual is a set of colleges subject to PF policies without incentives for retaining or graduating the student subpopulations of interest. Table 1 lists the states within each control group.

Table 1. States in Control Groups


We first examined descriptive trends for the five outcome variables in our study. We graphed the average value for each outcome across all community colleges in the treatment group and control groups. In Figure 1, we display the total number and percent of adult students across all community colleges in Tennessee and control groups from 2001 to 2014. The number and percent of adult students increased across all groups from 2008 to 2010 (during the height of the Great Recession), with marked declines from 2011 to 2014. While the total number of adult students enrolled in 2014 was similar to pre-recession levels, the proportion of adult students decreased.

Figure 1. Adult Students Enrolled from 2001 to 2014

Next, we graphed the number and percent of first-time, full-time, low-income students, displayed in Figure 2. Across all control groups of colleges, raw counts of low-income students increased from 2006 to 2008, with more pronounced increases in 2009 and 2010. The upward trend is even more pronounced within Tennessee colleges. Total numbers and the percent of low-income students declined starting in 2010.

Additionally, we graphed the raw number of associate degrees, short-term certificates, and medium-term certificates, displayed in Figures 3, 4, and 5, respectively. In Figure 3, the number of associate degrees granted rose steadily in Tennessee and among the three non-PF control groups.

As shown in Figure 4, community colleges in Tennessee conferred fewer short-term certificates from 2001 to 2008 when compared to colleges in other

Figure 2. Low-Income Students Enrolled from 2001 to 2014

Figure 3. Associate Degree Completions from 2001 to 2014

states, and these credentials remained at stable levels until 2010. Short-term certificates show a noticeable spike from 2010 to 2011, when the CCTA began allocating performance funds for short-term certificates. Subsequently, Tennessee's short-term certificates declined in 2012 and remained stable until 2014, the last year that shorter-term certificates were counted under the formula. Among community colleges in SREB states and bordering states, short-term certificates increased steadily from 2001 to 2010, with declines in 2011 and 2012 and increases in 2013 and 2014. Among non-PF states in the national control group, however, short-term certificates followed a more gradual increase.

Figure 4. Short-Term Certificate Completions from 2001 to 2014

As shown in Figure 5, trends for medium-term certificates in Tennessee were stable before the adoption of the CCTA, even though Tennessee awarded far fewer of these certificates relative to other states. This trend changed with a drastic increase in the number of medium-term certificates awarded at Tennessee community colleges in 2011, followed by declines in subsequent years.

Figure 5. Medium-Term Certificate Completions from 2001 to 2014

Method

We employed a difference-in-differences (DID) approach to examine the impact of the adoption of the CCTA on the percent of adult community college students, the percent of low-income community college students, and credential completions. DID estimates are conducted using ordinary least squares (OLS) in repeated cross sections of data on units in treatment and control groups for multiple years before and after the policy intervention (Bertrand, Duflo, & Mullainathan, 2004). The DID design is a version of fixed-effects estimation using panel data (Angrist & Pischke, 2009).

Formally, our model can be described by the following:

Yict = α + β1 (treat) + β2 (post) + β3 (treat * post) + γi + ηt + Xict + εict,

where Y is the outcome for college i in county c in year t, α is the intercept, the variable treat represents the state of Tennessee, the variable post accounts for the adoption year of the CCTA and all years after, and the interaction of treat * post, captured by the parameter β3, is the average treatment effect or the difference-in-differences estimate of the causal effect of the CCTA on the outcome. γi represents college fixed effects, and ηt represents year fixed effects. College fixed effects help control for time-invariant characteristics such as mission, and year fixed effects help control for time trends that affect all colleges, such as the Great Recession. Two-way fixed effect models are useful in panel data analyses to control for unobserved factors that may be correlated with variables in the model (Wooldridge, 2010). Xict is a vector of time-varying college- and county-level control variables, and εict is the error term (Angrist & Pischke, 2009). We conducted a Wooldridge (2010) test, which suggested evidence of serial correlation in the error terms. All models include robust standard errors clustered at the OPEID level to help mitigate potential bias introduced by serial correlation (Bertrand et al., 2004; Drukker, 2003).
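As a rough illustration of this specification, a two-way fixed-effects DID model can be estimated on synthetic data with statsmodels. All variable names (college, year, treat, post, y) and the simulated effect size are assumptions for the sketch, not the authors' data or code:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Build a synthetic college-by-year panel with a known treatment effect.
rng = np.random.default_rng(0)
rows = []
for i in range(40):
    treated = int(i < 13)  # first 13 colleges stand in for Tennessee
    for year in range(2001, 2015):
        post = int(year >= 2010)
        # True DID effect of 0.25 on the (logged) outcome.
        y = (1.0 + 0.05 * (year - 2001) + 0.5 * treated
             + 0.25 * treated * post + rng.normal(0, 0.1))
        rows.append({"college": f"c{i}", "year": year,
                     "treat": treated, "post": post, "y": y})
df = pd.DataFrame(rows)

# treat and post are absorbed by the college and year fixed effects, so
# only the interaction enters; its coefficient is beta_3, the DID estimate.
res = smf.ols("y ~ treat:post + C(college) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["college"]})
print(round(res.params["treat:post"], 3))  # recovers roughly the true 0.25
```

Clustering the standard errors by college mirrors the paper's OPEID-level clustering, which guards against the serial correlation flagged by the Wooldridge test (Bertrand et al., 2004).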

The parallel trends assumption in difference-in-differences posits that changes in an outcome follow the same trend over time for all colleges, regardless of whether the colleges are subject to a treatment (Angrist & Pischke, 2009; Lance, Guilkey, Hattori, & Angeles, 2014). In other words, student enrollment proportions and completion trends would be the same at all colleges in the absence of any PF policy. Since there is no formal test of the parallel trends assumption, we graphed each outcome and visually inspected trends of the treatment group and two control groups in the years leading up to the adoption of the CCTA (see Figures 1 through 5). The parallel trends assumption appeared convincing for adult students, low-income students, associate degrees, and medium-term certificates. The exception was for short-term certificates, which appeared less compelling when comparing treated colleges with control colleges. We caution readers regarding interpretations of results pertaining to the impact of the CCTA on short-term certificates in Tennessee, particularly for the national control group of colleges.
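As a complement to visual inspection, one can compare the pre-period trend slopes of the two groups directly. The sketch below uses hypothetical group means, not the study's data; identical slopes with different levels is exactly the pattern the parallel trends assumption requires.

```python
def slope(xs, ys):
    # OLS slope of a simple linear fit, sufficient for a pre-trend check
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

pre_years = list(range(2001, 2010))                    # years before the CCTA
tn = [0.40 + 0.002 * (y - 2001) for y in pre_years]    # hypothetical TN means
ctrl = [0.35 + 0.002 * (y - 2001) for y in pre_years]  # hypothetical control means

# Levels differ, but the slopes match: consistent with parallel trends.
print(abs(slope(pre_years, tn) - slope(pre_years, ctrl)) < 1e-9)  # True
```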

Control Variables

Existing research suggests that a variety of student characteristics affect retention and completion at community colleges. We collected a series of college-level control variables that may be related to student enrollment demographics or credential completion. There are disparate completion rates across racial backgrounds, and Hispanic/Latino and Black students are less likely to complete associate degrees compared to white and Asian students (Bailey, Jenkins, & Leinbach, 2005; Dietrich & Lichtenberger, 2015; Feldman, 1993; Porchea, Allen, Robbins, & Phelps, 2010). Colleges serving greater proportions of female students, full-time students, younger students, and students from higher-income families tend to have higher retention and completion rates (Bailey, Calcagno, Jenkins, Kienzi, & Leinbach, 2005; Feldman, 1993; Fike & Fike, 2008; Porchea et al., 2010).

From IPEDS, we included the college's racial composition of students, specifically the percent of students enrolled in the fall term at all credential levels who were: Asian and Native Hawaiian or other Pacific Islander, Black/African American, Hispanic/Latino, and American Indian/Alaska Native. We also included the percent of all students enrolled at all credential levels who were female, part-time, and first-generation. When analyzing the credential completion outcomes, we added the percent of adult students who were 25 years of age or older, the percent of full-time, first-time degree-seeking community college students receiving federal grant aid, and the log of fall [End Page 310] enrollment to control for changes to completions due solely to enrollment numbers. To account for the price of attendance, we included the log of resident tuition and fees for full-time students, CPI-adjusted to 2014 dollars.

To capture local employment and labor force conditions that may influence community college enrollment and completions (Hillman & Orians, 2013), we used the county unemployment rate from the Local Area Unemployment Statistics data, Bureau of Labor Statistics (BLS). To account for specific counties with more (or fewer) Pell-eligible residents, we added the percent of the total county population who live below the poverty line, as collected from the Census Bureau Small Area Income and Poverty Estimates (SAIPE). We also included personal income per capita (logged) from the Bureau of Economic Analysis, to control for overall county wealth. We matched county-level data using the FIPS county codes reported by the IPEDS institutional directory information. Summary statistics are displayed in Table 2.
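The county-covariate merge described above can be sketched as follows. Column names (`unitid`, `fips`, `year`, `unemp_rate`) are illustrative placeholders, not actual IPEDS or BLS field names; the essential step is a many-to-one join of the college panel onto county-year records keyed by FIPS code and year.

```python
import pandas as pd

# Toy college panel with the county FIPS code reported by IPEDS.
colleges = pd.DataFrame({
    "unitid": [1001, 1002],
    "fips": ["47001", "47003"],
    "year": [2010, 2010],
})

# Toy county-year covariates (BLS LAUS-style unemployment rates).
county = pd.DataFrame({
    "fips": ["47001", "47003"],
    "year": [2010, 2010],
    "unemp_rate": [9.8, 10.4],
})

# Many colleges can share a county, so validate the join as many-to-one.
panel = colleges.merge(county, on=["fips", "year"], how="left", validate="m:1")
print(panel["unemp_rate"].tolist())  # [9.8, 10.4]
```

The same pattern extends to the SAIPE poverty rates and BEA per capita income series, each merged on the FIPS code and year.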

Table 2.

Summary Statistics

[End Page 311]

Limitations

This study is subject to several limitations. First, other events that took place around the same time as the CCTA could have affected our results. One example is the Tennessee Achieves (tnAchieves) program, which provided a total of $15.5 million in privately funded scholarships to 10,000 high school students entering one of the 13 community colleges or 27 technical colleges from 2008 to 2014. This program offered a last-dollar financial award after all other forms of financial aid were applied (tnAchieves, 2018). The number of students who filed FAFSAs increased under tnAchieves, which may have increased the number of students who qualified for federal grant aid and thereby boosted our measure of low-income student enrollment. To be eligible for the scholarship, students must enroll in one of the aforementioned colleges immediately after high school graduation. Therefore, we would expect no changes to the number of adult students in response to tnAchieves. In the Results section, we describe robustness checks that address this limitation.

Although it would be impossible to completely separate the effects of the CCTA from every institution- or state-level policy or program aimed at improving access or success in higher education, we aimed to isolate the effects of CCTA and improve the internal validity of this study by using a quasi-experimental design, a robust set of control variables, robustness checks, and various control groups. Our inclusion of year fixed effects should also mitigate the effects of time trends, such as the Great Recession's impact on community college enrollment changes.

Second, this study did not analyze other outcomes incentivized within the CCTA, such as the number of dual enrollment students or transfer students. An increase in the number of students participating in dual enrollment could have contributed to a greater proportion of younger, high-school aged students enrolling at Tennessee community colleges and thereby negatively affected the proportion of adult students enrolled at Tennessee community colleges. While these omitted outcomes are important and should be considered for future study, data limitations prevented their inclusion in the present study.

Results

Results from this study suggest that the adoption of the CCTA produced a decrease in the number and percent of adult students at Tennessee community colleges, but an increase in the number and percent of low-income students. Each student subpopulation is prioritized within Tennessee's PF formula, as the CCTA allocates 40% premiums to colleges when either adult or low-income students meet retention and completion milestones. Our results also suggest that the CCTA produced no changes to the number of [End Page 312] associate degrees conferred, even showing a decrease in one model. However, short-term certificates and medium-term certificates increased relative to all three control groups of community colleges that were not subject to any PF policy. Detailed results are explained below and reported in Tables 3 to 9, with each table displaying results for a single outcome. College- and year-fixed effects, as well as control variables, were present in all models, but to save space, control variable estimates are not shown in all tables.

Table 3 displays results for the logged number of adult community college students enrolled following the CCTA adoption in 2010 (inclusive of the adoption year), following a one-year lag (2011), and accounting for the potential of a cumulative effect using the duration of the CCTA in years. Columns 1-3 show the estimates of changes in the identified outcome for colleges in Tennessee when compared to colleges in the non-PF SREB states. Columns 4-6 show estimates relative to colleges in non-PF bordering states, and columns 7-9 show estimates compared to the national sample of colleges in non-PF states. The counterfactual in column 10 consists of colleges in states with PF but without premiums for underserved students. Finally, the control group in column 11 includes PF states with premiums for subpopulations other than adult students.
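The three treatment timings just described can be encoded as, for example (this is our illustrative reading of the text; in particular, the convention that the duration variable equals 1 in the adoption year is an assumption):

```python
# Hypothetical encoding of the three CCTA treatment timings for one college
# observed 2008-2014.
adoption = 2010
years = list(range(2008, 2015))

post = [int(y >= adoption) for y in years]           # adoption-year timing
post_lag1 = [int(y >= adoption + 1) for y in years]  # one-year lag (2011)
duration = [max(0, y - adoption + 1) for y in years] # cumulative years exposed

print(post)       # [0, 0, 1, 1, 1, 1, 1]
print(post_lag1)  # [0, 0, 0, 1, 1, 1, 1]
print(duration)   # [0, 0, 1, 2, 3, 4, 5]
```

Interacting each timing variable with the treatment indicator yields the three sets of estimates reported in each table.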

Results suggest that the number of adult students at Tennessee community colleges declined following the CCTA. Coefficients for logged outcomes can be interpreted in percentage terms by exponentiating the coefficient and subtracting one. Compared to SREB, bordering, and all non-PF states, adult student enrollment at Tennessee colleges following the CCTA was, on average, 16% [e^(-0.17) - 1 = -0.156], 18%, and 10% lower, respectively (columns 1-3). Contextualized, the mean number of adult students in Tennessee before 2010 was 2,494 (standard deviation of 1,253), so a college would expect to enroll 249 to 399 fewer students aged 25 and over during years 2010 to 2014. When we assume that the CCTA effect started in 2011, and when we assume that the CCTA produced yearly changes in adult students enrolled, results consistently show declines among Tennessee colleges.
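The conversion used throughout these results can be checked directly; the coefficients below are taken from the surrounding text.

```python
import math

# Percent change implied by a DID coefficient on a logged outcome: e^b - 1.
def log_to_pct(beta):
    return math.exp(beta) - 1

print(round(log_to_pct(-0.17), 3))  # -0.156, the ~16% adult enrollment decline
print(round(log_to_pct(0.35), 2))   # 0.42, the ~42% yearly certificate rise
```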

Compared to colleges exposed to PF policies that lacked premiums for underserved students, Tennessee colleges still enrolled 14% fewer adult students (Table 3, column 10). Interestingly, when compared to colleges subject to PF with premiums for underserved students other than adult students, Tennessee colleges enrolled a similar number of adult students (Table 3, column 11).

Table 4 displays estimates analyzing the number of adult students enrolled as a percentage of all undergraduate students. As indicated, community colleges in Tennessee saw a decline in the percent of adult students enrolled following the passage of the CCTA. The percent of adult students was 4 percentage points lower relative to SREB and bordering [End Page 313]

Table 3.

Number of Adult Students (Log) at Tennessee Community Colleges: Difference-in-Differences Estimates

[End Page 315] states, and the difference was 3 percentage points lower compared to the national sample of non-PF states. Contextualized, the mean percent of adult students at Tennessee colleges before 2010 was 0.41 (standard deviation of 0.06). Therefore, the average Tennessee community college would have experienced a decrease in the percent of students aged 25 and older from 41% to 37-38%. The CCTA duration variable indicates that the percent of adult students declined by an average of 2 percentage points each year that colleges were exposed to the CCTA.

Moreover, we find that despite Tennessee's 40% premium for adult students, colleges in the state still enrolled a smaller proportion of adult students (3 percentage points lower) compared both to colleges in PF states that lacked premiums and to colleges in PF states whose premiums did not include adult students (Table 4, columns 10-11).

Next, we explore estimates regarding low-income students (Table 5). The number of first-time, full-time students receiving federal grant aid at Tennessee community colleges increased after the CCTA policy change. Specifically, low-income student enrollment increased by 25% compared to SREB colleges and 25% compared to colleges in bordering states, whether we use the adoption year of the CCTA (2010) or the first funding year (2011). This amounts to a change, on average, from 401 to 501 low-income students at a given Tennessee college. Null effects are found when using the national control group of non-PF states (Table 5, columns 7-9). Yet estimates suggest that the inclusion of financial premiums for low-income students does affect the enrollment of these students. Compared to a PF policy without premiums, and to one without premiums specific to income, Tennessee colleges enrolled 17% and 31% more low-income students, respectively.

Table 6 displays estimates of the impact of the CCTA on the percent of first-time, full-time low-income students. Across all three non-PF control groups, Tennessee community colleges experienced a 6-percentage-point increase in low-income student enrollment in 2010 and later. When examining lagged results in 2011, this increase was 5 percentage points. The mean percent of low-income students among Tennessee colleges in our sample before 2010 was 0.43 (standard deviation of 0.12). Therefore, the average Tennessee college increased its percent of low-income students from 43% to about 48-49% following the adoption of the CCTA. The CCTA duration variable suggests that each year of the CCTA produced a 2-percentage-point increase in the percent of low-income students. Consistent with results on changes in the raw number of low-income students, the CCTA's premium for low-income students increased the enrollment of this subpopulation by 7 and 6 percentage points (Table 6, columns 10-11).

Tables 7, 8, and 9 display estimates from models examining the impacts of the CCTA on associate degrees, short-term certificates, and medium-term certificates, respectively. Table 7 reveals that the CCTA had no effect on the [End Page 316]

Table 4.

Percent Adult Students at Tennessee Community Colleges: Difference-in-Differences Estimates

[End Page 317]

Table 5.

Number of Low-Income Students (Log) at Tennessee Community Colleges: Difference-in-Differences Estimates

[End Page 318]

Table 6.

Percent Low-Income Students at Tennessee Community Colleges: Difference-in-Differences Estimates

[End Page 319] logged number of associate degrees awarded at Tennessee community colleges, except when compared to community colleges in SREB states, where results showed a 9% decline in associate degrees [e^(-0.09) - 1 = -0.086]. Relative to the remaining control groups of colleges in states not subject to PF, community colleges in Tennessee conferred the same number of associate degrees.

Results differ when we consider short-term certificates, reported in Table 8. After adopting the CCTA, community colleges in Tennessee saw a significant increase in the number of short-term certificates awarded. Compared to SREB states and bordering states with no PF, short-term certificates in Tennessee increased by 192% and by 249% in 2010-2014, respectively (Table 8, columns 1 and 4). When we considered lagged effects, short-term certificates also increased dramatically after 2010 relative to all three control groups. Contextualized, the average number of short-term certificates for each Tennessee community college before 2010 was 81, but this number increased by at least 156 and at most 302 across years 2011-2014. Calculated as a yearly increase, short-term certificates rose by 42% to 55% each year starting in 2010 (Table 8, column 3, e^(0.35) - 1 = 0.42; column 6, e^(0.44) - 1 = 0.55). Estimates relative to the national non-PF control group are not reliable due to uncertainty pertaining to the parallel trends assumption, but they are consistent with an institutional response of substantial increases in the number of short-term certificates in the presence of the CCTA.

The CCTA also produced increases in the number of medium-term certificates awarded. Compared to community colleges in SREB states, Tennessee community colleges increased their number of medium-term certificates by 134% during 2010-2014 (Table 9, column 1). For the average Tennessee community college, this equates to an increase of roughly 54 certificates over the pre-2010 mean of 40. Compared to colleges in bordering states and the national sample of non-PF states, Tennessee colleges conferred 129% and 144% more medium-term certificates from 2010 onwards, respectively (Table 9, columns 4 and 7, β = 0.83 and 0.89). When we considered lagged policy effects in 2011 and effects based on the duration of the CCTA, results were consistent.

We conducted robustness checks to assess our results under alternative specifications. First, we explored whether our results changed when we incorporated different years into our dataset. Rather than using 2001-2014, we conducted the same analyses using 2003-2014 and 2005-2014. The direction of the effects of the CCTA was consistent. Additionally, we conducted the same analyses on all five outcome variables using a two-year lag of the CCTA policy treatment variable and a one-year lag of the CCTA duration treatment variable. Results were consistent in both direction and statistical significance across all models.

To address the issue of the tnAchieves program being active in 2008-2014, as detailed in our Limitations section, we ran a robustness check for only [End Page 320]

Table 7.

Associate Degrees (log) at Tennessee Community Colleges: Difference-in-Differences Estimates

[End Page 321]

Table 8.

Short-Term Certificates (log) at Tennessee Community Colleges: Difference-in-Differences Estimates

[End Page 322]

Table 9.

Medium-Term Certificates (log) at Tennessee Community Colleges: Difference-in-Differences Estimates

[End Page 323] the years 2008-2014. Estimates for all significant coefficients are in the same direction as those reported in our main results (see Table 10 in the Appendix), supporting the conclusion that our findings for CCTA policy effects are likely to hold even in the absence of the tnAchieves program.

Discussion

This study focuses on the impact of the CCTA on the proportion of historically underserved students enrolled at community colleges and suggests mixed effects regarding the influence of the 40% premium for adult and low-income students. Drawing from our conceptual framework, we hypothesized that community colleges in Tennessee, which rely heavily on state funding, would respond to the incentives within the CCTA by increasing their proportion of low-income and adult students. Contrary to previous work examining four-year institutions and suggesting that PF policies prompt institutions to limit access for academically underprepared and low-income students (Umbricht et al., 2017; Zumeta & Li, 2016), we find that Tennessee community colleges increase their proportion of first-time, full-time low-income students after adopting the CCTA.

Our findings suggest that the CCTA's 40% premiums associated with graduating low-income students appear to be effective at ensuring that community colleges do not restrict their outreach efforts to less affluent students. Moreover, our study, which is the first to examine the impact of PF premiums at community colleges, is consistent with recent findings suggesting increases in the proportion of low-income students at four-year institutions exposed to PF premiums (Gándara & Rutherford, 2018). Because community colleges manifest the promise of educational opportunity and upward mobility by educating a disproportionate number of underserved students (Bailey, Jaggars, & Jenkins, 2015), the community college sector is a critical area of study when evaluating the efficacy of funding premiums designed to incentivize underrepresented student enrollment.

Community colleges in Tennessee appear to decrease the total number and proportion of adult students in response to the CCTA, even though older students have been shown to have higher graduation rates when compared to their younger peers (Calcagno, Crosta, Bailey, & Jenkins, 2007). One explanation for the decrease in adult students is that the economy was improving following the implementation of the CCTA, leaving fewer adult students seeking to return to a Tennessee community college regardless of whether the CCTA provides institutions with financial incentives to enroll them. This explanation is supported by previous work by Hillman and Orians (2013) identifying a negative relationship between local unemployment rates and community college enrollment. The authors found that potential students on the margin between attending community college and working are [End Page 324] more likely to attend community college during periods of higher unemployment. As the local economy continued to improve, then, adult student enrollment at local community colleges may have suffered. In addition, previous research has shown that adult students are more likely to enroll in online degree programs relative to their peers (Ortagus, 2017). Because online degree programs can remove many of the barriers facing time- and location-constrained students, the proliferation of online degree programs could help to explain the relative decrease in the number of adult students enrolled in community colleges within Tennessee.

Our study's results also reveal that the CCTA produces increases in short- and medium-term certificates. These findings are consistent with recent work showing increases in total certificates in Tennessee (Hillman et al., 2018) and increases in short-term certificates (albeit no changes in medium-term certificates) nationally following PF policies that tie a greater proportion of base funding to outcomes (Li & Kennedy, 2018). The state of Tennessee awarded fewer certificates relative to other states prior to implementing the CCTA. Given this dynamic, future research is needed to explore why Tennessee offered relatively few certificates prior to the implementation of the CCTA and whether increases in certificates in response to the CCTA can be replicated by other states that may be operating closer to capacity with respect to certificate production. We generally find no changes to associate degree completions, but prior work on Tennessee (Hillman et al., 2018) and on higher-stakes PF policies such as Ohio's (Li & Kennedy, 2018) found consistent declines in the number of associate degrees as a response to PF policies. While the CCTA did not achieve its intended goal of increasing the number of associate degrees conferred, it also does not appear to have harmed attainment by decreasing the number of students who completed associate degrees within the time period analyzed.

Our findings related to credential completions in Tennessee are also supported by our conceptual framework and by performance management literature suggesting that institutions respond to performance-based incentives in the quickest, cheapest, and simplest manner possible (e.g., by producing certificates). On average, shorter-term certificates do not have a positive effect on graduates' wages or their likelihood of employment (Dadgar & Trimble, 2015; Liu, Belfield, & Trimble, 2014), which suggests that Tennessee's increase in short- and medium-term certificates may not necessarily reflect a positive academic outcome for community college students. As states continue to consider whether to increase their reliance on PF as a mechanism to improve completion rates, we urge policymakers to consider weighting PF according to the labor market outcomes of earned credentials and to continue the practice of adding weighted premiums to incentivize the enrollment of historically underserved student subpopulations. [End Page 325]

Although the presence of PF premiums introduced in the CCTA appears to have a positive influence on low-income student enrollment at Tennessee community colleges, the details and dosage associated with PF premiums require further examination to fully understand the effectiveness of these premiums. Even though the same 40% premium was allocated for both adult and low-income student completions, the enrollment of these incentivized subgroups diverged after the implementation of the CCTA. We offer several explanations related to the contextual dynamics that could offer additional insight as to why adult student enrollment would decrease (i.e., improvements in the economy) while low-income student enrollment would increase (i.e., the presence of the Tennessee Achieves program) in response to the same PF premium. However, additional research is needed to further explore the design and dosage of performance funding formulas specific to individual state contexts, particularly in relation to equity metrics and premiums, as our study highlights that colleges respond to higher-stakes PF in divergent ways. [End Page 326]

Table 10.

Appendix. Robustness Check: Difference-in-Differences Estimates 2008–2014

[End Page 328]

Amy Y. Li

Amy Li is an assistant professor of higher education at the University of Northern Colorado. Her research focuses on higher education finance and public policy, specifically performance funding, promise programs, student loan debt, state appropriations, and policy adoption. She studies the impact of local and state policies on college access and completion and is particularly interested in outcomes for historically underrepresented populations.

Justin C. Ortagus

Justin C. Ortagus is an Assistant Professor of Higher Education Administration & Policy and Director of the Institute of Higher Education at the University of Florida. His research typically examines the influence of online education, community colleges, and various state policies on the opportunities and outcomes of historically underrepresented students.

Author contact: Amy Li, Department of Leadership, Policy, & Development, University of Northern Colorado. McKee Hall 418. Greeley, CO, 80639. amy.li@unco.edu

An earlier version of this manuscript was presented at the 2017 Association for Education Finance and Policy conference in Washington, DC. We wish to thank Shanika Harvey for her editorial assistance in preparing this manuscript.

References

American Association of Community Colleges. (2016). 2016 fact sheet. Washington, DC: American Association of Community Colleges.
Andrews, M., & Moynihan, D. P. (2002). Why reforms do not always have to 'work' to succeed: A tale of two managed competition initiatives. Public Performance & Management Review, 25(3), 282–297.
Angrist, J. D., & Pischke, J.-S. (2009). Mostly harmless econometrics: An empiricist's companion. Princeton, NJ: Princeton University Press.
Argyris, C., & Schon, D. A. (1996). Organizational learning II: Theory, methods and practice. Reading, MA: Addison-Wesley.
Bailey, T., Calcagno, J. C., Jenkins, D., Kienzi, G., & Leinbach, T. (2005). Community college student success: What institutional characteristics make a difference? (CCRC Working Paper No. 3). New York, NY: Community College Research Center.
Bailey, T. R., Jaggars, S. S., & Jenkins, D. (2015). Redesigning America's community colleges. Cambridge, MA: Harvard University Press.
Bailey, T., Jenkins, D., & Leinbach, T. (2005). What we know about community college lower-income and minority student outcomes: Descriptive statistics from national surveys. New York, NY: Community College Research Center.
Banta, T. W., Rudolph, L. B., Dyke, J. Van, & Fisher, H. S. (1996). Performance funding comes of age in Tennessee. The Journal of Higher Education, 67(1), 23–45.
Bertrand, M., Duflo, E., & Mullainathan, S. (2004). How much should we trust differences-in-differences estimates? The Quarterly Journal of Economics, 119(1), 249–275.
Burke, J. C. (2002). Funding public colleges and universities for performance: Popularity, problems, and prospects. Albany, NY: Rockefeller Institute Press.
Burke, J. C., & Minassians, H. P. (2003). Performance reporting: "Real" accountability or accountability "lite": Seventh annual survey 2003. Albany, NY: The Nelson A. Rockefeller Institute of Government.
Calcagno, J. C., Crosta, P. M., Bailey, T., & Jenkins, D. (2007). Stepping stones to a degree: The impact of enrollment pathways and milestones on community college student outcomes. Research in Higher Education, 48(7), 775–801.
Callahan, M. K., Meehan, K., Shaw, K. M., Slaughter, A., Kim, D. Y., Hunter, V. R., Lin, J., & Wainstein, L. (2017). Implementation and impact of outcomes-based funding in Tennessee. Philadelphia, PA: Research for Action. Retrieved from https://www.researchforaction.org/publications/implementation-impact-outcomes-based-funding-tennessee/
Dadgar, M., & Trimble, M. J. (2015). Labor market returns to sub-baccalaureate credentials: How much does a community college degree or certificate pay? Educational Evaluation and Policy Analysis, 37(4), 399–418.
Dietrich, C. C., & Lichtenberger, E. J. (2015). Using propensity score matching to test the community college penalty assumption. The Review of Higher Education, 38(2), 193–219.
Dougherty, K. J., & Hong, E. (2006). Performance accountability as imperfect panacea: The community college experience. In T. Bailey & V. S. Morest (Eds.), Defending the community college equity agenda (pp. 51–86). Baltimore, MD: Johns Hopkins University Press.
Dougherty, K. J., Jones, S. M., Lahr, H., Natow, R. S., Pheatt, L., & Reddy, V. (2014). Implementing performance funding in three leading states: Instruments, outcomes, obstacles, and unintended impacts (CCRC Working Paper No. 74). New York, NY: Community College Research Center.
Dougherty, K. J., Jones, S. M., Lahr, H., Natow, R. S., Pheatt, L., & Reddy, V. (2016a). Looking inside the black box of performance funding for higher education: Policy instruments, organizational obstacles, and intended and unintended impacts. The Russell Sage Foundation Journal of the Social Sciences, 2(1), 147–173.
Dougherty, K. J., Jones, S. M., Lahr, H., Natow, R. S., Pheatt, L., & Reddy, V. (2016b). Performance funding for higher education. Baltimore, MD: Johns Hopkins University Press.
Dougherty, K. J., Natow, R., Hare, R., Jones, S., & Vega, B. (2011). The politics of performance funding in eight states: Origins, demise, and change: Final report to Lumina Foundation for Education. New York, NY: Community College Research Center.
Dougherty, K. J., & Reddy, V. (2011). The impacts of state performance funding systems on higher education institutions: Research literature review and policy recommendations (CCRC Working Paper No. 37). New York, NY: Community College Research Center.
Dougherty, K. J., & Reddy, V. (2013). Performance funding for higher education: What are the mechanisms? What are the impacts? ASHE Higher Education Report, 39(2), 1–134.
Drukker, D. M. (2003). Testing for serial correlation in linear panel-data models. Stata Journal, 3(2), 168–177.
Emerson, R. M. (1962). Power-dependence relations. American Sociological Review, 27(1), 31–41.
Feldman, M. J. (1993). Factors associated with one-year retention in a community college. Research in Higher Education, 34(4), 503–512.
Fike, D. S., & Fike, R. (2008). Predictors of first-year student retention in the community college. Community College Review, 36(2), 68–88.
Gándara, D., & Rutherford, A. (2018). Mitigating unintended impacts? The effects of premiums for underserved populations in performance-funding policies for higher education. Research in Higher Education, 59(6), 681–703.
Gladieux, L., & Swail, W. (1999). Financial aid is not enough: Improving the odds for minority and lower-income students. In J. King (Ed.), Financing a college education: How it works, how it's changing (pp. 177–179). Phoenix, AZ: Onyx Press.
Harnisch, T. L. (2011). Performance-based funding: A re-emerging strategy in public higher education financing. Washington, DC: American Association of State Colleges and Universities.
Hearn, J. C. (2006). Alternative revenue sources. In D. M. Priest & E. P. St. John (Eds.), Privatization and public universities (pp. 87–108). Bloomington, IN: Indiana University Press.
Hillman, N. (2016). Why performance-based college funding doesn't work. Washington, DC: The Century Foundation. Retrieved from https://tcf.org/content/report/why-performance-based-college-funding-doesnt-work/
Hillman, N. W., Hicklin Fryar, A., & Crespín-Trujillo, V. (2018). Evaluating the impact of performance funding in Ohio and Tennessee. American Educational Research Journal, 55(1), 144–170.
Hillman, N. W., & Orians, E. L. (2013). Community colleges and labor market conditions: How does enrollment demand change relative to local unemployment rates? Research in Higher Education, 54, 765–780.
Hillman, N. W., Tandberg, D. A., & Fryar, A. H. (2015). Evaluating the impacts of "new" performance funding in higher education. Educational Evaluation and Policy Analysis, 37(4), 501–519.
Hüsig, S., & Mann, H.-G. (2010). The role of promoters in effecting innovation in higher education institutions. Innovation: Management, Policy and Practice, 12(2), 180–191.
Jenkins, D., Wachen, J., Moore, C., & Shulock, N. (2012). Washington state student achievement initiative policy study: Final report. New York, NY: CCRC-IHELP.
Jepsen, C., Troske, K., & Coomes, P. (2014). The labor-market returns to community college degrees, diplomas, and certificates. Journal of Labor Economics, 32, 95–121.
Kelchen, R. (2018). Do performance-based funding policies affect underrepresented student enrollment? The Journal of Higher Education, 89(5), 702–727.
Kelchen, R. (in press). Exploring the relationship between performance-based funding design and underrepresented student enrollment at community colleges. Community College Review.
Kelchen, R., & Stedrak, L. J. (2016). Does performance-based funding affect colleges' financial priorities? Journal of Education Finance, 41(3), 302–321.
Lahr, H., Pheatt, L., Dougherty, K. J., Jones, S. M., Natow, R. S., & Reddy, V. (2014). Unintended impacts of performance funding on community colleges and universities in three states (CCRC Working Paper No. 78). New York, NY: Community College Research Center.
Lance, P. M., Guilkey, D. K., Hattori, A., & Angeles, G. (2014). How do we know if a program made a difference? A guide to statistical methods for program impact evaluation. Chapel Hill, NC: MEASURE Evaluation.
Li, A. Y. (2018). Lessons learned: A case study of performance funding in higher education. Washington, DC: Third Way.
Li, A. Y., & Kennedy, A. I. (2018). Performance funding policy effects on community college outcomes: Are short-term certificates on the rise? Community College Review, 46(1), 3–39.
Li, A. Y., & Zumeta, W. (2016). Performance funding on the ground: Campus responses and perspectives in two states. New York, NY: TIAA Institute. Retrieved from https://www.tiaainstitute.org/public/pdf/rd_performance_funding_on_the_ground.pdf
Liu, V. Y. T., Belfield, C. R., & Trimble, M. J. (2014). The medium-term labor market returns to community college awards: Evidence from North Carolina. Economics of Education Review, 44, 42–55.
Marcotte, D. E., Bailey, T., Borkoski, C., & Kienzl, G. S. (2005). The returns of a community college education: Evidence from the National Education Longitudinal Survey. Educational Evaluation and Policy Analysis, 27, 157–175.
McDonnell, L. M., & Elmore, R. F. (1987). Getting the job done: Alternative policy instruments. Educational Evaluation and Policy Analysis, 9(2), 133–152.
McKinney, L., & Hagedorn, L. S. (2017). Performance-based funding for community colleges: Are colleges disadvantaged by serving the most disadvantaged students? The Journal of Higher Education, 88(2), 159–182.
National Conference of State Legislatures. (2015). Performance-based funding for higher education. Retrieved March 12, 2016, from http://www.ncsl.org/research/education/performance-funding.aspx
Natow, R. S., Pheatt, L., Dougherty, K. J., Jones, S. M., Lahr, H., & Reddy, V. (2014). Institutional changes to organizational policies, practices, and programs following the adoption of state-level performance funding policies (CCRC Working Paper No. 76). New York, NY: Community College Research Center.
Ness, E. C., Deupree, M. M., & Gándara, D. (2015). Campus responses to outcomes-based funding in Tennessee: Robust, aligned, and contested. Nashville, TN: Tennessee Higher Education Commission.
Ortagus, J. C. (2017). From the periphery to prominence: An examination of the changing profile of online students in American higher education. The Internet and Higher Education, 32, 47–57.
Pfeffer, J., & Salancik, G. R. (1978). The external control of organizations: A resource dependence perspective. New York, NY: Harper and Row.
Porchea, S. F., Allen, J., Robbins, S., & Phelps, R. P. (2010). Predictors of long-term enrollment and degree outcomes for community colleges. The Journal of Higher Education, 81(6), 680–708.
Rabovsky, T. M. (2012). Accountability in higher education: Exploring impacts on state budgets and institutional spending patterns. Journal of Public Administration Research and Theory, 22(4), 675–700.
Sanford, T., & Hunter, J. (2011). Impact of performance-funding on retention and graduation rates. Education Policy Analysis Archives, 19(33), 1–30.
Shapiro, D., Dundar, A., Wakhungu, P. K., Yuan, X., Nathan, A., & Hwang, Y. (2016). Time to degree: A national view of the time enrolled and elapsed for associate and bachelor's degree earners (Signature Report No. 11). Herndon, VA: National Student Clearinghouse Research Center.
Tandberg, D. A., Hillman, N. W., & Barakat, M. (2014). State higher education performance funding for community colleges: Diverse effects and policy implications. Teachers College Record, 116(120307), 1–31.
Tennessee Higher Education Commission. (2015b). Outcomes-based formula narrative. Nashville, TN: Author. Retrieved from https://www.tn.gov/content/dam/tn/thec/bureau/fiscal_admin/fiscal_pol/obff/1-Outcomes_Based_Formula_Narrative_-_for_website.pdf
tnAchieves. (2018). About us: A letter from our executive director. Knoxville, TN: Author. Retrieved from https://tnachieves.org/about-us/
Umbricht, M. R., Fernandez, F., & Ortagus, J. C. (2017). An examination of the (un)intended consequences of performance funding in higher education. Educational Policy, 31(5), 643–673.
Wooldridge, J. M. (2010). Econometric analysis of cross section and panel data. Cambridge, MA: MIT Press.
Zumeta, W., & Li, A. Y. (2016). Assessing the underpinnings of performance funding 2.0: Will this dog hunt? New York, NY: TIAA Institute. Retrieved from https://www.tiaainstitute.org/public/pdf/ti_assessing_the_underpinnings_of_performance_funding_2.pdf

Additional Information

ISSN: 1090-7009
Print ISSN: 0162-5748
Pages: 295–333
Launched on MUSE: 2019-09-20
Open Access: No