abstract

Each year, the U.S. Department of Education assigns all private nonprofit and for-profit colleges receiving federal financial aid dollars a financial responsibility score, which is designed to reflect an institution's overall financial stability. Yet no scholarly literature has examined financial responsibility scores or whether colleges respond to this high-stakes accountability policy. In this paper, I use data on financial responsibility scores from the 2006-07 through 2013-14 academic years to explore whether colleges respond to not receiving a passing score on the financial responsibility test by changing their revenues, expenditures, or student enrollment. I find little evidence that colleges that did not pass the test changed their fiscal priorities in any meaningful way.

introduction

Private colleges, both nonprofit and for-profit, that depend on tuition to fund the vast majority of institutional operations are under increasing financial stress. Less-selective private nonprofit colleges are disproportionately located in the Northeast and Midwest, where the number of high school graduates is expected to roughly hold steady during the next decade (Hauser and Tabitha 2014). The resulting fierce competition for students has forced many colleges without strong national reputations to offer substantial discounts from their posted tuition. A recent survey of 411 primarily small private nonprofit colleges found an average discount rate of 49% for first-time, full-time students, meaning that each dollar in tuition increases would generate only 51 cents in revenue (McCreary 2017). Research by Altringer and Summers (2015) found that tuition increases at private nonprofit baccalaureate institutions during the Great Recession were less effective in generating additional revenue than during the early 2000s recession. As a result, the credit rating firm Moody's recently [End Page 417] downgraded the higher education sector to a negative outlook, as expenses are predicted to grow faster than revenues (Moody's Investors Service 2017).

In addition to the stresses placed on private nonprofit colleges' budgets by discounted tuition, endowment values and alumni donations significantly decreased during the Great Recession. Although tuition-dependent colleges tend to have endowments insufficient to cover one year's expenditures, the endowments still serve as an important buffer against short-term shocks and as a measure of creditworthiness. These endowments were adversely impacted by the Great Recession, with the median college realizing a return of negative 3.3% in fiscal year 2008 and negative 19.1% in fiscal year 2009. Although endowment values have recovered in recent years, the median annual return between 2008 and 2017 was just 4.4%, which is lower than the recommended endowment spending rate of 5% per year (National Association of College and University Business Officers 2017).

For-profit colleges have faced their own set of financial challenges in the last decade, as shareholder demands for large profit margins have often clashed with government regulations and declines in enrollment. Members of Congress have raised a number of concerns about the practices and policies of for-profit colleges (United States Senate Committee on Health, Education, Labor, and Pensions 2012). New federal gainful employment rules briefly took effect in 2017, requiring most programs within for-profit colleges to have their graduates meet a debt-to-earnings metric in order for the programs to remain eligible for federal financial aid, although the Trump administration has delayed further implementation amid a negotiated rulemaking process (Kreighbaum 2017).

Another concern in the for-profit sector is declining enrollment, with the number of students at four-year for-profit colleges dropping by more than 25% between fall 2014 and fall 2017 (National Student Clearinghouse Research Center 2017). The students who attend for-profit colleges are more likely to have substantial financial need, limiting colleges' ability to generate sufficient tuition revenue. Fifty-nine percent of students at for-profit colleges have an expected family contribution of zero, meaning they have little to no ability to finance college without substantial grant or loan aid (Kelchen 2015). Private nonprofit and public four-year colleges have half as many students with a zero expected family contribution.

Private nonprofit and for-profit colleges must meet financial stability requirements in order for their students to be eligible to receive federal financial aid dollars. The U.S. Department of Education calculates a financial responsibility score for all private colleges each year, which is designed to reflect a college's financial stability and its ability to be a responsible steward of federal funds (Federal Student Aid n.d.). The financial responsibility score is calculated based on three [End Page 418] factors: a primary reserve ratio (representing available liquid assets), an equity ratio (representing the ability to borrow), and a net income ratio (representing profitability or excess revenue). These factors are combined on a scale ranging from -1.0 to 3.0. Colleges that score 1.5 or above pass the test, colleges scoring between 1.0 and 1.4 are in the oversight zone (their students can receive federal financial aid, but the colleges are subject to additional federal oversight), and colleges scoring 0.9 or below fail the test and have to pay for a letter of credit (a guarantee of funds equal to a portion of the college's previous federal aid amount) in order for their students to get federal aid dollars (Federal Student Aid 2014). More details about how the scores are calculated can be found in the Appendix.
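The three bands described above amount to a simple threshold lookup. A minimal sketch (the function name and Python rendering are mine, not the Department's):

```python
def classify_score(score: float) -> str:
    """Map a financial responsibility score (reported in tenths on a
    -1.0 to 3.0 scale) to its federal aid consequence: 1.5 or above
    passes, 1.0-1.4 is the oversight zone, 0.9 or below fails."""
    if score >= 1.5:
        return "pass"
    elif score >= 1.0:
        return "zone"   # aid continues under additional federal oversight
    else:
        return "fail"   # letter of credit required to keep aid eligibility
```

Because scores are reported in tenths, the three bands partition the full scale with no gaps.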

Financial responsibility scores provide an unusual opportunity to examine how private nonprofit and for-profit colleges respond to difficult financial circumstances and the threat of additional federal oversight or sanctions. The Great Recession provided an exogenous shock to many colleges by adversely affecting their financial situation, forcing them to react in order to maintain access to federal financial aid funds. Between the 2007-08 and 2008-09 academic years, the number of colleges failing the financial responsibility test rose from 394 to 486, while the number in the oversight zone slightly declined from 208 to 193 (author's calculations using data from Federal Student Aid).

The majority of private colleges rely heavily on federal financial aid dollars in order to attract and retain students, with 67% of students at private nonprofit and 80% of students at for-profit colleges receiving federal aid in the 2011-12 academic year (author's calculations using data from the National Postsecondary Student Aid Study). This provides an incentive for colleges to raise a low financial responsibility score in order to maintain eligibility for federal financial aid dollars and to avoid the negative publicity that can result from a low score. Colleges can respond to a low score by trying to increase net tuition revenue through higher enrollment or cuts to institutional financial aid, or by seeking additional support from donors. They can simultaneously work to reduce expenses for instruction, student services, or auxiliary activities such as housing, food service, and intercollegiate athletics.

How colleges respond to financial responsibility scores has taken on additional importance as state and federal policymakers consider new higher education accountability systems. Financial responsibility scores represent one of the few currently existing types of high-stakes federal accountability in higher education. Yet no published research has examined whether this type of accountability has any effects on institutional behavior. In this paper, I draw upon lessons learned from other federal, state, and private-sector accountability policies to explore the potential implications of financial responsibility scores for institutions and policymakers alike, and to discuss how the results could influence the development [End Page 419] of other accountability policies.

research questions

In this paper, I use institution-level financial responsibility scores for private nonprofit and for-profit colleges calculated by the U.S. Department of Education between the 2006-07 and 2013-14 academic years to examine whether colleges' fiscal priorities in future years were affected by their financial responsibility score. I examine whether colleges that did not receive a passing financial responsibility score exhibit different patterns in revenues, expenditures, or enrollment in the following two years than those that received a passing score.

literature on accountability and institutional behaviors

Little research has focused on whether federal or state accountability policies affect institutional financial priorities (Kelchen 2018). Many studies in the accountability literature focus on whether various policies affect outcomes such as completion rates or the number of graduates, but most do not consider potential mechanisms through which these policies could have their intended effects. This section briefly describes existing federal, state, and private accountability systems and the body of research on these topics, focusing on two potential mechanisms representing a change in institutional priorities—changes in revenues and expenditures in various parts of the college as well as changes in overall enrollment.

Financial responsibility scores are one of three high-stakes, outcomes-based federal accountability measures in higher education that are currently in place. The second is the gainful employment rules the U.S. Department of Education implemented in 2017, which sought to tie vocationally-oriented programs' federal aid eligibility to graduates' debt-to-earnings ratios; it is too early to see how colleges are responding, and the sanctions have been suspended for the time being (Kreighbaum 2017). The third measure is the three-year cohort default rate for subsidized and unsubsidized federal loans. Colleges face a loss of federal student loan eligibility if more than 40% of students who borrowed in a given cohort defaulted on that loan within three years of leaving college; if the default rate is over 30% for three consecutive cohorts, colleges lose access to all federal financial aid dollars. In 2017, six colleges were subject to the loss of all federal aid and four more were subject to the loss of student loan dollars (Federal Student Aid 2017).

Colleges have responded to the threat of federal sanctions regarding default rates in two ways. Some institutions, particularly community colleges, have decided to opt out of the federal student loan program in order to preserve access to the Pell Grant program. In the 2013-14 academic year, nearly 10% of community [End Page 420] college students nationwide were at institutions that did not participate in the federal student loan program (Cochrane and Szabo-Kubitz 2016). Additionally, there is some evidence that students at community colleges that opt out of federal loans may attempt and complete fewer credits than students who take federal loans (Wiederspan 2015). Colleges that have opted out of federal loans have higher rates of minority and low-income student enrollment, factors that are correlated with high default rates (Hillman and Jaquette 2014). Holding other factors constant, for-profit institutions, which rely on student loans for a large percentage of total revenue, are more likely to have a high default rate than nonprofit institutions (Hillman 2015).

Colleges have responded to default rate pressures in a variety of ways. Many colleges have worked to help students avoid defaulting on their loans by improving entrance and exit counseling or making students aware of income-sensitive repayment options (McKibben et al. 2014). Darolia (2013) found that for-profit colleges that had a default rate just high enough to face sanctions had lower enrollment levels than colleges with slightly lower default rates, although it is unclear whether this is a result of students responding to notice of a sanction or a change in institutional admissions practices. However, using a similar comparison, Kelchen (2018) generally did not find that nonprofit or for-profit colleges reacted to a high default rate by changing their posted tuition prices or living allowances in an effort to affect student borrowing. Some for-profit colleges have also been accused of encouraging students to go into forbearance on their loans, which allows students to make smaller or no payments for up to a year in cases of financial hardship while interest continues to accumulate, in order to avoid defaulting within the three-year window (United States Senate Committee on Health, Education, Labor, and Pensions 2012).

A key state-level accountability policy is performance-based funding (PBF), which is used in at least 35 states to tie a portion of total state appropriations for public institutions to outcome measures instead of the traditional methods of enrollment or historical allocations (Hillman, Fryar, and Crespin-Trujillo 2017). A number of researchers have examined whether PBF policies increase graduation rates or the number of graduates and have found mixed results across sectors, states, and durations of the policies (e.g., Hillman et al. 2017; Hillman, Tandberg, and Fryar 2015; Li and Kennedy 2018; Rutherford and Rabovsky 2014; Tandberg and Hillman 2014). But only two studies have examined whether these policies have influenced colleges' fiscal priorities. Rabovsky (2012) examined the impact of state-level PBF programs from 1999 to 2000 and found that PBF programs had relatively small, but statistically significant, effects on institutional spending patterns in the expected direction. He found that the percentage of educational expenditures allocated to research dropped by 0.34 percentage points in states [End Page 421] where PBF policies were in effect, while instruction's portion of total allocations increased by 0.89 percentage points. Kelchen and Stedrak (2016) showed that colleges subject to PBF received less federal Pell Grant revenue than colleges not subject to PBF; this suggestive evidence that colleges might be changing their recruiting practices is supported by interviews with college leaders by Dougherty and Natow (2015).

An important source of private-sector accountability designed to reflect a college's financial health is its credit rating from agencies such as Moody's Investors Service and Standard & Poor's. Credit ratings reflect factors such as market power, tuition revenue, assets, liabilities, and liquidity, which affect the interest rate and thus the cost of borrowing in private capital markets. For example, private colleges with the highest Standard & Poor's credit ratings in 2016 derived only one-fourth of their revenue from tuition, compared to more than 80 percent for colleges rated A or lower (Seman and Matsumori 2017). Some publicly-traded for-profit colleges have also faced downgrades due to concerns over student performance, enrollment numbers, and federal accountability (Weise 2014).

Given that financial responsibility scores and credit ratings are both measures of a college's financial strength, I compared the 270 private nonprofit colleges rated by Moody's in 2011 with their financial responsibility scores from that year. The correlation between the Moody's credit ratings and financial responsibility scores was just 0.038 (p=.533).1 This lends support to claims by the professional association representing private nonprofit colleges that the Department of Education was calculating financial responsibility scores in a way inconsistent with current accounting standards (National Association of Independent Colleges and Universities 2012).
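The comparison above is a Pearson correlation between numerically coded credit ratings and financial responsibility scores. A pure-Python sketch of the computation; the rating coding scheme and the sample values here are hypothetical illustrations, not the Moody's data used in the paper:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative only: Moody's ratings coded numerically (e.g., Aaa = 21,
# Aa1 = 20, ..., Baa3 = 9) paired with financial responsibility scores.
ratings = [21, 18, 15, 12, 9, 14, 17]
scores = [2.2, 3.0, 2.8, 3.0, 1.9, 2.5, 2.6]
r = pearson_r(ratings, scores)
```

A correlation near zero, as the paper reports for the actual 2011 data, indicates that the two measures capture largely unrelated dimensions of financial strength.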

Of the 57 colleges with the maximum financial responsibility score of 3.0, only three (Northwestern, Stanford, and Swarthmore) had the highest possible credit rating of Aaa. Twenty-five colleges with financial responsibility scores of 3.0 had credit ratings of Baa, seven to nine grades lower than Aaa. On the other hand, six of the 15 colleges with Aaa credit ratings (including Harvard and Yale) had financial responsibility scores of 2.2, well below the maximum possible score. While this suggests that credit ratings and financial responsibility scores focus on different aspects of a college's fiscal health, a limitation of this comparison is that all but three colleges that requested Moody's ratings in order to access the capital market had passing financial responsibility scores. The colleges most in danger of facing sanctions from the federal government are not requesting a credit rating, instead choosing to access capital through other channels (or not [End Page 422] at all).

sample, data, and methods

To answer my research questions, I constructed a panel dataset of colleges that had a financial responsibility score at least once between the 2006-07 and 2013-14 academic years. I then used a regression discontinuity estimation strategy to examine whether colleges altered their fiscal priorities in response to receiving a financial responsibility score that was not passing. The following section contains details about my sample, data, and methods.

Analytic Sample

My sample consists of all 4,073 private for-profit and nonprofit colleges in the 50 United States and the District of Columbia that were active during the period of the panel, were eligible to receive federal financial aid dollars, submitted data to the U.S. Department of Education's Integrated Postsecondary Education Data System (IPEDS), and had received a financial responsibility score at least once in this period. This includes 1,614 colleges that were ever nonprofit and 2,486 colleges that were ever for-profit, with a small number of colleges switching from nonprofit to for-profit or vice-versa during this period.2 A complicating factor in the analyses was that financial responsibility scores are assigned based on OPEIDs from Federal Student Aid, which can represent multiple campuses within a given system.3 If a system had multiple OPEIDs without each campus having a unique OPEID, I dropped the institutions that did not have their own OPEID because it is unclear which financial responsibility score corresponds with each campus.4

Data Sources

In this study, I used all eight years of financial responsibility scores for private nonprofit and for-profit institutions (2006-07 through 2013-14) that were publicly available from the Office of Federal Student Aid within the U.S. Department of Education as of the writing of this paper. Although Federal Student Aid has created financial responsibility scores using the current methodology since at least 1996, the office has declined to release earlier scores due to alleged data quality concerns. I filed a Freedom of Information Act request in 2014 to obtain additional years of data, but my appeal was denied in 2017. [End Page 423]

Table 1 summarizes the distribution of financial responsibility scores by institutional sector and year in the analytic sample. A score of 0.9 or below results in a college failing the test and needing a letter of credit to access federal funds, a score between 1.0 and 1.4 results in additional federal oversight, and a score of 1.5 or above results in passing the test. There was a spike in the percentage of private nonprofit institutions or systems failing the test in 2008-09 (which marked the beginning of the Great Recession), with the failure rate rising from 5% to [End Page 424]

Table 1. Distribution of financial responsibility scores by institutional control

Notes: (1) Scores of 0.9 or below represent a failing score, while scores between 1.0 and 1.4 represent additional Department of Education oversight over colleges. (2) The "N" reflects the number of unique parent institutions receiving financial responsibility scores, as some child institutions share scores with a parent. All means are weighted at the OPEID level.

10% before declining to previous levels by 2010-11. In the for-profit sector, the failure rate was between 10% and 15% in most years, with no increase during the recession.

The economic downturn of 2008-09 did little to affect the percentage of colleges with financial responsibility scores in the oversight zone, as between three and seven percent of nonprofit and for-profit colleges fit in this band. However, there was an increase in the percentage of nonprofit colleges with scores between 1.5 and 1.9, reflecting a barely passing score. In 2008-09, 15% of nonprofits fell in this range, while just 26% of colleges received a score between 2.5 and 3.0; in all other years, a majority of private nonprofits were in the highest score range. For-profit colleges were typically closer to the margin of passing, with between 20% and 30% of colleges scoring between a 1.5 and 1.9 in each year.

The majority of institutional demographic and financial measures used in this study came from the Integrated Postsecondary Education Data System (IPEDS). I included the number of full-time equivalent (FTE) undergraduate and graduate students enrolled and undergraduate headcount enrollment. Another characteristic of the student body is reflected by Pell Grant revenue per undergraduate FTE, a proxy for the percentage of financially needy students served.

The types of available revenue and expenditure measures (all considered on a per-FTE basis) varied somewhat between private nonprofit and for-profit institutions. Both sectors reported data on tuition and fee revenue and auxiliary enterprise revenue from sources such as housing, dining, and athletics, but I excluded auxiliary enterprises in the for-profit sector because few institutions reported positive values. Two additional measures for private nonprofit colleges were the amount of gift revenue received and the per-FTE endowment value at the end of the fiscal year.

Turning to expenditures, instructional expenditures were the only metric available for both types of colleges for the length of the panel. Student services (admissions, student affairs), academic support (libraries, curriculum development), and institutional support expenditures (information technology, administrative expenses) were examined separately for nonprofit colleges, while the three categories were only available as a combined category in the for-profit sector.5 I assumed that each institution sharing an OPEID with other colleges had the same per-FTE revenue and expenditure values if institution-level data were not available. Due to the presence of extreme values in the per-FTE revenue and expenditure data, I Winsorized the top and bottom one percent of observations, replacing them with the values at the 1st and 99th percentiles (Tukey 1962). [End Page 425]
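Winsorizing at the 1st and 99th percentiles can be sketched as follows. This is a simplified illustration using a nearest-rank percentile convention; statistical packages implement several percentile definitions, and the paper does not specify which was used:

```python
def winsorize(values, lower_pct=0.01, upper_pct=0.99):
    """Clamp observations below the lower percentile or above the upper
    percentile to those percentile values, preserving the sample size
    (unlike trimming, which drops the extreme observations)."""
    ranked = sorted(values)
    n = len(ranked)
    # Nearest-rank percentile cutoffs (one of several conventions).
    lo = ranked[int(lower_pct * (n - 1))]
    hi = ranked[int(upper_pct * (n - 1))]
    return [min(max(v, lo), hi) for v in values]
```

Unlike dropping outliers, this keeps every institution in the panel while limiting the influence of extreme per-FTE values.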

The final key measures from IPEDS involved institutional grant aid (merit-based or need-based scholarships). Nonprofit colleges reported grant aid in two categories: funded grants (which come from the university endowment, gifts, or other designated sources) and unfunded grants (which represent tuition that a college waives without receiving revenue from any source). For-profit colleges only had unfunded grant aid, as they do not have endowments or private donors. The tuition discount rate (for private nonprofit colleges only) is then calculated by dividing the amount of unfunded grants by gross tuition revenue (before any discounts are applied).
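The discount-rate calculation described above is a simple ratio; the dollar figures in this sketch are hypothetical:

```python
def tuition_discount_rate(unfunded_grants: float, gross_tuition: float) -> float:
    """Tuition discount rate: unfunded institutional grant aid divided
    by gross tuition revenue (before any discounts are applied)."""
    return unfunded_grants / gross_tuition

# Hypothetical example: a college posting $40 million in gross tuition
# that waives $18 million in unfunded grant aid has a 45% discount rate.
rate = tuition_discount_rate(18_000_000, 40_000_000)
```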

Table 2 contains summary statistics on the above institutional characteristics and financial measures for the 3,772 colleges in my analytic sample that were active in the 2013-14 academic year, broken down by private nonprofit or for-profit status. During the period covered by the panel, a substantial percentage of colleges in the analytic sample received at least one financial responsibility score that subjected them to additional sanctions. Although over 95% of colleges in both sectors passed the test in at least one year, 24% of nonprofit and 45% of for-profit colleges either failed or were in the oversight zone at least once. About 14% of private nonprofit colleges failed the test at least once and 18% were in the oversight zone at least once, compared to 33% and 19% among for-profit colleges. The percentage of colleges that faced federal sanctions suggests that the fear of facing sanctions is real for many less-selective colleges, but there still may be important differences in institutional characteristics and resources among these institutions.

Methods

I used regression discontinuity techniques to examine whether colleges that did not receive a passing financial responsibility score reacted by changing institutional financial characteristics or enrollment in different ways than colleges that passed. For the purposes of this analysis, I combined the failing (scores of 0.9 or below) and oversight zone (scores between 1.0 and 1.4) categories due to a similar pattern of results across the two categories and a relatively small percentage of colleges in the oversight zone relative to failing.6 I then considered the time period between the fiscal year used to calculate the score and when colleges and other stakeholders would respond. Financial responsibility scores have not been released to the public until one to two years after the end of the fiscal year represented in the score.7 For example, the scores for the 2011-12 academic year [End Page 426]

Table 2. Summary statistics of the analytic sample (2013-14 academic year)

Sources: Federal Student Aid data (financial responsibility scores), Integrated Postsecondary Education Data System (all other measures).

Notes: (1) Standard errors are clustered to account for multiple institutions sharing the same OPEID and financial responsibility score. (2) Not all categories are available for both for-profit and nonprofit colleges. (3) All financial values were Winsorized at the 1st and 99th percentiles. (4) The tuition discount rate reflects the percentage of tuition dollars a college gives back to students through grants not funded by the endowment or gifts.

[End Page 427] (fiscal year 2012) were released in February 2014.

I used two separate lag periods in my model to account for uncertainty regarding exactly when responses to a college's financial responsibility score would occur. I estimated my models using both a one-year and two-year lag between the year used to calculate the score and the year in which outcomes were measured. For example, financial responsibility scores from 2008-09 were matched up with outcomes from both 2009-10 (one-year lag) and 2010-11 (two-year lag). Colleges have to submit their financial data to the federal government within nine months of the end of their fiscal year in order to calculate a score, so they may know their score in order to affect some operations in the following fiscal year. However, a two-year lag period would likely be more plausible for metrics such as enrollment and gifts that are influenced by agents other than the institution.
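The lag matching described above can be illustrated with a small helper; the function name and the year-labeling convention (identifying an academic year by its starting calendar year, e.g., 2008 for 2008-09) are mine:

```python
def match_outcome_years(score_year_start: int, lags=(1, 2)):
    """Pair the academic year used to calculate a financial
    responsibility score with the academic years in which outcomes
    are measured under one- and two-year lags."""
    return {
        lag: f"{score_year_start + lag}-{(score_year_start + lag + 1) % 100:02d}"
        for lag in lags
    }
```

For a 2008-09 score, this pairs outcomes from 2009-10 (one-year lag) and 2010-11 (two-year lag), matching the example in the text.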

I estimated models separately for nonprofit and for-profit institutions for several reasons. First, nonprofit and for-profit institutions have different fiscal incentives that could affect revenues and expenditures differently. While both types of institutions may seek to maximize total revenue, for-profit colleges must also seek to maximize shareholder return by reducing total expenditures. Second, the financial responsibility scores are calculated somewhat differently for the two types of institutions, meaning that nonprofit and for-profit institutions may reallocate resources differently to score higher in future years. It is also possible that for-profit colleges, with centralized governance structures and year-round operating schedules, may be able to react more quickly than private nonprofit colleges with strong shared governance principles and a traditional academic year calendar. Finally, the two types of colleges use different accounting standards and report different revenue and expenditure categories.

The analytic model was the following, with separate regressions by sector:

Outcome_it = B0 + B1*FinFlag_i,t-k + SUM(p=1 to 3) G_p*(Score_i,t-k - 1.5)^p + SUM(p=1 to 3) D_p*FinFlag_i,t-k*(Score_i,t-k - 1.5)^p + a_i + u_t + e_it

where Outcome reflects the enrollment and financial measures of interest, as previously described. The key independent variable is FinFlag, which represents whether a college's financial responsibility score was below the passing threshold of 1.5 in the score year (one or two years before the outcome year). Since there is debate in the empirical literature regarding both the order of polynomial terms to control for in regression discontinuity studies and the proper bandwidth from the threshold, I used multiple specifications regarding k-order polynomials. In my preferred specification, I generally followed methods used by Darolia (2013) and added first, second, and third-order polynomials for the financial responsibility score (recentered around 1.5), as well as interactions between the distance from the threshold and the below-1.5 indicator to allow for a nonlinear flexible form. However, [End Page 428] other research has questioned whether higher-order polynomials are necessary (e.g., Gelman and Imbens 2014; Gelman and Zelizer 2015). As a result, I ran all models with second-order through fourth-order polynomials as a robustness check. Results for the other polynomial forms were generally similar and are not presented here, but are available from the author upon request.

My models were estimated using institutional and year fixed effects, with standard errors clustered at the OPEID level to account for multiple colleges within the same system sharing the same financial responsibility score. I also estimated models without institutional fixed effects (instead including state and Carnegie classification fixed effects), but I chose institutional fixed effects for this analysis because they do not require assuming zero correlation between unobserved institutional characteristics and the error term. Using random effects with state, Carnegie classification, and year fixed effects presupposes a correlation of zero, which is inconsistent with what the data suggest. Although the standard errors were slightly larger with institutional fixed effects, the general patterns of the results in the random effects models were similar.8 Additionally, all financial measures were adjusted to 2013 dollars using the Consumer Price Index.

I also ran my models using multiple bandwidths in order to explore how sensitive the results would be to different specifications. My primary specifications used all colleges, with less weight given to colleges farther from the threshold through the use of polynomial terms, following Cellini, Ferreira, and Rothstein (2010) and Darolia (2013). As a robustness check, I also used score bandwidths between 0 and 2.5 (capturing about 45% of all observations at nonprofit colleges and 55% at for-profit colleges) and between 1 and 2 (about 15% of all observations at nonprofits and 30% at for-profits) in addition to the full bandwidth from -1 to 3.

I conducted two checks to determine whether regression discontinuity models were appropriate for this analysis. Following McCrary (2008), I first examined the density of the running variable by showing a histogram of the density of financial responsibility scores around the 1.5 threshold for a passing score. As Figure 1 shows, there is a sharp jump in the distribution of colleges right at the threshold. This is likely due to the Department of Education's longstanding policy of automatically placing some colleges on heightened cash monitoring due to financial responsibility concerns without assigning them a score. If the types of colleges that failed the financial responsibility test without receiving a score were different from those that failed after receiving a score, this should result in sizable differences in the characteristics of colleges on either side of the passing threshold. [End Page 429]
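The density check can be illustrated with a crude bin comparison in the spirit of McCrary (2008). This sketch (hypothetical names, not the paper's code) counts observations in narrow bins on either side of the cutoff; a large imbalance signals the kind of jump visible in the score distribution:

```python
import numpy as np

def density_jump(scores, threshold=1.5, width=0.1):
    """Count observations in narrow bins just below and just above the
    cutoff. A large imbalance suggests bunching in the running variable
    (here driven by colleges that never receive a score)."""
    s = np.asarray(scores, dtype=float)
    below = np.sum((s >= threshold - width) & (s < threshold))
    above = np.sum((s >= threshold) & (s < threshold + width))
    return below, above

below, above = density_jump([1.42, 1.45, 1.50, 1.52, 1.55, 1.58])
```

The formal McCrary test fits local polynomials to the binned density on each side; the raw counts here are only the first step of that diagnostic.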

Figure 1. Financial responsibility score distribution

As a second check, I tested for the continuity of the outcome metrics in the same year the financial responsibility scores were determined. For example, this would examine the relationship between not passing the financial responsibility test in 2013-14 and financial metrics from 2013-14. As the score is not finalized until the end of the fiscal year, no relationship should be present. I tested this by using the same regression model as described above, but with no lag.

The results of the regressions (shown in Table 3 for each of the three bandwidths) raise some concerns about the two measures of enrollment (total FTEs and undergraduate headcount), as several specifications show that colleges that did not pass the financial responsibility test had lower enrollment than colleges that passed. However, as only one of the financial metrics is statistically significant at p<.05 across any of the bandwidths, there do not appear to be large baseline differences in financial characteristics between colleges that passed and those that did not, in spite of the jump in the density of the financial responsibility score distribution right at the passing threshold of 1.5. This suggests that the regression discontinuity results are likely valid.

Limitations

My data do not fully capture a small number of institutions that merged, closed during the period of the study, or withdrew from federal financial aid programs (and hence the IPEDS dataset), and these institutions were more likely to have a failing financial responsibility score during this period. For example, all seven private nonprofit colleges whose last observed financial responsibility score was in 2010 had received a failing score at least once in the previous five years. Some [End Page 430]

Table 3. Tests for continuity of pre-treatment outcome measures

Sources: Federal Student Aid (financial responsibility score), IPEDS (all others).

Notes: (1) These regressions test if not receiving a passing financial responsibility score (1.5) in a given year is associated with financial metrics in the same year--before the score is known or released.

(2) All financial values are inflation-adjusted into 2013 dollars using the Consumer Price Index and were Winsorized (trimmed) at the 1st and 99th percentiles.

(3) * signifies p<.10, ** signifies p<.05, and *** signifies p<.01.

(4) Regression estimates control for first, second, and third-order polynomials of the distance from the 1.5 score threshold and interactions with that distance and a binary variable at 1.5 as well as including institution and year fixed effects.

(5) Standard errors are clustered to account for multiple institutions sharing the same OPEID and financial responsibility score.

(6) Data on gift revenue are from 2007 forward (all other metrics are 2006 forward).

[End Page 431] colleges also automatically fail the financial responsibility test without being assigned a score if they fail to meet past performance standards, such as not submitting audited financial statements, and are then placed on heightened cash monitoring sanctions that are similar to financial responsibility sanctions (Federal Student Aid 2014). As of December 2015, 337 colleges were under heightened cash monitoring due to financial responsibility scores (author's calculations using Federal Student Aid data), and many of these institutions did not receive a financial responsibility score during the period of my panel. Finally, the reporting of financial responsibility scores at the OPEID level instead of by UnitID in [End Page 432]

Table 4. RD estimates of colleges' responses to not passing the financial responsibility test (score below 1.5 in the given year)

Sources: Federal Student Aid (financial responsibility score), IPEDS (all others).

Notes:

(1) These regressions test whether not receiving a passing financial responsibility score (below 1.5) in a given year is associated with enrollment and financial metrics in following years, using one-year and two-year lags.

(2) All financial values are inflation-adjusted into 2013 dollars using the Consumer Price Index and were Winsorized (trimmed) at the 1st and 99th percentiles.

(3) * signifies p<.10, ** signifies p<.05, and *** signifies p<.01.

(4) Regression estimates control for first, second, and third-order polynomials of the distance from the 1.5 score threshold and interactions with that distance and a binary variable at 1.5 as well as including institution and year fixed effects.

(5) Standard errors are clustered to account for multiple institutions sharing the same OPEID and financial responsibility score.

(6) Data on gift revenue are from 2007 forward (all other metrics are 2006 forward).

some cases obscures the financial strengths and weaknesses of individual institutions. Clustering standard errors by OPEID helps account for this situation, but scores for individual campuses within a system would be preferable.

results

I examined whether not passing the financial responsibility test (by either failing outright or being placed in a zone of additional oversight) was associated with different patterns in student enrollment or financial characteristics. The results across three different bandwidths and one-year and two-year lags can be found in Panel A of Table 4 for private nonprofit colleges and Panel B of Table 4 for the for-profit college sector. [End Page 433]

In general and across bandwidths, private nonprofit colleges did not respond in systematic ways to receiving a low financial responsibility score. There were no significant differences in enrollment based on whether a college passed, which is in a sense not surprising because these scores can typically only be found in a large spreadsheet on an obscure portion of the Department of Education's website. Although students may not know about these scores, it is still possible that colleges could try to increase enrollment in an effort to increase revenue; however, I found no evidence of that happening. There was also no evidence that colleges that did not pass had different amounts of per-student Pell Grant revenue, suggesting no changes to the colleges' socioeconomic mix.

Nonprofit colleges that did not receive a passing score received slightly less revenue from auxiliary enterprises, with no differences in tuition revenue (the largest revenue source for the majority of colleges). There was a statistically significant increase in gift revenue for the full sample using a one-year lag ($1,301, p<.05). However, this did not persist with a two-year lag, suggesting at most a fleeting increase in gifts after not passing the financial responsibility test. There were no statistically significant differences at p<.05 in expenditures by passing status, suggesting that colleges were either unable or unwilling to cut costs in response to failing the test outright or being placed under additional oversight.

Turning to institutional grant aid, there were again few statistically significant differences between colleges that passed and those that did not. The lack of significance for funded grant aid is not too surprising, as colleges have to raise funds to increase this type of aid instead of choosing to forgo revenue. There were no significant results at p<.05 for unfunded grant aid, although the coefficients were consistently positive, suggesting that colleges that did not pass were not trying to recruit students through additional grant aid. There was one statistically significant result for the tuition discount rate, which declined slightly using a two-year lag and the full sample.

Among for-profit colleges (Panel B of Table 4), the results were almost entirely insignificant. Despite the general perception that for-profit colleges are more agile than nonprofit colleges and are able to respond to external pressures more quickly (e.g., Deming, Goldin, and Katz 2012), I found little evidence that for-profit colleges responded to not passing the financial responsibility test by changing student enrollment, revenues, or expenditures. This surprising finding implies that for-profit colleges were either not concerned about financial responsibility scores or that paying for a letter of credit was a better option than changing their day-to-day operations. [End Page 434]

conclusion

Financial responsibility scores are assigned to all private nonprofit and for-profit institutions that wish to receive federal financial aid dollars and are designed to make colleges at risk of closing for financial reasons post collateral before receiving federal funds. This longstanding process reflects one of the few existing federal accountability efforts that directly tie financial aid eligibility to a measure of institutional performance, as colleges that do not pass the test outright are subjected to additional oversight or required to post a letter of credit with the federal government in order to receive federal funds. Yet no prior academic research has examined financial responsibility scores, let alone whether the distribution of these scores affects institutional behaviors.

I used financial responsibility scores from the 2006-07 through 2013-14 academic years to examine whether not receiving a passing financial responsibility score affects nonprofit and for-profit colleges' revenue, expenditure, or student enrollment patterns in the following years. In general, I found that neither nonprofit nor for-profit colleges changed their behaviors in any substantive way after not passing. This could be a result of colleges not viewing financial responsibility scores as high a priority as other pressures, such as meeting enrollment targets, or it could reflect colleges' inability to change their priorities over a relatively short period of time.

This represents the first effort to examine the implications of financial responsibility scores for private nonprofit and for-profit institutions, and as such several future studies are needed. Future work should carefully consider the cases of institutions that merged or closed as a result of receiving failing financial responsibility scores in one or more years, although they were included in this analysis as long as they continued to receive a score. In the case of buyouts or mergers, it is important to consider how the acquiring institution's financial responsibility score is affected by the acquired institution's score. Qualitative research in this area would be particularly helpful in understanding how receiving a low financial responsibility score affected colleges' decisions regarding their future, as well as providing insights about how colleges choose to strategically allocate their resources.

Additional research is necessary to determine the extent to which the financial responsibility scores determined by the U.S. Department of Education are reasonable proxies for an institution's overall financial health, particularly given the lack of correlation between Moody's credit scores and financial responsibility scores for private nonprofit colleges. This lends support to claims by the professional association representing private nonprofit colleges that the Department of Education was incorrectly calculating scores while making appeals difficult (National Association of Independent Colleges and Universities 2012). Future work should also explore the extent to which institutional behaviors are influenced by [End Page 435] changes in credit ratings, as these changes directly affect the cost of borrowing and thus have the potential to change institutional priorities.

Robert Kelchen

Robert Kelchen is Assistant Professor, Department of Education Leadership, Management and Policy, at Seton Hall University.

APPENDIX. calculating financial responsibility scores

The U.S. Department of Education calculates financial responsibility scores for all private nonprofit and for-profit colleges on an annual basis, using data from the prior fiscal year. (Public colleges have to submit financial statements each year, but are considered financially responsible on account of their support from a government entity.) Private nonprofit colleges have nine months from the end of their fiscal year to provide the data, while for-profits have six months (Federal Student Aid 2014). Since colleges have different fiscal year start and end dates, the fiscal year ending closest to June 30 is used. A college's financial responsibility score is calculated using three measures, which vary slightly between private nonprofit and for-profit colleges (United States Government Publishing Office n.d.). The three measures are the primary reserve ratio, the equity ratio, and the net income ratio.

The primary reserve ratio (30% of the score for proprietary institutions, 40% for private nonprofit institutions) is designed to reflect the amount of available liquid assets should a college run into the need for additional capital. It is calculated by dividing "adjusted equity" (defined as total net equity less intangible and physical assets plus long-term debt and liabilities) by total expenses for proprietary institutions and by dividing "expendable net assets" (defined as unrestricted and certain temporarily restricted net assets less intangible and physical assets plus long-term debt and liabilities) by total expenses for private nonprofit colleges.

The equity ratio (40% of the score for both proprietary and private nonprofit institutions) reflects a college's ability to borrow additional funds. It is calculated for proprietary colleges by dividing "modified equity" (total net equity less intangible assets and certain accounts receivable) by "modified assets" (total net assets less intangible assets and certain accounts receivable) and for nonprofit colleges by dividing "modified net assets" (total assets less intangible assets and certain accounts receivable) by the modified assets measure used for proprietary colleges.

Finally, the net income ratio (30% of the score for proprietary institutions, 20% for private nonprofit institutions) represents profitability or excess revenue. It is calculated as net income before taxes divided by total revenues for proprietary colleges and the change in unrestricted net assets in the last year divided by total unrestricted revenue for private nonprofit colleges. All of the measures used to calculate the net income ratio are directly reported on colleges' financial statements, while the primary reserve and equity ratios require additional calculations.

A college can score between -1 and 3 on each of the three measures (outlying values are trimmed), which are then combined using the above weights. Scores are rounded to the nearest one-tenth of a point before being made publicly available. Colleges that score 1.5 or above are considered financially responsible without any qualifications and can access federal funds. Colleges scoring between 1.0 and 1.4 are considered financially responsible and can access federal funds for up to three years, but are subject to additional Federal Student Aid oversight of their financial aid programs. If a college does not improve its score within three years, it will no longer be considered financially responsible. Colleges scoring 0.9 or below are not considered financially responsible and must submit a letter of credit and be subject to additional cash monitoring oversight to gain access to funds (Federal Student Aid 2014). A college can submit a letter of credit equal to 50% of all federal student aid funds received in the prior year and be deemed financially responsible, or it can submit a letter equal to 10% of all funds received and gain access to funds while still not being fully considered financially responsible.
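The weighting, trimming, and rounding described above can be sketched in Python. This is a simplified illustration only: the regulatory conversion of the raw ratios into -1 to 3 strength factors is omitted, and all names are hypothetical.

```python
def composite_score(strength, for_profit):
    """Combine the three strength factors (each already on the -1 to 3
    scale) using the sector-specific weights described above, then round
    to the nearest tenth, as the Department of Education does."""
    weights = ({'primary_reserve': 0.30, 'equity': 0.40, 'net_income': 0.30}
               if for_profit else
               {'primary_reserve': 0.40, 'equity': 0.40, 'net_income': 0.20})
    # Trim outlying strength factors to the allowed -1 to 3 range.
    trimmed = {k: max(-1.0, min(3.0, v)) for k, v in strength.items()}
    return round(sum(weights[k] * trimmed[k] for k in weights), 1)

def passing_status(score):
    """Map a composite score to the three zones described in the text."""
    if score >= 1.5:
        return 'pass'   # financially responsible, no qualifications
    if score >= 1.0:
        return 'zone'   # additional oversight for up to three years
    return 'fail'       # letter of credit required

def letter_of_credit(prior_year_aid, full_responsibility=True):
    """A failing college posts 50% of prior-year federal student aid to be
    deemed financially responsible, or 10% for provisional access."""
    return prior_year_aid * (0.5 if full_responsibility else 0.1)
```

For example, a nonprofit with strength factors of 2.0, 1.0, and 0.5 receives 0.4(2.0) + 0.4(1.0) + 0.2(0.5) = 1.3, placing it in the additional-oversight zone.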

references

Altringer, Levi, and Jeffrey Summers. "Is College Pricing Power Pro-Cyclical?" Research in Higher Education 56, no. 8 (2015): 777-792.
Cellini, Stephanie R., Fernando Ferreira, and Jesse Rothstein. "The Value of School Facility Investments: Evidence from a Dynamic Regression Discontinuity Design." The Quarterly Journal of Economics 125, no. 1 (2010): 215-261.
Cochrane, Debbie, and Laura Szabo-Kubitz. "States of Denial: Where Community College Students Lack Access to Federal Student Loans." Oakland, CA: The Institute for College Access and Success, 2016.
Darolia, Rajeev. "Integrity Versus Access? The Effect of Federal Financial Aid Availability on Postsecondary Enrollment." Journal of Public Economics 106 (2013): 101-114.
Deming, David J., Claudia Goldin, and Lawrence F. Katz. "The For-Profit Postsecondary School Sector: Nimble Critters or Agile Predators?" Journal of Economic Perspectives 26, no. 1 (2012): 139-164.
Federal Student Aid. "2014-2015 Federal Student Aid Handbook." Washington, DC: U.S. Department of Education, 2014.
Federal Student Aid. "Three-Year Official Cohort Default Rates for Schools." Last modified September 27, 2017. Accessed February 9, 2018. http://www2.ed.gov/offices/OSFAP/defaultmanagement/cdr.html.
Federal Student Aid. n.d. "Financial Responsibility Composite Scores." Accessed February 9, 2018. https://studentaid.ed.gov/about/data-center/school/composite-scores.
Gelman, Andrew, and Guido Imbens. "Why High-Order Polynomials Should Not be Used in Regression Discontinuity Designs." Cambridge, MA: National Bureau of Economic Research Working Paper 20405, 2014.
Gelman, Andrew, and Adam Zelizer. "Evidence on the Deleterious Impact of Sustained Use of Polynomial Regression on Causal Inference." Research & Politics 2, no. 1 (2015). doi:10.1177/2053168015569830.
Hauser, William J., and Tabitha M. Bailey. "Projections of Education Statistics to 2022: Forty-First Edition." National Center for Education Statistics Report 2014-051. Washington, DC: U.S. Department of Education, 2014.
Hillman, Nicholas W. "Cohort Default Rates: Predicting the Probability of Federal Sanctions." Educational Policy 29, no. 4 (2015): 559-582.
Hillman, Nicholas W., Alisa H. Fryar, and Valerie Crespin-Trujillo. "Evaluating the Impact of Performance Funding in Ohio and Tennessee." American Educational Research Journal, 2017. doi:10.3102/0002831217732951.
Hillman, Nicholas W., and Ozan Jaquette. "Opting Out of Federal Student Loan Programs: Examining the Community College Sector." Paper presented at the Association for Education Finance and Policy annual conference, 2014.
Hillman, Nicholas W., David A. Tandberg, and Alisa H. Fryar. "Evaluating the Impacts of 'New' Performance Funding in Higher Education." Educational Evaluation and Policy Analysis 37, no. 4 (2015): 501-519.
Jaquette, Ozan, and Edna E. Parra. "Using IPEDS Data for Panel Analyses: Core Concepts, Data Challenges, and Empirical Applications." In Higher Education: Handbook of Theory and Research (Vol. 29), edited by Michael B. Paulsen. Dordrecht, the Netherlands: Springer, 2014.
Kelchen, Robert. "Do High Cohort Default Rates Affect Student Living Allowances and Debt Burdens? An Institutional Analysis." Working paper, 2018.
Kelchen, Robert. "Financial Need and Income Volatility among Students with Zero Expected Family Contribution." Journal of Student Financial Aid 44, no. 3 (2015): 179-201.
Kelchen, Robert. "Higher Education Accountability." Baltimore, MD: Johns Hopkins University Press, 2018.
Kreighbaum, Andrew. "DeVos Allows Career Programs to Delay Disclosure to Students." Inside Higher Ed. Last modified July 3, 2017. Accessed February 6, 2018. https://www.insidehighered.com/news/2017/07/03/education-department-announces-new-delays-gainful-employment.
Kelchen, Robert, and Luke J. Stedrak. "Does Performance-Based Funding Affect Colleges' Financial Priorities?" Journal of Education Finance 41, no. 3 (2016): 302-321.
Li, Amy Y., and Alec I. Kennedy. "Performance Funding Policy Effects on Community College Outcomes: Are Short-Term Certificates on the Rise?" Community College Review 46, no. 1 (2018): 3-39.
McCrary, Justin. "Manipulation of the Running Variable in the Regression Discontinuity Design: A Density Test." Journal of Econometrics 142, no. 2 (2008): 698-714.
McCreary, Katy Hopkins. "Private College Tuition Discounts Hit Historic Highs Again." National Association of College and University Budget Officers. Last modified May 15, 2017. Accessed February 6, 2018. http://www.nacubo.org/About_NACUBO/Press_Room/Private_College_Tuition_Discounts_Hit_Historic_Highs_Again.html.
McKibben, Bryce, Matthew La Rocque, and Debbie Cochrane. "Protecting Colleges and Students: Community College Strategies to Prevent Default." Washington, DC: Association of Community College Trustees and The Institute for College Access and Success, 2014.
Moody's Investors Service. "US Higher Education Sector Outlook Revised to Negative as Revenue Growth Prospects Soften." Last modified December 5, 2017. Accessed February 6, 2018. https://www.moodys.com/research/Moodys-US-higher-education-sector-outlook-revised-to-negative-as--PR_376587.
NACUBO-Commonfund Study of Endowments. "Public NCSE Tables." 2017. Accessed February 6, 2018. http://www.nacubo.org/Research/NACUBO-Commonfund_Study_of_Endowments/Public_NCSE_Tables.html.
National Association of Independent Colleges and Universities. "Report of the NAICU Financial Responsibility Task Force." Washington, DC: Author, 2012.
National Student Clearinghouse Research Center. "Current Term Enrollment Estimates--Fall 2017." Last modified December 19, 2017. Accessed February 6, 2018. https://nscresearchcenter.org/current-term-enrollment-estimates-fall-2017/.
Rabovsky, Thomas. "Accountability in Higher Education: Exploring Impacts on State Budgets and Institutional Spending Patterns." Journal of Public Administration Research and Theory 22 (2012), 675-700.
Rutherford, Amanda, and Thomas Rabovsky. "Evaluating Impacts of Performance Funding Policies on Student Outcomes in Higher Education." The ANNALS of the American Academy of Political and Social Science 655, no. 1 (2014): 185-208.
Seman, Jamie L., and Jessica A. Matsumori. "U.S. Not-For-Profit Private Universities Fiscal 2016 Median Ratios: A Stable Sector Despite Uncertainties." San Francisco, CA: Standard and Poor's, 2017.
Tandberg, David A., and Nicholas W. Hillman. "State Higher Education Performance Funding: Data, Outcomes, and Policy Implications." Journal of Education Finance 39, no. 3 (2014): 222-243.
Tukey, John W. "The Future of Data Analysis." The Annals of Mathematical Statistics 33, no. 1 (1962): 1-67.
Student Assistance General Provisions. Title 34, Subtitle B, Chapter VI, §668.15. Accessed July 15, 2016. http://www.ecfr.gov/cgi-bin/text-idx?rgn=div5&node=34:3.1.3.1.34.
United States Senate Committee on Health, Education, Labor, and Pensions. "For-Profit Higher Education: The Failure to Safeguard the Federal Investment and Ensure Student Success." Washington, DC: Government Printing Office, 2012.
Weise, Karen. "S&P Says Weakened For-Profit Colleges Have a Grim Future." Bloomberg BusinessWeek. Last modified August 27, 2014. Accessed February 6, 2018. http://www.businessweek.com/articles/2014-08-27/weak-student-outcomes-threaten-for-profit-schools-says-s-and-p.
Wiederspan, Mark. "Denying Loan Access: The Student-Level Consequences When Community Colleges Opt Out of the Stafford Loan Program." Economics of Education Review 51 (2015): 79-96.

Footnotes

1. Additional details are available upon request from the author.

2. Because my analyses are separated by institutional type, a small number of colleges appear in both analyses but in different years.

3. For more details on how to treat OPEIDs for analyses using IPEDS data, see Jaquette and Parra (2014).

4. A complete list of which campuses and systems were excluded for not having unique OPEIDs is available upon request from the author.

5. Beginning in the 2013-14 academic year, for-profit colleges are required to separately report student services, academic support, and institutional support expenditures. I collapsed those expenditures into the previous combined category in my analyses.

6. Robustness checks run separately by whether a college failed or was in the additional oversight zone are available upon request from the author.

7. This was examined using news coverage from the Chronicle of Higher Education's archives.

8. Full results are available upon request from the author.

Additional Information

ISSN: 1944-6470
Print ISSN: 0098-9495
Pages: 417-439
Launched on MUSE: 2018-11-10
Open Access: No