Abstract

In this paper we explore the community college (institutional) effect on student outcomes in the nation’s largest public two-year higher education system—the California Community College system. We investigate whether there are significant differences in student outcomes across community college campuses after adjusting for observed student differences and potential unobserved determinants that drive selection. To do so, we leverage a unique administrative dataset that links community college students to their K–12 records in order to control for key student inputs. We find meaningful differences in student outcomes across California’s community colleges, after adjusting for differences in student inputs. We also compare college rankings based on unadjusted mean differences with college rankings adjusted for student inputs. Our results suggest that policymakers wishing to rank schools based on quality should adjust such rankings for differences in student-level inputs across campuses.

Keywords

community colleges, college quality, transfer

Identifying college quality has been a key element of the Obama administration’s efforts to increase accountability in higher education. In 2013, the White House launched the College Scorecard with the goal of providing students and their families information about the “cost, value, and quality” of specific colleges in order to make more informed decisions (U.S. Department of Education 2015). Beyond transparency, the administration is also pushing for performance-based funding in higher education (White House 2013). Specifically, President Obama’s proposal aims, by 2018, to tie federal aid to a rating system of colleges based on affordability, student completion rates, and graduate earnings.

These ratings have generated much discussion, including skepticism about the quality of the data used for the ratings and about whether, as University of California president Janet Napolitano put it, “criteria can be developed that are in the end meaningful” (Anderson 2013). Policymakers have acknowledged the host of issues involved in developing the accountability metrics and have solicited feedback on the college ratings methodology.

Among the many critiques of the rating system is the question of whether it is reasonable to compare institutions that differ considerably in their institutional goals and the student populations they serve. Some have noted that even if scorecard rankings are adjusted for institutional or individual differences across campuses, biases will still favor elite institutions and institutions that serve more traditional college students (Gross 2013). Relatedly, others worry that a rating system, particularly one tied to performance, is “antithetical” to the open access mission of community colleges (Fain 2013).

The idea of performance-based accountability may be novel in higher education, but in K–12 it has been at the heart of both federal and state accountability systems, which developed, with varying success, structures to grade K–12 schools on a variety of performance measures. Long before state and federal accountability systems took hold, school leaders and the research community were preoccupied with understanding the unique effects of schools on individual outcomes. Nearly fifty years after the Coleman Report, many scholarly efforts have been made to isolate the specific contribution of schools to student outcomes, controlling for individual and family characteristics.

Several studies since this canonical report, which concluded that differences between K–12 schools account for only a small fraction of differences in pupil achievement, find that school characteristics explain less than 20 percent of the variation in student outcomes, though one study concludes that as much as 40 percent is attributable to schools, even after taking into account students’ family background (Startz 2012; Borman and Dowling 2010; Rumberger and Palardy 2005; Rivkin, Hanushek, and Kain 2005; Goldhaber et al. 2010). In higher education, however, research on school effects has focused primarily on college selectivity or has been constrained by existing aggregate data and small samples.

In this paper, we explore the community college (institutional) effect on student outcomes in the nation’s largest public two-year higher education system—the California Community College system. We seek to know whether differences in student outcomes across community college campuses are significant after adjusting for observed student differences and potential unobserved determinants that drive selection. Additionally, we ask whether college rankings based on unadjusted mean differences across campuses provide meaningful information. To do so, we leverage a unique administrative dataset that links community college students to their K–12 records to control for key student inputs.

Results show that differences in student outcomes across the 108 California Community Colleges in our sample, after adjusting for differences in student inputs, are meaningful. For example, our lower-bound estimates show that going from the 10th to 90th percentile of campus quality is associated with a 3.68 (37.3 percent) increase in student transfer units earned, a 0.14 (20.8 percent) increase in the probability of persisting, a 0.09 (42.2 percent) increase in the probability of transferring to a four-year college, and a 0.08 (26.6 percent) increase in the probability of completion. We also show that college rankings based on unadjusted mean differences can be quite misleading. After adjusting for differences across campuses, the average school rank changed by over thirty ranks. Our results suggest that policymakers wishing to rank schools based on quality should adjust such rankings for differences in student-level inputs across campuses.

BACKGROUND

Research on college quality has focused largely on more selective four-year colleges and universities, and on the relationship between college quality and graduates’ earnings. The reasons students want to attend elite private and public universities are sound. More selective institutions appear to have a higher payoff in terms of persistence to degree completion (Alon and Tienda 2005; Bowen, Chingos, and McPherson 2009; Small and Winship 2007; Long 2008), graduate or professional school attendance (Mullen, Goyette, and Soares 2003), and earnings later in life (Black and Smith 2006; Hoekstra 2009; Long 2008; Monks 2000). However, empirical work on the effect of college quality on earnings is more mixed (Brand and Halaby 2006; Dale and Krueger 2002; Hoekstra 2009; Hoxby 2009).

The difficulty in establishing a college effect results from the nonrandom selection of students into colleges of varying quality (Black and Smith 2004). Namely, the characteristics that lead students to apply to particular colleges may be the same ones that lead to better postenrollment outcomes. Prior work has addressed this challenge largely through conditioning on key observable characteristics of students, namely, academic qualifications. To more fully address self-selection, Stacy Dale and Alan Krueger (2002, 2012) adjust for the observed set of institutions to which students submitted an application. They argue that the application set reflects students’ perceptions, or “self-revelation,” about their academic potential (2002); students who apply to more selective colleges and universities do so because they believe they can succeed in such environments. They find relatively small differences in outcomes between students who attended elite universities and those who were admitted but chose to attend a less selective university. Jesse Cunha and Trey Miller (2014) examine institutional differences in student outcomes across Texas’s thirty traditional four-year public colleges. Their results show that controlling for student background characteristics (race, gender, free lunch, SAT score, and so on), the quality of high school attended, and application behavior significantly reduces the mean differences in average earned income, persistence, and graduation across four-year college campuses. However, recent papers that exploit a regression discontinuity in the probability of admission find larger positive returns to attending a more selective university (Hoekstra 2009; Anelli 2014).

Community colleges are the primary point of access to higher education for many Americans, yet research on quality differences between community colleges has been scant. The multiple missions and goals of community colleges have been well documented in the academic literature (Rosenbaum 2001; Dougherty 1994; Grubb 1991; Brint and Karabel 1989). Community colleges have also captured the attention of policymakers concerned with addressing workforce shortages and the overall economic health of the nation (see The White House 2010). The Obama administration identified community colleges as key drivers in the push to increase the stock of college graduates in the United States and to raise the skills of the American workforce. “It’s time to reform our community colleges so that they provide Americans of all ages a chance to learn the skills and knowledge necessary to compete for the jobs of the future,” President Obama remarked at a White House Summit on Community Colleges.

The distinct mission and open access nature of community colleges and the diverse goals of the students they serve make it difficult to assess differences in quality across campuses. First, it is often unclear which outcomes should actually be measured (Bailey et al. 2006). Moreover, selection into community colleges may differ from selection into four-year institutions. Nevertheless, community college quality has been a key component of the national conversation about higher education accountability. This paper is not the first to explore institutional quality differences among community colleges. A recent study explores variation in success measures across North Carolina’s fifty-eight community colleges and finds that, conditional on student differences, colleges were largely indistinguishable from one another in degree receipt or transfer coursework, save for the differences between the very top and very bottom performing colleges (Clotfelter et al. 2013). Other efforts have looked at the role of different institutional inputs as proxies for institutional quality. In particular, Kevin Stange (2012) exploits differences in instructional expenditures per student across community colleges and finds no impact on student attainment, degree receipt, or transfer. This finding is consistent with Juan Calcagno and his colleagues (2008), who nevertheless identify several other institutional characteristics that do influence student outcomes. Specifically, larger enrollment, more minority students, and more part-time faculty are associated with lower degree attainment and lower four-year transfer rates (Calcagno et al. 2008).

In this paper, we explore institutional effects of community colleges in the state with the largest public two-year community college system, using a unique administrative dataset that links students’ K–12 data to postsecondary schooling at community college.

Setting

California is home to the nation’s largest public higher education system, including its 112-campus community college system. Two-thirds of all California college students attend a community college. The role of community colleges as a vehicle of human capital production was the cornerstone of California’s 1960 Master Plan for Higher Education, which stipulated that the California community college system would admit “any student capable of benefiting from instruction” (State of California 1960).1 Over the years, the system has grown and its schools have been applauded for remaining affordable, open access institutions. However, the colleges are also continually criticized for producing weak outcomes, in particular low degree receipt and transfer rates to four-year institutions (Shulock and Moore 2007; Sengupta and Jepsen 2006).

Several years before Obama’s proposed college scorecard, California leaders initiated greater transparency and accountability in performance through the Student Success Act, signed into law by Governor Brown in 2012. Among the components of this act is an accountability scorecard, the Student Success Scorecard, that tracks several key dimensions of student success: remedial course progression rate; persistence rates; completion of a minimum of thirty units (roughly equivalent to one year of full-time enrollment); sub-baccalaureate degree receipt and transfer status; and certificate, degree, or transfer among career and technical education (CTE) students. This scorecard is not focused on comparing institutions but rather on performance improvement over time within institutions. Nevertheless, policymakers desire critical information about the effectiveness of the postsecondary system to improve human capital production in the state and to increase postsecondary degree receipt.

In 2013, the community college system in California (CCC) served more than 2.5 million students from a tremendous range of demographic and academic backgrounds. California’s community colleges are situated in urban, suburban, and rural areas of the state, and their students come from public high schools that are both among the best and among the worst in the nation. California is an ideal state to explore institutional differences at community colleges because of the large number of institutions present, and because of the larger governance structure of the CCC system and its articulation to the state’s public four-year colleges. Moreover, the diversity of California’s community college population reflects the student populations of other states in the United States and the mainstream public two-year colleges that educate them. Given the diversity of California’s students and public schools, and the increasing diversity of students entering the nation’s colleges and universities,2 we believe that other states can learn important lessons from California’s public postsecondary institutions.

RESEARCH DESIGN

To explore institutional differences between community colleges, we use an administrative dataset that links four cohorts of California high school juniors to the community college system. These data were provided by the California Community College Chancellor’s Office and the California Department of Education. Because California does not have an individual identifier that follows students from K–12 to postsecondary schooling, we linked all transcript and completion data for four first-time freshmen fall-semester cohorts (2004–2008), ages seventeen to nineteen, enrolled at a California community college with the census of California eleventh-grade students with standardized test score data. The match, performed on name and birth date, high school attended, and cohort, initially captured 69 percent of first-time freshmen ages seventeen through nineteen enrolled at a California community college, a rate consistent with similar matches to K–12 data conducted by the California Community College Chancellor’s Office.3

The California Community Colleges form an open access system, one in which any student can take any number of courses at any time, including, for example, while enrolled in high school or during the summer before college for those who intend to start as first-time freshmen at a four-year institution. In addition, community colleges serve multiple goals, including facilitating transfer to four-year universities, awarding sub-baccalaureate degrees and certificates, providing career and technical education and basic skills instruction, and supporting lifelong learning. We restrict the sample for our study to first-time freshmen of traditional age at the community college. We built cohorts of students who started in the summer or fall within one year of graduating high school, who attempted more than two courses (six units) in their first year, and who had complete high school test and demographic information. This sample contains 254,865 students across 108 California community college campuses.4
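To make the cohort construction concrete, the sketch below applies these restrictions to a hypothetical linked student-level file. It is an illustration only: the column names (first_term, hs_grad_year, units_attempted_y1, and so on) are placeholders we introduce here, not the variable names in the actual Chancellor’s Office or Department of Education files.

```python
import pandas as pd

# Hypothetical linked K-12 / community college file; all column names are illustrative.
students = pd.read_csv("linked_students.csv")

# First-time freshmen who start in the summer or fall within one year of high school graduation.
starts_on_time = (students["start_year"] - students["hs_grad_year"]) <= 1
starts_summer_fall = students["first_term"].isin(["summer", "fall"])

# Attempted more than two courses (more than six units) in the first year.
enough_units = students["units_attempted_y1"] > 6

# Complete eleventh-grade test and demographic information.
complete_info = students[["cst_math", "cst_english", "female", "race_eth", "parent_ed"]].notna().all(axis=1)

analysis_sample = students[starts_on_time & starts_summer_fall & enough_units & complete_info].copy()
print(len(analysis_sample), "students across", analysis_sample["college_id"].nunique(), "colleges")
```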

Measures

We measure four outcomes intended to capture community college success in the short term, through credit accumulation and persistence into year two, as well as through degree-certificate receipt and four-year transfer. First, we measure how many transferrable units a student completes during the first year. This includes units taken at any community college that are transferrable to California’s public four-year universities (the University of California and California State University systems). Second, we measure whether a student persists to the second year of community college. This outcome indicates whether a student attempts any units in the fall semester after the first year at any community college in California. Third, we measure whether a student ever transfers to a four-year college. Using National Student Clearinghouse data that the CCC Chancellor’s Office linked with its own data, we are able to tell whether a student transferred to a four-year college at any point after attending a California community college. Last, we measure degree-certificate completion at a community college. This measure indicates whether a student earned an AA degree or a sixty-unit certificate, or transferred to a four-year university. These outcomes represent only a few of the community college system’s many goals, and as such are not meant to be an exhaustive list of how we might examine community college quality or effectiveness.

Our data are unique in that we have the ability to connect a student’s performance and outcomes at community college with his or her high school data. As community colleges are open access, students do not submit transcripts from their high school, and have not necessarily taken college entrance exams such as the SAT or ACT to enter. As a result, community colleges often know very little about their students’ educational backgrounds. Researchers interested in understanding the community college population often face the same constraints. Examining the outcomes of community colleges without considering the educational backgrounds of the students enrolling in that college may confound college effects with students’ self-selection.

To address ubiquitous selection issues, we adjust our estimates of quality for important background information about a student’s high school academic performance. We measure a student’s performance on the eleventh grade English and mathematics California Standardized Tests (CSTs).5 We also determine which math course a student took in eleventh grade. In addition, we measure race-ethnicity, gender, and parent education levels from the high school file as sets of binary variables.

To account for high school quality, we include the Academic Performance Index (API) of the high school attended. Importantly, as students are enrolling in community college, they are asked about their goals for attending. Students can pick from a list of fifteen choices, including transfer with an associate’s degree, transfer without an associate’s degree, vocational certification, discover interests, improve basic skills, undecided, and others. We include students’ self-reported goals as an additional covariate for their postsecondary degree intentions. Last, we add controls for college-by-cohort means of our individual characteristics (eleventh grade CST math and English scores, race-ethnicity, gender, parental education, API, and student goal). Table 1 includes descriptive statistics on all of our measures at the individual level; table 2 includes descriptive statistics at the college level.6
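To illustrate how the college-by-cohort means can be constructed, the sketch below aggregates the placeholder covariates from the earlier sample sketch within each college-by-cohort cell and broadcasts the means back to the student level; categorical covariates such as race-ethnicity would first be expanded to indicator variables. The variable names remain our own placeholders.

```python
# Student-level covariates to average within each college-by-cohort cell
# (indicator versions of categorical variables would be included in practice).
covariates = ["cst_math", "cst_english", "female", "hs_api"]

group_means = (
    analysis_sample
    .groupby(["college_id", "cohort"])[covariates]
    .transform("mean")            # one row per student, holding that student's cell means
    .add_suffix("_college_cohort_mean")
)
analysis_sample = pd.concat([analysis_sample, group_means], axis=1)
```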

Empirical Methods

We begin by examining our outcomes across the community colleges in our sample. Figure 1 presents the distributions of total transfer units, the proportion persisting to year two, the proportion transferring, and the proportion completing across our 108 community colleges. To motivate the importance of accounting for student inputs, we plot each outcome against students’ eleventh grade math test scores at the college level (figure 2).

From these simple scatterplots it is clear that higher average student test scores are associated with better average college outcomes. However, we also note considerable variation in average outcomes for students with similar high school test scores.

To examine whether there are significant differences in quality across community college campuses, we estimate the following linear random effects model:

Yiscty = xi′β + x̄cy′γ + wsδ + τty + ζc + εiscty,   (1)

where Yiscty is our outcome variable of interest (transfer units earned, persistence into year two, transfer to a four-year institution, or degree-certificate completion) for individual i, from high school s, who is a first-time freshman enrolled at community college c, in term t of year y; xi is a vector of individual-level characteristics (race-ethnicity, gender, parental education, and eleventh grade math and English language arts test scores); x̄cy are the community college-by-cohort means of xi; ws is a measure of the quality of the high school attended by each individual (California’s API score);7 and τty is a set of year-by-semester indicators.

Table 1. Sample Descriptive Statistics (n = 254,865)

Table 2. Sample Descriptive Statistics by College (n = 108)

Figure 1. Distribution of Outcomes by College. Source: Authors’ calculations based on data from the California Community College Chancellor’s Office.
Figure 2. Average College Outcomes Against Students’ Eleventh Grade Math Test Scores. Source: Authors’ calculations based on data from the California Community College Chancellor’s Office.

Finally, εiscty is the individual-level error term.

The main parameter of interest is the community college random effect, ζc.8 We estimate ζc using an empirical Bayes shrinkage estimator to adjust for reliability. The empirical Bayes estimates are best linear unbiased predictors (BLUPs) of each community college’s random effect (quality), which take into account the variance (signal to noise) and the number of observations (students) at each college campus. Estimates of ζc with higher variance and fewer observations are shrunk toward zero (Rabe-Hesketh and Skrondal 2008).
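For readers unfamiliar with the shrinkage step, the generic empirical Bayes (BLUP) estimator for a random-intercept model takes the following textbook form; this is a standard expression, not one reproduced from the paper, and the notation for the raw college-level mean residual is ours:

$$\hat{\zeta}^{\,EB}_c = \lambda_c \,\bar{r}_c, \qquad \lambda_c = \frac{\hat{\sigma}^2_{\zeta}}{\hat{\sigma}^2_{\zeta} + \hat{\sigma}^2_{\varepsilon}/n_c},$$

where r̄c is the average covariate-adjusted residual for college c, nc is the number of students observed at that college, and the shrinkage factor λc lies between zero and one, pulling noisier estimates from smaller campuses toward zero.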

The empirical Bayes technique is commonly used in measuring the quality of hospitals (Dimick, Staiger, and Birkmeyer 2010), schools or neighborhoods (Altonji and Mansfield 2014), and teachers (Kane, Rockoff, and Staiger 2008; Carrell and West 2010). In particular, we use methodologies similar to those recently used in the literature to rank hospital quality, which shows the importance of adjusting mortality rates for patient risk (Parker et al. 2006) and for statistical reliability (caseload size) (Dimick, Staiger, and Birkmeyer 2010). In our context, we similarly adjust our college rankings for “student risk” (such as student preparation, quality, and unobserved determinants of selection) as well as for potential noise in our estimates driven by differences in campus size and student population.
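As a rough computational sketch of this strategy, a random-intercept model can be fit and its BLUPs recovered with standard mixed-model software; the example below uses statsmodels and the placeholder variable names from the earlier sketches, and the formula is an approximation of the preferred specification (model 5), not the authors’ exact code.

```python
import statsmodels.formula.api as smf

# Outcome on individual covariates, transfer goal, high school API, and
# year-by-semester indicators, with a random intercept for each college.
formula = (
    "transfer_units ~ cst_math + cst_english + C(race_eth) + female"
    " + C(parent_ed) + goal_transfer + hs_api + C(term_year)"
)
model = smf.mixedlm(formula, data=analysis_sample, groups=analysis_sample["college_id"])
result = model.fit()

# Estimated variance of the college random effect (the variance reported in table 3).
print(result.cov_re)

# Empirical Bayes (BLUP) predictions of each college's effect, shrunk toward zero.
college_effects = {college: re.iloc[0] for college, re in result.random_effects.items()}
```

The college-by-cohort means constructed earlier could be appended to the formula to mimic the lower-bound specification in row 6.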

RESULTS

Are There Measured Differences in College Outcomes?

Because we are interested in knowing whether student outcomes differ across community college campuses, we start by examining whether the variation in our estimates of ζc for our various outcomes of interest is significant. Table 3 presents results of the estimated variance, σ²ζ, in our college effects for various specifications of equation (1). High values of σ²ζ indicate there is significant variation in student outcomes across community college campuses, while low values of σ²ζ would indicate that there is little difference in student outcomes across campuses (that is, no difference in college “quality”).

In row 1, we start with the most naïve estimates, which include only a year-by-semester indicator variable. We use these estimates as our baseline model for comparative purposes and consider this to be the upper bound of the campus effects. These unadjusted estimates are analogous to comparing means (adjusted for reliability) in student outcomes across campuses. Estimates of σ²ζ in row 1 show considerable variation in mean outcomes across California’s community college campuses.

For ease of interpretation, we discuss these effects in standard deviation units. For our transfer units completed outcome in column 1, the estimated variance in the college effect of 4.86 suggests that a one standard deviation difference in campus quality is associated with an average difference of 2.18 transfer units completed in the first year for each student at that campus. Likewise, variation across campuses in our other three outcome measures is significant. A one standard deviation increase in campus quality is associated with a 6.3 percentage point increase in the probability of persisting to year two, a 7.3 percentage point increase in the probability of transferring to a four-year college, and a 7.3 percentage point increase in the probability of completion.9 One potential concern is that our estimates of σ²ζ may be biased due to differences in student quality (aptitude, motivation, and so on) across campuses. That is, the mean differences in student outcomes across campuses that we measure in row 1 may not be due to real differences in college quality, but rather to differences (observable or unobservable) in student-level inputs (such as ability). To highlight this potential bias, figure 2 shows considerable variation across campuses in our measures of student ability. The across-campus standard deviations in eleventh grade CST math and English scores are 0.25 and 0.27 standard deviations, respectively.
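To make the conversion from the reported variance to standard deviation units explicit (our arithmetic, taking the rounded variance at face value):

$$\hat{\sigma}_{\zeta} = \sqrt{\hat{\sigma}^2_{\zeta}} \approx \sqrt{4.86} \approx 2.2 \ \text{transfer units},$$

which is in line with the 2.18-unit figure quoted above; the small gap presumably reflects rounding of the variance estimate.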

Table 3. Regression Results from Random Effects Models

Therefore, in results shown in rows 2 through 5 of table 3, we sequentially adjust our estimates of ζc for a host of student-level covariates. This procedure is analogous to the hospital quality literature that calculates “risk adjusted” mortality rates by controlling for observable patient characteristics (Dimick, Staiger, and Birkmeyer 2010). Results in row 2 control for eleventh grade math and English standardized test scores. Row 3 additionally controls for our vector of individual-level demographic characteristics (race-ethnicity, gender, and parental education level). Results in row 4 add a measure of student motivation, an indicator for a student’s reported goal to transfer to a four-year college. Finally, in row 5 we add a measure of the quality of the high school that each student attended, as measured by California’s API score.
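Continuing the illustrative setup from the earlier sketches, this sequential adjustment can be mimicked by refitting the same random-intercept model with progressively richer covariate sets and recording the estimated college-effect variance each time; the covariate blocks below are stand-ins for the authors’ exact specifications.

```python
# Covariate blocks added cumulatively, mirroring rows 1 through 5 of table 3.
blocks = [
    ("row1_naive", "C(term_year)"),
    ("row2_test_scores", "cst_math + cst_english"),
    ("row3_demographics", "C(race_eth) + female + C(parent_ed)"),
    ("row4_transfer_goal", "goal_transfer"),
    ("row5_hs_quality", "hs_api"),
]

variance_by_spec = {}
rhs_terms = []
for name, block in blocks:
    rhs_terms.append(block)
    formula = "transfer_units ~ " + " + ".join(rhs_terms)
    fit = smf.mixedlm(formula, data=analysis_sample,
                      groups=analysis_sample["college_id"]).fit()
    variance_by_spec[name] = float(fit.cov_re.iloc[0, 0])  # college-effect variance under this spec

# How much the estimated variance shrinks as student-level controls are added.
print(variance_by_spec)
```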

The pattern of results in rows 2 through 5 suggests that controlling for differences in student-level observable characteristics accounts for some, but not all, of the differences in student outcomes across community colleges. Results for our transfer units earned outcome in column 1 show that the estimated variance in the college effects shrinks by 37 percent when going from our basic model to the fully saturated model. Despite this decrease, there still remains considerable variation in our estimated college effects, with a one standard deviation increase in campus quality associated with a 1.73 increase in the average number of transfer units completed by each student.

Examining results for our other three outcomes of interest, we find that controlling for student-level covariates shrinks the estimated variance in college quality by 26 percent for our persistence outcome, 70 percent for our transfer outcome, and 60 percent for completion. Again, despite these rather large decreases in the variance of the estimated college effects, considerable variation remains in student outcomes across campuses. A one standard deviation increase in college quality is associated with a 0.053 increase in the probability of persisting, a 0.039 increase in the probability of transferring, and a 0.045 increase in the probability of completion. Graphical representations of the BLUPs from model 5 are presented in figure 3.

Although the estimates shown in row 5 control for a rich set of individual-level observable characteristics, there remains potential concern that our campus quality estimates may still be biased due to selection on unobservables that are correlated with college choice (Altonji, Elder, and Taber 2005). To directly address this concern, recent work by Joseph Altonji and Richard Mansfield (2014) shows that controlling for group averages of observed individual-level characteristics adequately controls for selection on unobservables and provides a lower bound of the estimated variance in school quality effects.10

Figure 3. Ranked College Effects by Outcome. Source: Authors’ calculations based on data from the California Community College Chancellor’s Office.

Therefore, in results shown in row 6 we additionally control for college-by-cohort means of our individual characteristics (eleventh grade CST math and English scores, race-ethnicity, gender, parental education, and API score). We find that controlling for these college-level covariates shrinks the estimated variance in college quality relative to the naïve model (model 1) by 39 percent for transfer units, 36 percent for our persistence outcome, 71 percent for our transfer outcome, and 64 percent for completion. Model 5 remains our preferred specification; however, even in this more fully specified model, we still find considerable variation in student outcomes across community college campuses.

Exploring Campus Ranking

Given recent proposals by the Obama administration to create a college scorecard, it is particularly critical to determine how stable (or unstable) our college quality estimates, ζc, are across specifications with various control variables. On the one hand, if our naïve estimates in row 1 result in a similar rank ordering of colleges as the fully saturated estimates in rows 5 and 6, then scorecards based on unadjusted mean outcomes will provide meaningful information to prospective students. On the other hand, if the rank ordering of the estimated ζc’s is unstable across specifications, it is critical that college scorecards be adjusted for various student-level inputs.11

Figure 4. Unadjusted College Effects Compared to Adjusted Effects for Transfer Units in First Year. Source: Authors’ calculations based on data from the California Community College Chancellor’s Office.
To help answer this question, we examine how the rank ordering of our college quality estimates changes after controlling for our set of observable student characteristics. Figure 4 graphically presents the unadjusted and adjusted estimated college quality effects for our transfer unit outcome (our preferred specification, model 5 from table 3).

The squares represent the unadjusted effects, and the dots represent the effects and 95 percent confidence intervals after adjusting for student-level covariates. This graph highlights two important findings: schools at the very bottom and very top of the quality distribution tend to stay at the bottom and top of the rankings, and movement up and down in the middle of the distribution is considerable. This result indicates that unadjusted mean outcomes may be valuable in identifying the very best and very worst colleges, but they likely do a poor job of capturing the variation in college quality in the middle of the distribution. The same pattern holds for the other outcomes, which are not pictured.

In a more detailed look at how the rankings of college quality change when adjusting for student-level covariates, figure 5 plots rank changes in transfer units in the first year by campus. This graph shows that the rank ordering of campuses changes considerably after controlling for covariates. The average campus moved about thirty ranks in either direction; the largest gain was seventy-five ranks and the largest drop was forty-nine ranks.
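A comparison of this kind can be computed directly from two sets of estimated college effects. The sketch below assumes unadjusted_effects and adjusted_effects are pandas Series (or dictionaries) of BLUPs indexed by college, for example built from the row 1 and row 5 models above; the names are ours.

```python
import pandas as pd

# Rank colleges under each model (rank 1 = highest estimated effect).
unadjusted_rank = pd.Series(unadjusted_effects).rank(ascending=False)
adjusted_rank = pd.Series(adjusted_effects).rank(ascending=False)

# Positive values mean a college moves up the rankings once student inputs are adjusted for.
rank_change = unadjusted_rank - adjusted_rank

print("mean absolute change:", rank_change.abs().mean())
print("largest gain:", rank_change.max(), "| largest drop:", rank_change.min())
```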

These results highlight the importance of controlling for student-level inputs when estimating college quality. They also counsel caution to policymakers who may be tempted to rank colleges based on unadjusted mean outcome measures such as graduation rates or postgraduation wages.

CONCLUSION

Understanding quality differences among educational institutions has been a preoccupation of both policymakers and social scientists for more than half a century (Coleman 1966). It is well established that individual ability and socioeconomic factors bear a stronger relation to academic achievement than the school attended. In fact, when these factors are statistically controlled for, it appears that differences between schools account for only a small fraction of differences in pupil achievement. Yet the influence of institutional quality differences in the postsecondary setting, particularly in the less selective two-year sector, where the majority of Americans begin their postsecondary schooling, has rarely been explored.

Figure 5. Change in Rank from Unadjusted to Fully Specified Model. Source: Authors’ calculations based on data from the California Community College Chancellor’s Office. Note: Colleges ordered by unconditional rank.
To help fill this gap, we use data from California’s Community College System to examine whether differences in student outcomes across college campuses are significant. Our results show considerable differences across campuses in both short-term and longer-term student outcomes. However, many of these differences are accounted for by student inputs, namely measured ability, demographic characteristics, college goals, and unobservables that drive college selection. Nevertheless, after controlling for these inputs, our results show that important differences between colleges remain. What is the marginal impact of being at a better quality college? Our lower-bound estimates indicate that going from the 10th to 90th percentile of campus quality is associated with a 3.68 (37.3 percent) increase in student transfer units earned, a 0.14 (20.8 percent) increase in the probability of persisting, a 0.09 (42.2 percent) increase in the probability of transferring to a four-year college, and a 0.08 (26.6 percent) increase in the probability of completion.

A natural follow-up question is what observable institutional differences, if any, might be driving these effects. A close treatment of what might account for these institutional differences in our setting is beyond the scope of this paper. However, prior work has identified several characteristics that may be associated with student success, including peer quality, faculty quality, class size or faculty-student ratio, and a variety of measures of college costs (Long 2008; Calcagno et al. 2008; Bailey et al. 2006; Jacoby 2006).

Finally, identifying institutional effects is not purely an academic exercise. In today’s policy environment, practitioners and higher education leaders are looking to identify the conditions and characteristics of postsecondary institutions that lead to student success. Given the recent push by policymakers to provide college scorecards, our analysis furthers that goal for a critical segment of higher education, public open access community colleges, and the diverse students they serve. Our results show that college rankings based on unadjusted mean differences can be quite misleading. After adjusting for student-level differences across campuses, the average school rank in our sample changed by about thirty positions in either direction. Our results suggest that policymakers wishing to rank schools based on quality should adjust such rankings for differences across campuses in student-level inputs.

Michal Kurlaender

Michal Kurlaender is associate professor of education at the University of California, Davis.

Scott Carrell

Scott Carrell is associate professor of economics at the University of California, Davis.

Jacob Jackson

Jacob Jackson is a research fellow at the Public Policy Institute of California.

Michal Kurlaender at mkurlaender@ucdavis.edu, University of California Davis, One Shields Ave., Davis, CA 95616
Scott Carrell at secarrell@ucdavis.edu, University of California Davis, One Shields Ave., Davis, CA 95616
Jacob Jackson at jackson@ppic.org, Senator Office Building, 1121 L Street, Suite 801, Sacramento, California 95814

We thank the California Community College Chancellor’s Office and the California Department of Education for their assistance with data access. Opinions reflect those of the authors and do not necessarily reflect those of the state agencies providing data.

REFERENCES

Alon, Sigal, and Marta Tienda. 2005. “Assessing the ‘Mismatch’ Hypothesis: Differentials in College Graduation Rates by Institutional Selectivity.” Sociology of Education 78(4): 294–315.
Altonji, Joseph G., Todd E. Elder, and Christopher R. Taber. 2005. “Selection on Observed and Unobserved Variables: Assessing the Effectiveness of Catholic Schools,” Journal of Political Economy 113(1): 151–84.
Altonji, Joseph, and Richard Mansfield. 2014. “Group-Average Observables as Controls for Sorting on Unobservables When Estimating Group Treatment Effects: The Case of School and Neighborhood Effects.” NBER working paper no. 20781. Cambridge, Mass.: National Bureau of Economic Research.
Anderson, Nick. 2013. “Napolitano, University of California President, ‘Deeply Skeptical’ of Obama College Rating Plan.” Washington Post, December 6, 2013. Accessed December 17, 2015. http://www.washingtonpost.com/local/education/napolitano-uc-president-deeply-skeptical-of-keyassumption-in-obama-college-rating-plan/2013/12/06/f4f505fa-5eb8-11e3-bc56-c6ca94801fac_story.html.
Anelli, Massimo. 2014. “Returns to Elite College Education: A Quasi-Experimental Analysis.” Job Market paper. Davis: University of California. Accessed December 16, 2015. http://www.econ.ku.dk/Kalender/seminarer/28012015/paper/MassimoAnelli_JobMarketPaper.pdf.
Bailey, Thomas, Juan Carlos Calcagno, Davis Jenkins, Timothy Leinbach, and Gregory Kienzl. 2006. “Is Student-Right-to-Know All You Should Know? An Analysis of Community College Graduation Rates.” Research in Higher Education 47(5): 491–519.
Black, Dan, and Jeffrey Smith. 2004. “How Robust Is the Evidence on the Effects of College Quality? Evidence from Matching.” Journal of Econometrics 121(1–2): 99–124.
———. 2006. “Estimating the Returns to College Quality with Multiple Proxies for Quality.” Journal of Labor Economics 24(3): 701–28.
Borman, Geoffrey, and Maritza Dowling. 2010. “Schools and Inequality: A Multilevel Analysis of Coleman’s Equality of Educational Opportunity Data.” Teachers College Record 112(5): 1201–46.
Bowen, William G., Matthew M. Chingos, and Michael McPherson. 2009. Crossing the Finish Line. Princeton, N.J.: Princeton University Press.
Brand, Jennie E., and Charles N. Halaby. 2006. “Regression and Matching Estimates of the Effects of Elite College Attendance on Educational and Career Achievement.” Social Science Research 35(3): 749–70.
Brint, Steve, and Jerome Karabel. 1989. The Diverted Dream: Community Colleges and the Promise of Educational Opportunity in America, 1900–1985. New York: Oxford University Press.
Calcagno, Juan Carlos, Thomas Bailey, Davis Jenkins, Gregory Kienzl, and Timothy Leinbach. 2008. “Community College Student Success: What Institutional Characteristics Make a Difference?” Economics of Education Review 27(6): 632–45.
Carrell, Scott E., and James E. West. 2010. “Does Professor Quality Matter? Evidence from Random Assignment of Students to Professors,” Journal of Political Economy 118(3): 409–32.
Clotfelter, Charles T., Helen F. Ladd, Clara G. Muschkin, and Jacob L. Vigdor. 2013. “Success in Community College: Do Institutions Differ?” Research in Higher Education 54(7): 805–24.
Coleman, James S. 1966. “Equality of Educational Opportunity.” Office of Education Pub no. 101-228-169. Washington: U.S. Department of Health, Education, and Welfare.
Cunha, Jesse M., and Trey Miller. 2014. “Measuring Value-Added in Higher Education: Possibilities and Limitations in the Use of Administrative Data.” Economics of Education Review 42(1): 64–77.
Dale, Stacy B., and Alan B. Krueger. 2002. “Estimating the Payoff to Attending a More Selective College: An Application of Selection on Observables and Unobservables.” Quarterly Journal of Economics 117(4): 1491–527.
Dale, Stacy B., and Alan B. Krueger. 2012. “Estimating the Return to College Selectivity over the Career Using Administrative Earning Data.” NBER working paper no. 17159. Cambridge, Mass.: National Bureau of Economic Research.
Dimick, Justin, Douglas Staiger, and John Birkmeyer. 2010. “Ranking Hospitals on Surgical Mortality: The Importance of Reliability Adjustment.” Health Services Research 45(6): 1614–29.
Dougherty, Kevin J. 1994. The Contradictory College: The Conflicting Origins, Impacts, and Futures of the Community College. Albany: State University of New York Press.
Fain, Paul. 2013. “Performance Funding Goes Federal.” Inside Higher Ed, August 23. Accessed December 17, 2013. https://www.insidehighered.com/news/2013/08/23/higher-education-leaders-respond-obamas-ambitous-ratings-system-plan.
Goldhaber, Dan, Stephanie Liddle, Roddy Theobald, and Joe Walch. 2010. “Teacher Effectiveness and the Achievement of Washington’s Students in Mathematics.” CEDR working paper 2010–06. Seattle: University of Washington.
Gross, Karen. 2013. “Ratings Are Not So Easy.” Inside Higher Ed, August 23. Accessed December 17, 2015. https://www.insidehighered.com/views/2013/08/23/obamas-ratings-system-may-be-difficult-pull-essay.
Grubb, W. Norton. 1991. “The Decline of Community College Transfer Rates: Evidence from National Longitudinal Surveys.” Journal of Higher Education 62(2): 194–222.
Hoekstra, Mark. 2009. “The Effect of Attending the Flagship State University on Earnings: A Discontinuity-Based Approach.” Review of Economics and Statistics 91(4): 717–24.
Hoxby, Caroline M. 2009. “The Changing Selectivity of American Colleges.” Journal of Economic Perspectives 23(4): 95–118.
Hussar, William J., and Tabitha M. Bailey. 2009. Projections of Education Statistics to 2018, 37th ed. NCES 2009–062. Washington: U.S. Department of Education.
Jacoby, Daniel. 2006. “Effects of Part-Time Faculty Employment on Community College Graduation Rates.” Journal of Higher Education 77(6): 1081–103.
Kane, Thomas J., Jonah E. Rockoff, and Douglas O. Staiger. 2008. “What Does Certification Tell Us About Teacher Effectiveness? Evidence from New York City.” Economics of Education Review 27(6): 615–31.
Kane, Thomas J., and Douglas O. Staiger. 2008. “Estimating Teacher Impacts on Student Achievement: An Experimental Evaluation.” NBER working paper no. 14607. Cambridge, Mass.: National Bureau of Economic Research. Accessed February 24, 2016. http://www.nber.org/papers/w14607.
Long, Mark C. 2008. “College Quality and Early Adult Outcomes.” Economics of Education Review 27(5): 588–602.
Monks, James. 2000. “The Returns to Individual and College Characteristics: Evidence from the National Longitudinal Survey of Youth.” Economics of Education Review 19(3): 279–89.
Mullen, Ann L., Kimberly Goyette, and Joseph A. Soares. 2003. “Who Goes to Graduate School? Social and Academic Correlates of Educational Continuation After College.” Sociology of Education 76(2): 143–69.
Parker, Joseph P., Zhongmin Li, Cheryl L. Damberg, Beat Danielsen, and David M. Carlisle. 2006. “Administrative Versus Clinical Data for Coronary Artery Bypass Graft Surgery Report Cards: The View from California.” Medical Care 44(7): 687–95.
Rabe-Hesketh, Sophia, and Anders Skrondal. 2008. Multilevel and Longitudinal Modeling Using Stata, 2nd ed. College Station, Tex.: Stata Press.
Rivkin, Steven G., Eric A. Hanushek, and John F. Kain. 2005. “Teachers, Schools and Academic Achievement.” Econometrica 73(2): 417–58.
Rosenbaum, James. 2001. Beyond College for All: Career Paths for the Forgotten Half. New York: Russell Sage Foundation.
Rumberger, Russell, and Gregory Palardy. 2005. “Does Segregation Still Matter? The Impact of Student Composition on Academic Achievement in High School.” Teachers College Record 107(9): 1999–2045.
Sengupta, Ria, and Christopher Jepsen. 2006. “California’s Community College Students.” California Counts: Population Trends and Profiles 8(2): 1–24.
Shulock, Nancy, and Colleen Moore. 2007. “Rules of the Game: How State Policy Creates Barriers to Degree Completion and Impedes Student Success in the California Community Colleges.” Sacramento, Calif.: Institute for Higher Education and Leadership.
Small, Mario L., and Christopher Winship. 2007. “Black Students’ Graduation from Elite Colleges: Institutional Characteristics and Between-Institution Differences.” Social Science Research 36(2007): 1257–75.
Stange, Kevin. 2012. “Ability Sorting and the Importance of College Quality to Student Achievement: Evidence from Community Colleges.” Education Finance and Policy 7(1): 74–105.
Startz, Richard. 2012. “Policy Evaluation Versus Explanation of Outcomes in Education: That Is, Is It the Teachers? Is It the Parents?” Education Finance and Policy 7(3): 1–15.
State of California. 1960. A Master Plan for Higher Education in California: 1960–1975. Sacramento: California State Department of Education. Accessed December 17, 2015. http://www.ucop.edu/acadinit/mastplan/MasterPlan1960.pdf.
U.S. Department of Education. 2015. “College Scorecard.” Accessed December 17, 2015. https://collegescorecard.ed.gov.
The White House. 2010. “The White House Summit on Community Colleges: Summit Report.” Washington, D.C. Accessed December 17, 2015. https://www.whitehouse.gov/sites/default/files/uploads/community_college_summit_report.pdf.
———. 2013. “Fact Sheet on the President’s Plan to Make College More Affordable: A Better Bargain for the Middle Class.” Washington, D.C.: Office of the Press Secretary. Accessed December 17, 2015. https://www.whitehouse.gov/the-press-office/2013/08/22/fact-sheet-president-s-plan-make-college-more-affordable-better-bargain-.

Footnotes

1. The master plan articulated the distinct functions of each of the state’s three public postsecondary segments. The University of California (UC) is designated as the state’s primary academic research institution and is reserved for the top one-eighth of the state’s graduating high school class. The California State University (CSU) is to serve primarily the top one-third of California’s high school graduating class in undergraduate training, and in graduate training through the master’s degree, focusing primarily on professional training such as teacher education. Finally, the California Community Colleges are to provide academic instruction for students through the first two years of undergraduate education (lower division), as well as provide vocational instruction, remedial instruction, English as a second language courses, adult noncredit instruction, community service courses, and workforce training services.

2. Between 2007 and 2018, the number of students enrolled in a college or university is expected to increase by 4 percent for whites but by 38 percent for Hispanics, 29 percent for Asian–Pacific Islanders, and 26 percent for African Americans (Hussar and Bailey 2009).

3. Our match rate may be the result of several considerations. First, the name match occurred on the first three letters of a student’s first name and last name, leading to many duplicates. Students may have entered different names or birthdays at the community college, or may have omitted information in either system. Second, the denominator may also be too high; not all community college students attended California high schools. Finally, students who did attend a California high school but did not take the eleventh grade standardized tests were not included in the high school data.

4. We excluded the three campuses that use the quarter system, as well as three adult education campuses. Summer students were allowed in the sample only if they took enough units in their first year to guarantee they also took units in the fall.

5. We include CST scaled scores, which are approximately normally distributed across the state.

6. Unlike the four-year college quality literature, we do not account for students’ college choice set since most community college students enroll in the school closest to where they attended high school. Using nationally representative data, Stange (2012) finds that in contrast to four-year college students, community college students do not appear to travel farther in search of higher quality campuses, and, importantly, “conditional on attending a school other than the closest one, there does not appear to be a relationship between student characteristics, school characteristics, and distance traveled among community college students” (2012, 81).

7. The Academic Performance Index (API) is a measure of California schools’ academic performance and growth. It is the chief component of California’s Public Schools Accountability Act, passed in 1999. API is composed of schools’ state standardized test scores and results on the California High School Exit Exam; scores range from a low of 200 to a high of 1,000.

8. We use a random effects model instead of a fixed effects model due to the efficiency (minimum variance) of the random effects model. However, our findings are qualitatively similar when using a fixed effects framework.

9. Completion appears to be driven almost entirely by transfer; that is, few students who do not transfer appear to complete AA degrees. As such, these two outcomes likely measure close to the same thing.

10. Altonji and Mansfield (2014) show that, under reasonable assumptions, controlling for group means of individual-level characteristics “also controls for all of the across-group variation in the unobservable individual characteristics.” This procedure provides a lower bound of the school quality effects because school quality is likely an unobservable that drives individual selection.

11. Both hospital rankings and teacher quality rankings have been shown to be sensitive to controlling for individual characteristics (see, for example, Kane and Staiger 2008; Dimick, Staiger, and Birkmeyer 2010).
