Abstract

This study evaluates the effect of Florida’s Bright Futures Scholarship Program on students’ college choices. We used a regression discontinuity design to estimate the effect of two award levels, which had different SAT/ACT thresholds, on the probability of students choosing in-state public colleges and four-year public colleges. The most consistent and robust finding was a positive, significant increase in the probability of attending Florida’s public colleges, and in the probability of choosing four-year public colleges, among students who barely met the program eligibility criteria relative to those who barely missed them. That is, the evidence presented in this analysis indicates that the Bright Futures program significantly altered students’ college choices, both in terms of attending in-state public colleges and in terms of attending four-year public colleges. Although this finding held at different award levels and for students who took the SAT and/or ACT, the magnitude of the program effect varied across these groups.

Keywords

merit aid, college choice, regression discontinuity

Introduction

Since Georgia’s “Helping Outstanding Pupils Educationally” (HOPE) program was implemented in 1993, more than a dozen states have adopted statewide merit-based financial aid programs, although the exact number varies depending on how merit-based programs are defined (Doyle, 2006; Zhang & Ness, 2010). These programs have often been justified as strategies to broaden college access and retain the best and brightest students in state. Empirical evidence based on college enrollment data and U.S. Census data suggests that these merit-aid programs have been largely successful in achieving these goals (Cornwell, Mustard, & Sridhar, 2006; Dynarski, 2000, 2008; Zhang, 2011). These programs have been especially attractive from an economic development perspective because they boost the college-educated labor force in merit-aid states (Dynarski, 2008).

In this study, we used student-level administrative data collected by the Florida Department of Education to investigate the impact of the Bright Futures Scholarship Program (hereinafter Bright Futures) on college choices among Florida’s high school graduates. In particular, we asked two related research questions: (1) Does the Bright Futures program affect students’ decisions to attend Florida’s public institutions? (2) Does the program affect students’ choices between Florida’s four-year and two-year public colleges?

This study contributes to the merit-aid literature in several important ways. First, studies using institution-level enrollment data disguise the college choices of individual students because enrollment aggregation obscures the prices students face and the choices they subsequently make. The individual-level administrative data used in this study track postsecondary enrollment for all high school graduates in Florida and provide very detailed information on students’ demographic characteristics, family background, academic performance, and financial aid. These data allowed us to study the effect of financial aid on detailed college choice decisions. Second, most studies have used Georgia as the context because the HOPE Scholarship program is one of the earliest and probably the best known in the nation; however, because eligibility criteria and award generosity vary across programs, findings from Georgia may not generalize to other states. The Bright Futures program features a tiered structure, which allowed us to examine the effects of financial incentives of different magnitudes on students with different levels of academic preparation. Finally, by exploiting the specific eligibility criteria of the merit aid, we were able to use a regression discontinuity (RD) design, which arguably has better internal validity than conventional regression methods, to help remedy the inferential problems encountered when establishing causal effects with observational data.

Literature Review

This study was informed by two related strands of literature developed over the last several decades. These two lines of research have examined the effect of financial incentives in general, and the effect of merit-aid programs in particular, on students’ college enrollment. The first line of research typically found a positive relationship between financial incentives and college participation (e.g., Desjardins, Ahlburg, & McCall, 2006; Kane, 2003; Kim, Desjardins, & McCall, 2009; Seftor & Turner, 2002; Van der Klaauw, 2002). These financial incentives included various federal, state, and institutional programs. At the federal level, for example, Seftor and Turner (2002) considered the effect of the Pell grant program on the enrollment of adult students and found that students who lost eligibility for federal financial aid were about 4% less likely to enroll in college. Similarly, Bound and Turner (2002) estimated the enrollment effects of G.I. Bill aid at around 4–10 percentage points. Various state programs (other than merit-aid programs) also led to higher college participation rates. For example, Kane (2003) estimated the effect of California’s CalGrant program on college-going and found that eligibility for the state grant led to a 3 to 4 percentage point increase in the proportion of admitted applicants enrolling in college. Abraham and Clark (2006) compared the number of D.C. residents attending public institutions in other states before and after the implementation of the D.C. Tuition Assistance Grant program and found that college participation nearly doubled due to the program. Finally, financial incentives provided by institutions also attracted student enrollment. For example, Van der Klaauw (2002) examined the effect of institutional financial aid on college matriculation and estimated an elasticity of 0.86 with regard to institutional aid, making it a powerful tool for colleges in attracting students.

Not only do financial incentives affect students’ decisions about whether to attend college, they also influence their college choices. Among these important decisions are students’ choices between out-of-state and in-state institutions, between public and private institutions, and between two-year and four-year institutions. For high-achieving students these choices represent “high stakes” human capital investment (Avery & Hoxby, 2004). Linsenmeier et al.’s (2006) study focused on one major four-year university’s replacement of its financial aid packages for low-income students with grant-only packages. They determined that matriculation among low-income students increased slightly, by a statistically insignificant 3 percentage points, but that matriculation of low-income minority students increased more substantially, by 8–10 percentage points. Similarly, Abraham and Clark (2006) used the introduction of a tuition assistance grant program in the District of Columbia as an exogenous source of price variation to identify financial aid effects on enrollment decisions. The program allowed D.C. residents to pay “in-state” tuition rates at out-of-state public institutions. They found a college enrollment increase of 9 percentage points among freshman-age students, along with increases in applications to four-year colleges and universities. They also determined that while applications to lower-tier four-year colleges increased, applications to top-tier institutions did not decrease significantly.

The second strand of research has examined the effects of state merit-aid programs on college enrollment and choices and found positive and significant effects on college enrollment, especially at in-state four-year institutions. Using Current Population Survey data from 1989–1997 and difference-in-difference estimation, Dynarski (2000) reported that Georgia’s HOPE Scholarship increased Georgia’s net enrollment by 7–8%. Dynarski (2004) studied seven Southern states with merit-aid programs and found 5–7% increases in enrollment due to these programs. Using IPEDS data from 1988–1997 and comparing Georgia with other member states of the Southern Regional Educational Board (SREB), Cornwell, Mustard, and Sridhar (2006) found increases in total enrollment of about 6%.

This growth in freshman enrollment in merit-aid states was partly due to reduced out-migration to other states. For example, Dynarski (2004) found that students in the seven Southern states with merit-aid programs were less likely to attend out-of-state institutions near their borders. Orsuwan and Heck (2009) reported that students in merit-aid states were less likely to leave the state after merit programs were created, although the exact policy effects depended on a variety of state-specific economic and political contexts; for example, the most substantial declines occurred in states where a high proportion of high school graduates had attended out-of-state colleges and universities before program adoption. Zhang and Ness (2010) confirmed that enrollment growth in merit-aid states was at least partly due to fewer students attending out-of-state institutions following the implementation of merit-aid programs. Further, merit-aid programs have varying effects on different types of colleges. Cornwell, Mustard, and Sridhar (2006) found that the effects were most pronounced at four-year colleges and universities. Dynarski (2004) reported a comparable change in college choice, finding increases in enrollment at four-year institutions but decreases at two-year institutions. Zhang and Ness (2010) reported that the greatest merit-aid effects occurred at in-state research and doctoral institutions.

Florida’s Bright Futures Program

Florida was one of the earliest states to adopt a state-sponsored merit aid program. In 1981, it established the Florida Undergraduate Scholars Fund (which later became the Florida Academic Scholars [FAS] Award), a merit-based financial aid program (Hu, Trengove, & Zhang, 2012). In 1991, Florida introduced its second statewide merit-based program, the Vocational Gold Seal Scholarship (which later became the Florida Gold Seal Vocational [GSV] Scholars award), specifically for vocational students. After observing the HOPE scholarship program in neighboring Georgia and voters’ discontent with the use of state lottery proceeds, the Florida legislature created and funded the Florida Bright Futures Scholarship Program in 1997. The two previously existing programs—FAS and GSV—were integrated into the newly created Bright Futures program. The major change in 1997 was the addition of the Florida Medallion Scholars (FMS) Award, which broadened student participation in merit aid in Florida and significantly increased total funding for merit-based financial aid. It is noteworthy that, because parts of the Bright Futures program existed before 1997, comparing student college choices before and after 1997 would capture only the additional effects of the 1997 program changes, which would almost certainly underestimate the effect of the Bright Futures program as a whole. In contrast, the regression discontinuity approach can yield accurate estimates of the program effects by exploiting changes in the magnitude of financial aid, and in student college choice behavior, at predetermined threshold points.

Florida’s Bright Futures Scholarship program, like the HOPE scholarship program in Georgia, is simple and straightforward. As long as students meet the academic performance qualifications and file an application, they qualify for generous financial assistance toward college tuition and related expenses. The FAS award covers 100% of tuition plus an allowance for fees and college-related expenses; initial qualification requires a 3.5 GPA on 15 college preparatory credits in high school and a minimum SAT score of 1270 or a minimum ACT score of 28. The FMS award pays 75% of tuition and required fees; initial qualification requires a 3.0 GPA on 15 college preparatory credits (e.g., English, Math, Science) and a minimum SAT score of 970 or ACT score of 20 (Hu et al., 2012). Bright Futures scholarships are available to Florida residents who enroll in an eligible Florida public or private postsecondary institution. For private institutions, the award amount is determined by the tuition rates at equivalent public institutions. Part-time students are also eligible as long as they are enrolled for at least six semester credit hours. Students must apply for the scholarship by April 1 of the last semester before high school graduation, and a student may accept an initial award up to three years after high school graduation.
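As a rough illustration, the initial test-score eligibility rules described above can be expressed as a simple lookup. This is a sketch for exposition only: the actual program also requires the GPA and course criteria, residency, and application deadlines described in the text, none of which are checked here.

```python
def initial_award_tier(sat=None, act=None):
    """Classify a student into an initial Bright Futures award tier
    based solely on the SAT/ACT thresholds described in the text.
    GPA, course, residency, and application requirements are omitted."""
    # FAS: SAT >= 1270 or ACT >= 28 (plus a 3.5 GPA, not checked here)
    if (sat is not None and sat >= 1270) or (act is not None and act >= 28):
        return "FAS"   # covers 100% of tuition plus an allowance
    # FMS: SAT >= 970 or ACT >= 20 (plus a 3.0 GPA, not checked here)
    if (sat is not None and sat >= 970) or (act is not None and act >= 20):
        return "FMS"   # covers 75% of tuition and required fees
    return None        # not eligible on test scores alone
```

A student qualifies under either test, which is why the analysis below treats the SAT and ACT eligibility rules separately.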

Data and Methods

Data and Variables

The Florida Education Data Warehouse (FEDW) includes all education-related data collected by the state from public colleges and universities (e.g., student course work, grades, degree program) and public schools (e.g., student course work, grades, SAT or ACT scores), as well as student background information (e.g., gender, race/ethnicity, eligibility for the free or reduced-price lunch program). The data used for this analysis covered cohorts of students who were high school seniors in 2003–04, 2004–05, and 2005–06. For each cohort, we limited our analysis to students who were between 17 and 20 years old at the time of high school graduation. This restriction, although not necessary from a research design point of view, removed some outliers whose age at high school graduation was very young or very old. We also removed a few students who had attended college before their senior year of high school because we assumed that these records most likely contained enrollment errors. Even if these records were correct, such students were not eligible for aid because the Bright Futures program was only available to high school graduates who attended college within three years after graduation.

Data on each cohort track students’ progression through the education system from high school to college. Ideally, we wanted to know whether the Bright Futures program has any effect on a variety of college choices, including, for example, in-state vs. out-of-state, private vs. public, and four-year vs. two-year. Because FEDW data only include information on students in the public school and college systems, when a student does not appear in the college enrollment files, we do not know whether she did not attend college, attended a private institution in Florida, or attended an out-of-state institution. Due to this data limitation, we created two variables representing student college attendance and choice as our main dependent variables. The first is whether a student attends a public institution in Florida. Given that Bright Futures provides financial assistance to students who attend either public or private institutions in Florida, one would expect students to be more likely to attend college in-state than out-of-state. Further, since the award amount for students who attend Florida’s private institutions is determined by the tuition rates at equivalent public institutions, one would expect the program to incentivize attending Florida’s public institutions more than private ones. Recent studies using aggregate college enrollment data support these predictions (Zhang, Hu, & Sensenig, 2013). We hypothesized that this also held at the individual level, i.e., students who were eligible for the merit-aid program were more likely to attend Florida’s public colleges than those who were not. The second college choice variable indicates whether a student attends a four-year or two-year institution, conditional on attending a public college in Florida.
Given that the award amount is proportional to tuition and fees (e.g., FMS covers 75% and FAS covers 100% of tuition and fees), it was expected that students who were eligible for the merit-aid program were more likely to attend four-year institutions than those not eligible for the award.

Data on student test scores (SAT and ACT) and high school GPA were also requested from FEDW, together with a variety of student and family background characteristics, including gender, age, race/ethnicity, and eligibility for free or reduced-price lunch. FEDW provides excellent data on SAT and ACT scores from various sources, including college application and admission files, financial aid application files, and, ultimately, College Board and ACT Inc. records. Data on high school GPA, however, presented two problems. First, high school GPA is only provided for students who applied to four-year public colleges in Florida. This created an obvious selection issue, i.e., students with GPA data were more likely to be eligible for the award and to attend four-year colleges. Second, when this data element was available, it usually reported the overall high school GPA, whereas program eligibility is based on the average GPA for a set of required courses, including 4 in English (3 if with substantial writing), 4 in Math (Algebra I and above), 3 in Natural Sciences (2 if with substantial lab), 3 in Social Sciences, and 2 in Foreign Language courses. As a result, even for the subset of students whose high school GPA was available, it did not serve as a good criterion for program eligibility. For these reasons, we chose not to use GPA in this analysis.

Given the limitations of the GPA data, it was important to assess the overall misclassification based on SAT/ACT scores. Misclassification could occur for a variety of reasons. The first is noncompliance due to student choice: students who are eligible for the award may end up not receiving it because they did not know about the program, did not submit applications on time, or were unable to submit them due to system or human errors. Unfortunately, we do not have the data to verify the magnitude of this possibility alone. Another scenario is misclassification at the researcher’s end. For example, because we did not use GPA as one of the eligibility rules, students who qualified for the award based on SAT or ACT scores in our analysis may have been ineligible in reality. Previous research showed that SAT/ACT scores tend to be more rigorous criteria than the GPA criterion (Scott-Clayton, 2011), so we expected little misclassification. As a check, we tabulated students with both high school GPA and SAT scores and found that approximately 12% of students who were eligible for FMS based on the SAT threshold had overall high school GPAs below 3.0, and 10.4% of students who were eligible for FAS based on the SAT threshold had overall high school GPAs below 3.5. These numbers suggest some misclassification due to the lack of accurate high school GPA data.

Considering that students’ overall GPAs could differ substantially from the GPAs used to determine aid eligibility, we further evaluated the severity of the misclassification problem by requesting data on Bright Futures award disbursement from Florida’s Department of Education. These data, however, did not allow us to fully assess the problem, for two reasons. First, information on actual award disbursement was only available for students who attended Florida’s public institutions; if a student attended an out-of-state institution or a private college in Florida, her eligibility is not available from this file. Second, the financial aid office retains only a partial list of award disbursements: even if a student attended a public college in Florida and received a Bright Futures award, there is no guarantee that she appears in the disbursement file. Among the 2003–04 high school senior cohort (approximately 83,000 students with either SAT or ACT scores), a total of 22,240 students were included in the award disbursement file; among the 2004–05 cohort, only 658 students were included, and this number was even lower, at 302, for the 2005–06 cohort.

To make the best use of the available information, we examined the accuracy of our classification for those students who were included in the award file. Although our data do not allow us to determine whether a student was eligible for the award when she is absent from the award file, a student must have received the award if she appears in it. Consequently, assessing misclassification for this group of awardees provided partial evidence on the severity of misclassification in our analysis. Due to incomplete award data, many students who were classified as eligible for the award do not appear in the award file; these students could have attended out-of-state institutions, attended private institutions in Florida, been ineligible for the award, or been eligible but not included in the file. However, for all three cohorts of high school seniors, misclassification among those who were included in the award file was minimal. For example, for the 2003–04 cohort, among 22,240 Bright Futures awardees in the award file, the classification used in this analysis (i.e., based on SAT/ACT scores) correctly classified 22,058 students, a 0.82% misclassification rate. Further analysis of students whose SAT scores were between 910 and 1020 (i.e., the primary bandwidth used in this analysis) suggested that misclassification was more likely to occur around the cutoff thresholds; however, even for students within this narrow range, the misclassification rate was only about 3.1%. The misclassification rate was zero for all 658 awardees with award information in the 2004–05 cohort. For the 2005–06 cohort, the misclassification rate was 0.33% for the entire sample of awardees and 2.3% for those whose SAT scores were between 910 and 1020. These numbers suggest that misclassification might not be a major problem in our analysis.
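The misclassification rates above follow directly from the reported counts. As a check on the arithmetic, the 2003–04 figure can be reproduced (the counts are taken from the text):

```python
def misclassification_rate(total_awardees, correctly_classified):
    """Share of known awardees whom the SAT/ACT-based rule fails to
    classify as award-eligible."""
    return (total_awardees - correctly_classified) / total_awardees

# 2003-04 cohort: 22,240 awardees in the file, 22,058 correctly classified
rate = misclassification_rate(22240, 22058)
print(f"{rate:.2%}")  # 0.82%, matching the rate reported in the text
```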

Methods

In this study, we used a regression discontinuity approach to estimate the causal effects of Bright Futures on college choices. A substantial body of literature uses RD to examine postsecondary access and educational attainment (Bettinger, 2004; DesJardins & McCall, 2014; McCall & Bielby, 2012; Scott-Clayton, 2011; Trochim, 1984; Van der Klaauw, 2002). RD is a useful technique when there are specific, measurable criteria for eligibility for a program. In our study, students were assigned to the treatment group (i.e., merit-aid recipients) or the control group (i.e., nonrecipients) based on a set of prespecified criteria. It is worth noting that the RD design assumes that students whose eligibility scores are close to the cutoff threshold are very similar, akin to being randomly assigned around the threshold. In a “sharp” design, all subjects at or above the cutoff threshold receive the treatment and those below the threshold serve as controls. In a “fuzzy” design, the probability of receiving treatment jumps at the cutoff point. (See Cook & Campbell, 1979, and Van der Klaauw, 2002, for more detailed discussion.)

Formally, the RD method estimates a local linear regression (Imbens & Lemieux, 2008) around a specific cutoff point S̄ as follows:

y_i = α + β · 1{S_i ≥ S̄} + f(S_i) + ε_i

where y_i is the dependent variable and 1{·} is an indicator function, taking the value one if the logical condition in brackets holds and zero otherwise; here, it takes the value one if a student’s score S_i is at or above the cutoff point. f(S_i) is an unknown smooth function of the student’s score, which may be approximated by linear, quadratic, or cubic terms, depending on the actual data and significance tests. The parameter β gives the difference in outcomes at the cutoff point, i.e., the effect of the scholarship program on student outcomes.

Because the identification of the RD technique is based on the difference between the groups just above and just below the threshold, choosing an appropriate bandwidth becomes critical: it involves balancing precision against bias. Using a larger bandwidth yields more observations and thus more precise estimates; however, the local linear approximation becomes less accurate over a large bandwidth, leading to biased estimates. Due to the discrete nature of SAT and ACT scores, we did not use the optimal bandwidth suggested by Fan and Gijbels (1996). Instead, we used the measurement error of the test scores as guidance when choosing bandwidths. Because of random noise in the test instrument, students with scores above the cutoff point could in fact be unqualified; similarly, qualified students could have scores below the cutoff point. Measurement error thus makes it possible for students with different test scores to have similar true qualifications. Clearly, choosing a bandwidth that is too wide relative to the measurement error of the test scores would likely yield two genuinely different groups above and below the threshold. The standard error of measurement for both the verbal and math SAT sections is approximately 30 points (The College Board, 2011); because the section errors are roughly independent, the combined verbal-plus-math score has a standard error of measurement of about √(30² + 30²) ≈ 42. A bandwidth of ±60 would therefore yield two groups of students with substantial overlap in their true academic qualifications. We also used other bandwidths in the neighborhood of 60 (i.e., 40, 50, 70, and 80) in our data analysis, and the results were robust to these choices.
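The local linear specification, restricted to the ±60-point bandwidth, can be sketched as an ordinary least squares fit on simulated data. This is a sketch only: the variable names, the simulated 5-percentage-point jump, and the sample size are illustrative assumptions, not the paper’s data or estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated SAT scores over the 770-1470 range, in 10-point steps,
# around the FMS cutoff of 970
cutoff, bandwidth = 970, 60
scores = rng.choice(np.arange(770, 1480, 10), size=50_000)

# Simulated binary outcome with a 5-percentage-point jump at the cutoff
prob = 0.4 + 0.0005 * (scores - cutoff) + 0.05 * (scores >= cutoff)
attend = rng.random(50_000) < prob

# Keep only observations within the bandwidth, as in the paper
mask = np.abs(scores - cutoff) <= bandwidth
s, y = scores[mask] - cutoff, attend[mask].astype(float)

# Local linear regression with separate slopes on each side:
# y = a + b*1{S >= cutoff} + c*(S - cutoff) + d*(S - cutoff)*1{S >= cutoff}
above = (s >= 0).astype(float)
X = np.column_stack([np.ones_like(s, dtype=float), above, s, s * above])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"Estimated jump at cutoff: {beta[1]:.3f}")  # close to the simulated 0.05
```

The coefficient on the indicator, evaluated at the cutoff, is the β of the specification above: the discontinuity in the outcome probability for students just eligible versus just ineligible.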

Results

This section reports results from the analysis that used SAT/ACT scores as the criterion of award eligibility. We begin by discussing the SAT/ACT eligibility rules; this discussion led us to conduct separate analyses for groups of students based on the availability of SAT and ACT scores.

SAT and ACT as Eligibility Rules

FEDW collects data on student SAT and ACT scores from various sources, including college application and admission files, financial aid application files, and, ultimately, College Board and ACT Inc. test records. Multiple SAT (or ACT) records for a student are collapsed into one by retaining the highest SAT (or ACT) score. The data from the College Board and ACT Inc. ensure that we have each student’s entire recent test-taking history. This was critical for this analysis because students can qualify for the award via either SAT or ACT scores. Having two eligibility rules, however, complicates the regression discontinuity design. Converting ACT scores into SAT scores is attractive from a design point of view because the SAT scale separates students into groups with smaller differences between adjacent scores than the ACT scale does. For example, the difference between adjacent groups of students based on SAT scores (e.g., between 960 and 970, the SAT cutoff for FMS) is much smaller than that based on ACT scores (e.g., between 19 and 20, the ACT cutoff for FMS). Because regression discontinuity methods rest on the assumption that students around the cutoff point are similar, the SAT is a superior criterion to the ACT in this design.

Therefore, we began by examining the distribution of students by SAT and ACT scores and the concordance between the two. Looking at the distribution for the entire sample was not appropriate because students who took the SAT and those who took the ACT could differ (e.g., SAT takers may be more likely to attend out-of-state and/or private colleges). Our sample included a group of students who had both SAT and ACT scores. Table 1 reports the distribution of this particular group over SAT (the left two columns) and ACT (the right column) scores. For example, 40.60% of students in this group had SAT scores at or below 960 (i.e., not eligible for FMS under the SAT rule), whereas only 33.02% had ACT scores at or below 19 (i.e., not eligible for FMS under the ACT rule). Similarly, 10.08% (i.e., 100% − 89.92%) of students in this group had SAT scores of 1270 or higher (i.e., eligible for FAS under the SAT rule), while 13.67% (i.e., 100% − 86.33%) had ACT scores of 28 or higher (i.e., eligible for FAS under the ACT rule). These results suggest that it is much easier to qualify for the merit-aid award under the ACT rule than under the SAT rule.

A concordance table helps explain this observation. A widely used concordance table is Dorans (1999); a similar table from ACT Inc. is available at http://www.act.org/solutions/college-career-readiness/compare-act-sat/. According to the concordance table in Dorans (1999), an ACT score of 19 is equivalent to SAT scores between 900 and 930, while an ACT score of 20 is equivalent to SAT scores between 940 and 970. Stated slightly differently, although an SAT score of 970 corresponds to an ACT score of 20, it is more difficult to obtain a 970 on the SAT than a 20 on the ACT. As a result, the eligibility rule used by the Bright Futures program actually favors students who qualify via ACT scores.
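The two concordance entries cited from Dorans (1999) can be expressed as score ranges. Only the two rows quoted in the text are included; this is not the full concordance table.

```python
# SAT score ranges equivalent to each ACT score, per the two
# Dorans (1999) entries quoted in the text (inclusive ranges)
ACT_TO_SAT_RANGE = {
    19: (900, 930),
    20: (940, 970),
}

def sat_range_for_act(act_score):
    """Return the (low, high) SAT range concorded to an ACT score,
    or None for scores outside the two quoted entries."""
    return ACT_TO_SAT_RANGE.get(act_score)

# An ACT of 20 concords to SAT 940-970, so the SAT cutoff of 970 sits at
# the very top of that range: the SAT rule is the stricter of the two.
low, high = sat_range_for_act(20)
```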

Table 1. Distribution of Students with Both SAT and ACT Scores

One immediate corollary of this exercise is that using the concordance between SAT and ACT scores would be futile because, for example, students with SAT scores between 940 and 970 will likely have very different college choice behaviors, both among themselves and when compared with students with an ACT score of 20. In addition, students who take the SAT are likely different from those who take the ACT. Given these considerations, we divided all students (a total of 490,757 across the three high school senior cohorts) into four mutually exclusive groups: (1) students with only SAT scores, a total of 111,495 or 22.7% of all high school seniors; (2) students with only ACT scores, a total of 43,048 or 8.8%; (3) students with both SAT and ACT scores, a total of 109,729 or 22.4%; and (4) students with neither SAT nor ACT scores, a total of 226,484 or 46.1%. This last group typically includes students who do not aspire to attend college. The next three sections report results based on the first three groups, respectively.
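The four mutually exclusive groups can be defined by a simple rule on score availability. This is a sketch; the function name and labels are illustrative, not field names from the FEDW files.

```python
def score_group(sat=None, act=None):
    """Assign a student to one of the four mutually exclusive groups
    used in the analysis, based on which test scores are available."""
    if sat is not None and act is not None:
        return "both SAT and ACT"
    if sat is not None:
        return "SAT only"
    if act is not None:
        return "ACT only"
    return "neither"
```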

Results Based on Students with Only SAT Scores

SAT scores serve as the sole eligibility criterion for this group of students. Figure 1 tabulates the proportion of students who attended Florida’s public colleges (Panel A), and the proportion of students who attended four-year public colleges as opposed to two-year public colleges among those who attended public colleges in Florida (Panel B) over the range of SAT scores between 770 and 1470. This range was selected because it encompassed the two important threshold points for FMS (i.e., 970) and FAS (i.e., 1270) in this analysis and also because of small numbers of students with either very low or very high SAT scores. Several observations can be made based on this tabulation. First, Panel A suggests that the proportion of FL’s high school seniors who attended in-state public colleges was a concave function of SAT scores. This makes sense because students on the left tail with low SAT scores were less likely to attend college, and students on the right tail with high SAT scores were more likely to have other options such as private and/or out-of-state institutions. Second, there are discontinuities at the cutoff point of 970 and 1270, although the jump at 970 is not as obvious as that at 1270, suggesting that award eligibility at these two discrete points provided incentives for FL’s high school seniors to attend public colleges in their home state. Third, Panel B suggests that the proportion of students who attended four-year colleges in Florida was positively related to their SAT scores. Taken together with observations from Panel A, it indicates that students with low SAT scores were not only less likely to attend in-state public colleges, but also were much less likely to attend four-year institutions when they did attend colleges. The high proportion on the upper tail suggests that while students with high SAT scores were less likely to attend public colleges in Florida, when they do, they almost surely attended four-year institutions. 
Finally, there are two rather dramatic jumps in the proportion of students who attended four-year institutions at the cutoff points of 970 and 1270, suggesting that award eligibility had a considerable impact on student college choices in terms of whether to attend four-year vs. two-year institutions. [End Page 127]

Figure 1. Probability of College Enrollment for Students with Only SAT Scores

[End Page 128]

Figure 2. Regression Discontinuity Estimates for Students with Only SAT Scores

Figure 2 provides a graphical analysis of the potential effect of award eligibility on student college choices, based on linear regressions with quadratic terms that allow nonlinear relationships between college choice and test scores around the eligibility points of 970 and 1270, with a bandwidth of ±60. Considering the discrete nature of the eligibility points, we set the threshold midpoints at 965 and 1265, respectively. (Using either 960 or 970 as the threshold points generates similar results.) Results in Figure 2 confirm our observations in Figure 1. Panel A indicates that there was a small (approximately 2.5 percentage points) increase in the proportion of students attending Florida’s public colleges for those students who barely met the FMS eligibility criteria when compared with those who were just below the threshold. Panel B suggests a rather dramatic (approximately 6 percentage points) increase for those who barely met the FAS eligibility criteria when compared with those who were slightly below the 1270 threshold. Both Panels C and D [End Page 129] suggest that students who were slightly above the FMS and FAS threshold points were (much) more likely to attend four-year rather than two-year colleges when compared with those students who just missed the FMS or FAS thresholds.
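As a rough illustration of this kind of specification, the sketch below fits a quadratic RD regression within a ±60-point bandwidth around a cutoff midpoint of 965, with slopes allowed to differ on each side. The data and column names here are simulated and hypothetical; the actual analysis uses the Florida student records.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame: one row per student, with an SAT score and a
# 0/1 indicator for attending a Florida public college.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({"sat": rng.integers(770, 1480, n) // 10 * 10})
df["eligible"] = (df["sat"] >= 970).astype(int)
# Simulated outcome with a small jump (about 2.5 points) at the threshold.
df["fl_public"] = (
    rng.random(n) < 0.45 + 0.025 * df["eligible"] + 0.0001 * (df["sat"] - 965)
).astype(int)

# Center the running variable at the threshold midpoint (965) and keep
# a +/-60-point bandwidth, mirroring the paper's setup.
df["dist"] = df["sat"] - 965
window = df[df["dist"].abs() <= 60].copy()

# Quadratic specification with separate slopes on each side of the cutoff:
# the coefficient on `eligible` is the RD estimate of the jump at the cutoff.
model = smf.ols(
    "fl_public ~ eligible + dist + I(dist**2) + eligible:dist + eligible:I(dist**2)",
    data=window,
).fit(cov_type="HC1")
print(model.params["eligible"])  # estimated discontinuity at the threshold
```

The same skeleton applies at the 1265 midpoint for FAS, and with the four-year enrollment indicator as the outcome.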

Table 2 reports results from a series of statistical models that estimate the effect of FMS and FAS eligibility on the two college choice variables. A variety of specifications are estimated, including linear, quadratic, and cubic terms of the SAT scores (i.e., the distance between the actual SAT scores and the threshold points). These terms are interacted with the dummy variable that indicates whether a score is above or below the threshold, to allow for different slopes below and above the threshold. We also estimated these models with and without control variables (e.g., age, gender, race/ethnicity, free or reduced-price lunch, and cohort). One critical assumption of the regression discontinuity design is that the observed discontinuity comes only from the eligibility rule, not from other variables that may change at the discontinuity point. Graphically speaking, there should be no jumps in those covariates at the cutoff point. Statistically, this can be tested by conducting the regression discontinuity analysis treating these covariates as outcome variables (Van der Klaauw, 2008). Results of this exercise, provided in Appendix Table A, suggest that there are no significant differences in most covariates between those who are barely eligible and those who just miss the threshold. There are only two visible exceptions: the proportion of Asian students is higher above than below the 970 cutoff point, and the proportion of female students is higher above than below the 1270 cutoff point. Based on these results, we did not expect significant changes in the estimates of program effects with or without those covariates. Since we ran a series of regression analyses in this article, we conducted balance checks on independent variables each time our sample changed. Results of these analyses generally suggested no significant differences in our covariates at the cutoff points. These results are available upon request.
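The balance check described here simply reuses the RD specification with each covariate on the left-hand side; in a valid design, the coefficient on the eligibility dummy should be indistinguishable from zero for every covariate. A minimal sketch with simulated data and hypothetical column names:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data; `female` and `free_lunch` are placeholder
# covariate names, not the paper's actual variable names.
rng = np.random.default_rng(1)
n = 4000
df = pd.DataFrame({
    "sat": rng.integers(770, 1480, n) // 10 * 10,
    "female": rng.integers(0, 2, n),
    "free_lunch": rng.integers(0, 2, n),
})
df["dist"] = df["sat"] - 965
df["eligible"] = (df["dist"] >= 0).astype(int)
window = df[df["dist"].abs() <= 60]

for cov in ["female", "free_lunch"]:
    # Same RD regression as before, but with the covariate as the "outcome".
    fit = smf.ols(
        f"{cov} ~ eligible + dist + eligible:dist", data=window
    ).fit(cov_type="HC1")
    print(cov, round(fit.params["eligible"], 4), round(fit.pvalues["eligible"], 3))
```

A significant `eligible` coefficient for any covariate would flag an imbalance of the kind reported for Asian students at 970 and female students at 1270.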

Table 2 reports a number of representative specifications, with attending Florida’s public colleges as the outcome variable. (Full results with combinations of functional forms, covariates, bandwidths, and cohorts are available upon request.) Column (1) reports the linear specification without covariates, based on bandwidths of ±40, ±60, and ±80 around the FMS and FAS eligibility thresholds. When covariates are added to the linear specification in Column (2), there are essentially no changes in the estimates of the program effect, which is consistent with our conclusion that there are no significant imbalances in those covariates at the cutoff points. In Column (3), we relaxed the linearity [End Page 130]

Table 2. College Choice for Students with Only SAT Scores

assumption by adding quadratic terms of SAT scores. These models generated slightly different results than the linear specification. Since adding more polynomial terms generates essentially similar results, with higher-order polynomial terms being statistically insignificant, we used the quadratic model (i.e., Column 3) with covariates as our preferred model. Results in Column (3) are largely consistent with the observations in Figures 1 and 2. There was a small but insignificant increase in the probability of attending Florida’s public colleges due to FMS eligibility. The increase, however, was much larger at the FAS threshold point. On [End Page 131] average, those who barely met the FAS criterion were approximately 6 percentage points more likely to attend Florida’s public colleges than those who barely missed it. Considering the small difference in award magnitude between FMS and FAS, this additional large increase is quite impressive.

The next three columns use four-year public college enrollment as the dependent variable. Here too, different model specifications generate very similar results. Results in our preferred model, i.e., Column (6), indicate that FMS eligibility had a positive and significant effect (on the order of 10 percentage points) on the probability of attending four-year public colleges. With a baseline probability of merely 0.2 (i.e., Figure 1, Panel B indicates that approximately 20% of students who were slightly below the 970 threshold ended up attending four-year colleges), this 10-percentage-point increase represents an approximately 50% increase in the probability of attending four-year vs. two-year colleges. Results in Column (6) further indicate that meeting the FAS criterion added approximately 7 percentage points to the probability of attending four-year vs. two-year colleges.

Results Based on Students with Only ACT Scores

Figure 3 shows the effect of award eligibility on student college choices, based on local regressions with quadratic terms around the cutoff points of 20 and 28 ACT points. We used ACT scores of 19.5 and 27.5 points as the discontinuity points. Panel A indicates that there was a quite large (approximately 10 percentage points) increase in the proportion of students attending Florida’s public colleges for those students who barely met the FMS eligibility criteria when compared with those who were slightly below the threshold. Panel B suggests a rather small (approximately 2 percentage points) increase for those who barely met the FAS eligibility criteria when compared with those who were slightly below the threshold at 28 points. Panel C indicates that students who were slightly above the FMS and FAS thresholds were more likely to attend four-year than two-year colleges when compared with those students who just missed the FMS or FAS thresholds, although the increase was much larger at the FAS cutoff point.

Table 3 provides statistical estimates of the effect of FMS and FAS eligibility on our college choice outcomes. Our preferred model specification in Columns (3) and (6) suggests that at the FMS cutoff point (i.e., 20 ACT), there was a large and significant increase (approximately 10 percentage points) in the proportion of students attending Florida’s public colleges. The increase in the proportion of students who attended [End Page 132] four-year colleges was also positive and significant (about 3–7 percentage points) in most specifications. At the FAS cutoff point (i.e., 28 ACT), the effect on the probability of attending public colleges was positive, but estimates vary by model specification, while the effect on the probability of attending four-year colleges was large and significant, about 10 percentage points.

Table 3. College Choice for Students with Only ACT Scores

Results Based on Students with Both SAT and ACT Scores

For the group of students with both SAT and ACT scores, eligibility can be achieved by reaching either the SAT or the ACT threshold, which poses difficulties in determining the distance between students’ test [End Page 133] scores and the cutoff point when both scores are considered. To resolve this issue, we examined (1) the effect of the SAT cutoff points among those students who did not meet the ACT threshold, and (2) the effect of the ACT cutoff points among those students who did not meet the SAT threshold. We replicated the analysis in the previous two sections for these subsamples of students with both SAT and ACT scores. It is important to note that for this group of students, when one test’s cutoff point is used, the other test becomes an additional covariate. Balance checks on these tests did not reveal significant differences in these additional covariates at the cutoff points.

Figure 3. Regression Discontinuity Estimates for Students with Only ACT Scores

Table 4 contains estimates for those who did not qualify for the award based on ACT scores. (Graphical analyses are not reported here due to space limits but are available upon request.) For those students whose ACT score was below 20 points, there was a positive and significant increase in the proportion of students attending Florida’s public colleges (about 4–6 percentage points, depending on model specification) and in the proportion of students attending four-year colleges (about 6–10 percentage points, depending on model specification) at the SAT threshold for FMS awards. For those students whose ACT score was below [End Page 134] 28 points, there was a positive and significant increase in the proportion of students attending Florida’s public colleges (about 4–7 percentage points, depending on model specification); however, the change in the proportion of students attending four-year colleges at the FAS threshold was not significant.

Table 4. College Choice for Students with SAT and ACT Scores, Based on SAT Eligibility

Table 5 contains estimates for those who did not qualify for the award based on SAT scores. For those students whose SAT score was below 970, there was a small and significant increase in the [End Page 135] proportion of students attending Florida’s public colleges (about 2 percentage points) and in the proportion of students attending four-year colleges (about 4–10 percentage points, depending on model specification) at the ACT threshold for FMS awards. However, for those students whose SAT score was below 1270 points, there was no statistically significant change in either college choice outcome at the ACT threshold points.

Table 5. College Choice for Students with SAT and ACT Scores, Based on ACT Eligibility

[End Page 136]

Retaking Tests

Because the SAT is a more desirable running variable than the ACT in the regression discontinuity design, our analysis in this section focuses on the first group of students, who took SAT tests only. Since the award eligibility criteria are known a priori, students may respond to these criteria by retaking (or not retaking) SAT tests, thus creating imbalance in unobserved student characteristics. When these unobserved characteristics are related to our outcome variables, they create bias in our estimated program effects. To be more specific, if retaking college entrance tests is indicative of students’ intention to attend in-state colleges, especially four-year colleges, then ignoring these unobserved characteristics may lead us to overestimate the program effects.

A simple yet effective way to evaluate whether student sorting has occurred is to visually examine the distribution of students around our cutoff points. Because students who did not reach the SAT threshold in their first attempts may switch to the ACT in their second attempts, and vice versa, our strategy in this analysis is to focus on those students who took only SAT tests, so that the distributions of test scores across different attempts can be compared. For this group of students, we know how many times each student took the SAT and her score on each attempt. Among all high school seniors who took only SAT tests, slightly over half (52.5%) took the test only once, another third (33.9%) took it twice, 11.0% three times, 2.1% four times, and about 0.5% five or more times. Figure 4 presents the distribution of these students by SAT score. The lighter line represents the distribution of students by SAT scores on their first attempt. Not surprisingly, there are no visible jumps at either 970 or 1270 SAT points. A formal McCrary (2008) test fails to reject the null hypothesis that there is no jump in the distribution at these two points. The thicker line represents the distribution of students by their highest SAT scores after all attempts. As expected, there are two visible jumps at 970 and 1270 SAT points. Apparently, many students who were below those threshold points took the test two or more times, and many of them successfully moved above those thresholds. A McCrary (2008) test rejects the hypothesis that there is no jump at SAT 970 (p &lt; 0.01) and at 1270 (p &lt; 0.05). Some descriptive statistics are informative here. For example, among those students in our sample who took only SAT tests, a total of 12,093 students scored between 910 and 960 (i.e., within the 60-point bandwidth below 970) on their initial try. Among this group, 6,298 (or 52%) chose to take the test a second time. Slightly over half (51.2%) of this retaking group scored at or above 970 on their second try. Similarly, some students [End Page 137] went on to take the test again. As a result, 41.8% of the initial group of 12,093 were able to score at or above 970.

Figure 4. Distribution of Students by SAT Scores

The apparent jump in the density of observations at the cutoff points calls into question the unbiasedness of the RD estimates in our analysis. Fortunately, the rich data we have on student test taking allow us to gain a better understanding of whether and how the program effects may be related to student test-taking behaviors. Consequently, we extend our analysis by adopting two approaches. In the first approach, because students presumably make decisions about whether to retake SAT tests based on their prior test scores, it is possible to use prior test scores in a fuzzy RD design. In a fuzzy RD design, the local average treatment effect is estimated by using the discontinuity in the probability of treatment at the cutoff point. In this analysis, we estimate a first-stage equation in which first test scores are used as an instrumental variable to predict eligibility based on final scores, and a second-stage equation similar to equation (1) with fitted probabilities from the first-stage estimation. It is noteworthy that because aid eligibility (i.e., the outcome variable) in the first-stage equation is based on final SAT scores and not on actual awards, this approach does not resolve the misclassification problem due to the unavailability of GPA and award data. [End Page 138] Table 6 replicates our preferred specifications in Columns (3) and (6) of Table 2 using a fuzzy RD design and yields similar results. That is, FMS has a small effect on the probability of attending Florida’s public colleges but a large and significant effect on the probability of attending four-year vs. two-year institutions. While Table 2 indicates a positive effect of FAS on the probability of attending Florida’s public institutions, this effect became very small under instrumental variable estimation. The effect of FAS on the probability of attending four-year vs. two-year colleges remained positive and significant.

Table 6. Effect of Bright Futures Program, Instrumental Variable Estimates
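The two-step logic of the fuzzy RD design can be sketched as follows, on simulated data with hypothetical variable names: clearing the cutoff on the first attempt serves as the instrument for final-score eligibility. A real application would use a proper IV estimator so the second-stage standard errors are valid; this manual two-step OLS only recovers the point estimate.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data: first-attempt SAT, final (highest) SAT after possible
# retakes, and a 0/1 outcome for attending a four-year college.
rng = np.random.default_rng(3)
n = 8000
first = rng.integers(860, 1080, n) // 10 * 10
retake_gain = np.where(rng.random(n) < 0.5, 10 * rng.integers(0, 8, n), 0)
final = first + retake_gain

df = pd.DataFrame({"dist1": first - 965})
df["z"] = (df["dist1"] >= 0).astype(int)        # instrument: first score clears 970
df["eligible"] = (final >= 970).astype(int)     # treatment: final score clears 970
df["four_year"] = (rng.random(n) < 0.2 + 0.1 * df["eligible"]).astype(int)
window = df[df["dist1"].abs() <= 60].copy()

# First stage: final-score eligibility predicted from the first-attempt
# discontinuity, with slopes allowed to differ on each side.
first_stage = smf.ols("eligible ~ z + dist1 + z:dist1", data=window).fit()
window["eligible_hat"] = first_stage.fittedvalues

# Second stage: plug in fitted eligibility; its coefficient is the fuzzy
# RD (local average treatment effect) estimate at the cutoff.
second_stage = smf.ols("four_year ~ eligible_hat + dist1", data=window).fit()
print(second_stage.params["eligible_hat"])
```

Because the simulated true effect is 0.1, the recovered coefficient should land near that value; with the real data, the analogous estimates are those reported in Table 6.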

In the second approach, we estimate the program effect for students with the same number of test attempts, assuming that the number of test attempts reflects certain unobserved characteristics. Since the vast majority of students took the SAT fewer than four times, we estimate separate RD models for those students who took the SAT once, twice, and three or more times. Comparing the estimated effect across students with different numbers of attempts is likely to generate insights into how program effects are affected by these unobserved characteristics. This approach, however, does not resolve the problem of imbalanced sample sizes at the cutoff points, as indicated by separate McCrary tests. In addition, analyses by subgroups of students may limit the external validity of these findings. Results of this exercise are reported in Table 7. Models presented in this table are our preferred model, i.e., with quadratic terms and all control variables, as in Columns 3 and 6 of Table 2, but estimated separately by number of test attempts. Comparing the estimated effects in Table 2 with those in Table 7 provides additional insights into how the effects are related to test attempts. Results indicate that for those students who were around the FMS eligibility threshold (i.e., 970), the estimated effects on attending Florida’s public colleges were small and insignificant regardless of the number of test attempts; however, the [End Page 139] effects on attending four-year vs. two-year institutions were largest for those who took the SAT three or more times. For example, while Table 2 suggests that overall there was a significant increase (on the order of 10 percentage points) in the probability of attending four-year institutions for all students, the estimated effects are about 8 percentage points for those who took the SAT once or twice and increase to about 15 percentage points for those with three or more attempts. These results make perfect sense, because those who took the SAT multiple times are most likely the students who wanted to attend four-year colleges.

Table 7. Effect of Bright Futures Program by Number of Test Attempts

[End Page 140]

Results related to the effect of FAS also yield some interesting findings. While our results in Table 2 show that, overall, those who barely met the FAS criterion were approximately 6 percentage points more likely to attend Florida’s public colleges than those who barely missed it, the positive effects occurred only among those who took the SAT more than once, especially those who took it twice. In contrast, while results in Table 2 suggest that meeting the FAS criterion added approximately 7 percentage points to the probability of attending four-year vs. two-year colleges, this improved probability is mainly due to those who took the SAT once. These results suggest that while FAS has significant effects on the two college choice variables in this study, its effects can be moderated by some unobserved student characteristics.

Jumps in the distribution of students around the cutoff points attest to student sorting in response to preannounced eligibility criteria. Although students decide how many times to take college entrance exams, they have no control over their test scores. Variations in test scores, which lead to different award outcomes, allow us to examine the program effect among students with similar exhibited test-retaking behaviors. This further analysis by number of test attempts suggests that although program effects vary (i.e., the program matters more to some students than to others), the overall effect of the Bright Futures program is not an artifact of student sorting. In fact, the average program effect in Table 7, weighted by the proportion of students in each group, is very similar to the overall program effect in Table 2.

Summary and Discussion

This study evaluates the effect of Florida’s Bright Futures program on student college choices. In general, there were significant increases in the probability of attending Florida’s public colleges and in the probability of choosing four-year public colleges for those students who barely met the program eligibility criteria when compared with those who barely missed them. The evidence presented in this analysis indicates that the Bright Futures program significantly altered students’ college choices, both in terms of attending in-state public colleges and in terms of attending four-year public colleges. Although this finding held at both the FMS and FAS cutoff points and for students who took SAT and/or ACT tests, the magnitude of the program effect varied along these factors.

For those students with only SAT scores, the FMS award did not seem to have a large impact on students’ probability of choosing in-state public colleges, while the FAS award increased this probability by approximately 6 percentage points. Both FMS and FAS awards, [End Page 141] however, significantly increased the probability of attending four-year institutions as opposed to two-year institutions. For those students with only ACT scores, the FMS award had large effects on the probability of choosing in-state public colleges; the effect of the FAS award, while positive, varied by model specification. Both FMS and FAS awards had positive and significant effects on the probability of choosing four-year over two-year institutions, although the FAS award appears to have had a larger impact. Finally, for those students with both SAT and ACT scores, qualifying for FMS on one test but not the other seems to have had positive and significant effects on both college choice outcomes; however, qualifying for FAS on one test but not the other in general yielded nonsignificant results. These varying effects of award eligibility among different groups of students suggest that test taking could be endogenous to college decisions. For example, students who took both tests were more likely to have specific college plans and were thus less likely to be affected by financial aid.

Results from this analysis have several important policy implications. The first relates to the use of SAT and ACT scores as eligibility criteria for merit-based financial aid. The distribution of SAT and ACT scores for those students who took both tests suggests that the eligibility rule for the Bright Futures program in general favors ACT test takers. The much smaller effect of the FMS award on the probability of attending in-state public colleges at the SAT threshold than at the ACT threshold is a clear indication that students who are close to but above 970 SAT have better college options outside of Florida’s public colleges than those who are close to but above 20 ACT. Subsequent changes in Bright Futures eligibility criteria have addressed this problem. For example, the eligibility rule was changed to 980 SAT or 21 ACT in the 2011–12 academic year, then to 1020 SAT or 22 ACT in 2012–13, and finally to 1170 SAT or 26 ACT starting in the 2013–14 academic year. These rules appear to provide a more even opportunity for SAT and ACT test takers. For example, under the 980 SAT or 21 ACT rule, 42.81% of the students in Table 1 are not eligible by SAT score and 42.09% are not eligible by ACT score. These proportions are 51.27% and 51.08% under the 1020 SAT or 22 ACT rule, and 77.74% and 77.72% under the 1170 SAT or 26 ACT rule. Although these more recent rules have dramatically reduced the coverage of the Bright Futures program, they treat SAT and ACT test takers far more evenly. The analysis in this study, however, does not consider the subtle differences between these two tests. It is well known that the SAT was designed as an aptitude test whereas the ACT was designed as an achievement test. If the distinction between aptitude [End Page 142] and achievement is real, policymakers may be able to design eligibility rules in a way that encourages students to work harder. For example, a relatively easier ACT eligibility rule (such as SAT 970 paired with ACT 20) would encourage some students to study hard to achieve award eligibility by the ACT route. Since most state merit-aid programs have both SAT and ACT eligibility criteria, it is important to consider fairness and other policy ramifications when setting SAT and ACT thresholds.
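Checking how evenly a joint rule treats the two tests amounts to tabulating, for each rule, the share of test takers below each threshold. A minimal sketch on simulated scores (the published percentages come from the paper’s Table 1 sample, not from this simulation):

```python
import numpy as np
import pandas as pd

# Hypothetical SAT/ACT scores for dual test takers; distribution
# parameters are illustrative only.
rng = np.random.default_rng(4)
n = 10000
scores = pd.DataFrame({
    "sat": rng.normal(1010, 130, n).round(-1),
    "act": rng.normal(21, 4.5, n).round(),
})

# The three post-2011 Bright Futures rules discussed in the text.
rules = [(980, 21), (1020, 22), (1170, 26)]
for sat_cut, act_cut in rules:
    below_sat = (scores["sat"] < sat_cut).mean()
    below_act = (scores["act"] < act_cut).mean()
    # A rule is "even" when the two tests exclude similar shares of takers.
    print(f"SAT {sat_cut}/ACT {act_cut}: "
          f"{below_sat:.1%} miss by SAT, {below_act:.1%} miss by ACT")
```

With the actual score distribution, this tabulation reproduces the paired percentages quoted above (e.g., 42.81% vs. 42.09% under the 980/21 rule).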

Second, the Bright Futures program has a tiered structure, with the FAS award covering 100% of tuition and required fees while the FMS award covers 75% of the total tuition bill. Given the relatively small difference between the two levels (i.e., 25% of the total tuition bill), one would expect only marginal positive effects of the FAS award when comparing those who barely meet the FAS eligibility rule with those who barely miss FAS but are well above the FMS eligibility criteria. Results in this analysis, however, indicate that the effect of this additional 25% award is substantial, suggesting that the effect of financial aid may not be proportional to the magnitude of aid. It appears that “free college education” has a special appeal to students and their families beyond the monetary value of the FAS award. Recognizing this, policymakers may design alternative aid structures to maximize positive policy outcomes while keeping the same level of funding. For example, one could reduce the FMS award level to 50% of tuition while making the FAS eligibility criteria easier to achieve, which would help retain more students on the upper tail of the SAT/ACT distribution.

Finally, while our results indicate that the Bright Futures program is quite successful in retaining students in-state and encouraging students to attend four-year institutions, there might be some unintended consequences of altered college choices. In a broad sense, students might be financially incentivized to make less than optimal choices. For example, had it not been for the merit-aid program, many students in the upper tail of the SAT/ACT distribution would have attended out-of-state institutions or in-state private institutions, which might have better served their interests in the long run. As another example, the result that students who qualify for the Bright Futures program are more likely to attend in-state four-year institutions suggests that there might be some student sorting occurring among different types of institutions. This might lead to undesirable educational outcomes such as decreased diversity at four-year institutions and reduced student transfer from two-year to four-year institutions. Finally, due to its broad-based approach, more students in the middle of the SAT/ACT distribution are now attending four-year colleges than would have been the case without the financial aid incentive. It is not clear whether these students will be able to renew their [End Page 143] awards in subsequent years, or whether their initial choice (i.e., four-year vs. two-year colleges) will enhance or hurt their long-term educational attainment.

Liang Zhang

Liang Zhang is an Associate Professor of Higher Education and Labor Studies at the Pennsylvania State University; lxz19@psu.edu.

Shouping Hu

Shouping Hu is a Professor of Higher Education at Florida State University.

Liang Sun

Liang Sun is a doctoral student at The Pennsylvania State University.

Shi Pu

Shi Pu is a doctoral student at The Pennsylvania State University.

References

Abraham, K. G., & Clark, M. A. (2006). Financial aid and students’ college decisions: Evidence from the District of Columbia tuition assistance grant program. Journal of Human Resources, 41(3), 578–610.
Avery, C., & Hoxby, C. M. (2004). Do and should financial aid packages affect students’ college choices? In C. M. Hoxby (Ed.), College choices: The economics of where to go, when to go, and how to pay for it (pp. 239–302). Chicago: University of Chicago Press.
Bettinger, E. (2004). How financial aid affects persistence. In C. Hoxby (Ed.), College choices: The economics of where to go, when to go, and how to pay for it (pp. 207–238). Chicago: University of Chicago Press.
Bound, J., & Turner, S. (2002). Going to war and going to college: Did World War II and the G.I. Bill increase educational attainment for returning veterans? Journal of Labor Economics, 20(4), 784–815.
The College Board (2011). Test characteristics of the SAT: Reliability, difficulty levels, completion rates. Retrieved from http://media.collegeboard.com
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis for field settings. Skokie, IL: Rand McNally.
Cornwell, C. M., Mustard, D. B., & Sridhar, D. J. (2006). The enrollment effects of merit-based financial aid: Evidence from Georgia’s HOPE scholarship. Journal of Labor Economics, 24(4), 761–786.
DesJardins, S. L., Ahlburg, D. A., & McCall, B. P. (2006). An integrated model of application, admission, enrollment, and financial aid. The Journal of Higher Education, 77(3), 381–429.
DesJardins, S. L., & McCall, B. P. (2014). The impact of the Gates Millennium Scholars Program on college and post-college related choices of high ability, low-income minority students. Economics of Education Review, 38, 124–138.
Dorans, N. J. (1999). Correspondences between ACT and SAT I scores. ETS Research Report Series, 1999(1), i–18.
Doyle, W. R. (2006). Adoption of merit-based student grant programs: An event history analysis. Educational Evaluation and Policy Analysis, 28, 259–285.
Dynarski, S. (2000). Hope for whom? Financial aid for the middle class and its impact on college attendance. National Tax Journal, 53(3), 629–661.
Dynarski, S. (2004). The new merit aid. In C. M. Hoxby (Ed.), College choices: The economics of where to go, when to go, and how to pay for it (pp. 63–100). Chicago: University of Chicago Press.
Dynarski, S. (2008). Building the stock of college-educated labor. Journal of Human Resources, 43, 576–610.
Fan, J., & Gijbels, I. (1996). Local polynomial modelling and its applications. London: Chapman & Hall. [End Page 144]
Hu, S., Trengove, M., & Zhang, L. (2012). Toward a greater understanding of the effects of state merit aid programs: Examining existing evidence and exploring future research direction. In J. Smart (Ed.), Higher education: Handbook of theory and research, Vol. 27 (pp. 291–334). The Netherlands: Springer.
Imbens, G., & Lemieux, T. (2008). Regression discontinuity designs: A guide to practice. Journal of Econometrics, 142(2), 615–635.
Kane, T. J. (2003). A quasi-experiment estimate of the impact of financial aid on college-going (Working Paper 9703). National Bureau of Economic Research. Retrieved from http://www.nber.org/papers/w9703
Kim, J., DesJardins, S. L., & McCall, B. P. (2009). Exploring the effects of student expectations about financial aid on postsecondary choice: A focus on income and racial/ethnic differences. Research in Higher Education, 50, 741–774.
Linsenmeier, D. M., Rosen, H. S., & Rouse, C. E. (2006). Financial aid packages and college enrollment decisions: An econometric case study. The Review of Economics and Statistics, 88(1), 126–145.
McCall, B. P., & Bielby, R. (2012). Regression discontinuity design: Recent developments and a guide to practice for researchers in higher education. In J. Smart & M. Paulsen (Eds.), Higher education: Handbook of theory and research (Vol. 27, pp. 249–290). The Netherlands: Springer.
McCrary, J. (2008). Manipulation of the running variable in the regression discontinuity design: A density test. Journal of Econometrics, 142(2), 698–714.
Orsuwan, M., & Heck, R. H. (2009). Merit-based student aid and freshman interstate college migration: Testing a dynamic model of policy change. Research in Higher Education, 50, 24–51.
Scott-Clayton, J. (2011). On money and motivation: A quasi-experimental analysis of financial incentives for college achievement. The Journal of Human Resources, 46(3), 614–646.
Seftor, N., & Turner, S. (2002). Back to school: Federal student aid policy and adult college enrollment. Journal of Human Resources, 37, 336–352.
Trochim, W. (1984). Research design for program evaluation: The regression discontinuity approach. Beverly Hills, CA: Sage.
Van der Klaauw, W. (2002). Estimating the effect of financial aid offers on college enrollment: A regression-discontinuity approach. International Economic Review, 43, 1249–1287.
Van der Klaauw, W. (2008). Breaking the link between poverty and low student achievement: An evaluation of Title I. Journal of Econometrics, 142(2), 731–756.
Zhang, L. (2011). Does merit-based aid affect degree production in STEM fields? Evidence from Georgia and Florida. The Journal of Higher Education, 82(4), 389–415.
Zhang, L., Hu, S., & Sensenig, V. (2013). The effect of Florida’s Bright Futures Program on college enrollment and degree production: An aggregated-level analysis. Research in Higher Education, 746–764.
Zhang, L., & Ness, E. (2010). Does state merit-based aid stem brain drain? Educational Evaluation and Policy Analysis, 32, 143–165.
Table A. Regression Discontinuity Estimates on Student Characteristics, Based on Students with Only SAT Scores

Additional Information

ISSN: 1538-4640
Print ISSN: 0022-1546
Pages: 115–146
Launched on MUSE: 2015-12-09
Open Access: No
Archive Status: Archived