
This paper assesses the potential for several prominent technological innovations to promote equality of educational opportunities. We review the history of technological innovations in education and describe several prominent innovations, including intelligent tutoring, blended learning, and virtual schooling.

Keywords

education technology, online learning, virtual schooling

The 1966 release of the Coleman Report (Coleman et al. 1966) is widely recognized as a pivotal moment in the history of education in the United States. The report documented vast inequities in academic achievement between white and nonwhite children. Coleman and his colleagues found a great deal of racial segregation across schools along with important differences in the family resources (including factors such as parental education and household composition) available to white and nonwhite children. On the other hand, they uncovered substantially fewer differences in school resources (for example, pupil-teacher ratio and school facilities) by race. The analysis conducted by the researchers suggested that the variation in student performance was driven primarily by socioeconomic conditions in families and neighborhoods. Schools—and thus any differential resources across schools—explained relatively little of the achievement differences.

The report spurred new research and policy action aimed at improving school productivity and attempting to close the achievement gap. Although there has been progress on some fronts, many of the key findings of the Coleman Report remain true today, as is highlighted in other papers in this issue. Schools made rapid progress toward racial desegregation in the 1960s and 1970s, but that progress has either stalled or reversed since the 1980s, depending on how segregation is measured (Reardon and Owens 2014). While achievement gaps have narrowed, African American and Latino children still score roughly 13 percent lower than their Caucasian and Asian peers on standardized exams.1 In an effort to overcome continued inequalities, policies have cycled in and out of favor, much like a pendulum swinging. The emphasis on test-based accountability (for example, high school exit exams) in the early 1970s reappeared several decades later in the federal accountability policy No Child Left Behind, enacted in 2002. The focus on rigorous standards in the 1980s (such as the push for states to require high school students to complete at least three years of math and science) is reminiscent of the current focus in the Common Core. And today’s push to explore new educational technologies recalls earlier efforts to introduce computers into schools (Christensen, Johnson, and Horn 2010).

New technologies are not new. Blackboards were new before they were replaced by whiteboards. Slates were new, then replaced by paper and now, to some extent, by computers and tablets. Filmstrips were new, then replaced by DVDs and now by web-accessed videos. In each case, the new technology brought both costs and benefits. Often it brought little change in teaching or learning. In his influential book Oversold and Underused: Computers in the Classroom, Larry Cuban (2003) argues that teachers and students use computers in schools far less frequently than commonly assumed and that the presence of computers has not changed the traditional instructional paradigm of whole-class, teacher-centered instruction. When teachers use computers, it is primarily for mundane tasks. Students write essays using word processors, practice math problems using simplistic software, or use the Internet to do web-based research. Teachers use computers to record grades, prepare lessons, and read email (Cuban 2003; Gray et al. 2010).

However, recent technological innovations have expanded the capabilities of digital learning tools in ways that boosters argue offer new potential to “disrupt” the provision of education and reduce disparities in educational opportunities (Christensen, Johnson, and Horn 2010). First, the increasing speed and availability of Internet access can reduce many of the geographic constraints that have disadvantaged poor students. Students can now access online videos that provide instruction on a wide variety of topics at various skill levels and participate in real-time video conferences with teachers or tutors located a state (or even a continent) away.2 This technology has even expanded opportunities for the long-distance professional development of teachers, enabling novice teachers to receive mentorship from master teachers regardless of distance (Dede 2006).

Second, these technologies scale easily, so that innovations (or even good curricula) can reach more students. Much like a well-written textbook, a well-designed educational software application or online lesson can reach students not just in a single classroom or school but across a state or country.

Third, advances in artificial intelligence technology now allow teachers to differentiate instruction, providing extra support and developmentally appropriate material to students whose knowledge and skills are far below grade-level norms. The latest “intelligent” tutoring systems are able not only to assess a student’s current weaknesses but also to diagnose why the student is making specific errors (Graesser, Conley, and Olney 2012). Related to this development, the explosion of “big data” can, in theory, allow researchers and program developers to utilize the experience of thousands or even millions of learners to determine more effective instructional approaches—again tailored for students with very particular needs.3

Although technologies such as virtual instruction and the suite of programs known collectively as intelligent tutoring offer great promise, they are not guaranteed to improve educational equality. Use of these technologies often reduces oversight of students, and that can be particularly detrimental for children who are less motivated or who receive less structured educational supports at home. These technologies may also be less effective in engaging reluctant learners in the way a dynamic and charismatic teacher can, suggesting that even if educational technology improves quality overall, any “peak” education experience it provides may fall short of a “peak” face-to-face experience. Perhaps more importantly, technologies such as intelligent tutoring and systems that blend online and face-to-face (FtF) instruction are notoriously difficult to implement well. There is a substantial risk that they could be ineffective or even harmful in places that lack the capacity to implement the technologies with fidelity.

In this paper, we assess the potential for these “next generation” technologies to promote equality of educational opportunities. To begin, we focus on virtual instruction, which is arguably the most visible and controversial of the new technologies. Utilizing detailed administrative data from Florida, we describe which types of students are most likely to take virtual courses, and how students who take virtual courses fare in comparison with their peers taking FtF courses. We then discuss the theory behind and evidence for intelligent tutoring systems. In the final section, we discuss the implications of the findings reported here for education policy in the future.

VIRTUAL INSTRUCTION

One of the most visible examples of technology-aided learning involves virtual course–taking. An estimated 1.5 million K–12 students participated in some online learning in 2010 (Wicks 2010), and online learning enrollments are projected to grow in future years (Picciano et al. 2012; Watson et al. 2012).4 Although full-time virtual schools have grown in recent years, the vast majority of students participating in online instruction are part-time—that is, they are enrolled in a traditional brick-and-mortar school but take one or more classes online. Typically, these classes are asynchronous: students and teachers are not communicating with each other in real time through video conferencing technology. Students often take these courses outside of school (for example, at home or in a public library), although in recent years many schools have allowed students to take online courses at school during the day. Note that the online instruction we describe here is distinct from blended learning models, which combine online and FtF instruction (discussed later).

How Online Instruction Might Influence Student Outcomes

Online classes can affect students’ outcomes either by affecting their access to courses, and thereby changing their choice of courses, or by affecting the quality of the educational environment they experience. Access to online courses may change the courses that students are able to take and thus their progress through school in terms of both their accumulation of credits and the types of classes they complete. Students may benefit from being able to take additional courses online during the school year or during the summer, either for catchup or for enrichment. With regard to enrichment, smaller and poorer high schools tend to have fewer Advanced Placement (AP) offerings, elective courses, and foreign language courses compared to larger schools with better resources (Barker 1985; Pufahl and Rhodes 2011). As discussed in other papers in this issue, this raises several concerns. First, the lack of advanced-level course availability has implications for students in low-income and minority schools when they transition to college (Schneider and Saw, this issue). Second, even when such courses are available, social boundaries may stymie nondominant groups’ participation in them (Carter, this issue). Access to virtual courses could help alleviate both of these concerns. With regard to catchup, students who fail a course during one school year may opt to take that course online in lieu of attending summer school or repeating the course the following school year (Cooper et al. 2000; Watson and Gemin 2008). Moreover, virtual schooling can provide some consistency of course access for highly mobile populations or students who must spend time away from their traditional brick-and-mortar school because of health, incarceration, or other personal situations.

The best evidence on whether simply improving access to different courses through virtual schools affects students’ academic outcomes comes from a large-scale random assignment study carried out in Maine and Vermont (Heppen et al. 2012). Sixty-eight schools that had not historically offered Algebra I to eighth-graders were randomly assigned to either a treatment group, which was given access to an online Algebra I course, or a control group, which did not receive access. Algebra-ready students in treated schools showed improvements on test scores and took more advanced courses in high school (Heppen et al. 2012). Although these results are encouraging, this efficacy study took place under idealized conditions—selected students were particularly advanced, and virtual classes were held during the school day with an on-site proctor who, in 80 percent of the schools, was a math teacher.

As described earlier, online instruction may also influence the quality of the educational environment in several ways. Individuals teaching online courses may be more or less effective than their counterparts teaching FtF courses. One’s peers in an online course may be different than in an FtF course, and perhaps more importantly, it seems likely that peer effects would be less pronounced in an online setting. Finally, it is likely that curricular and instructional approaches differ, given both the constraints and opportunities of online courses relative to those in traditional classrooms. The online platform may allow for course characteristics that are simply not possible in the FtF environment. For instance, students can work at their own pace (Anderson 2008), and if they do not understand key concepts in lectures or become distracted, they can replay the lectures to bolster their understanding. Moreover, the setup of online courses may allow the same material to be presented in multiple ways to best match a student’s learning style.

The online platform may also provide opportunities for planning, oversight, and uniformity that are far more difficult to achieve in FtF classrooms. Curriculum specialists can plan the course, including quite detailed scripts for teachers. Teachers can implement these specified curricula, focusing their time and skills on responding to students’ questions and needs. For this reason, the quality of courses may be much more homogeneous in virtual settings than in brick-and-mortar classes, though course quality may depend on the quality of the curriculum planning team. Insofar as we are concerned about the potential for the “teaching to the test” behaviors discussed by Jennifer Jennings and Douglas Lauen (this issue), this homogeneity makes virtual courses less likely to be affected by local accountability pressures. On the other hand, FtF courses provide opportunities for interaction with peers and teachers that are not available in the online environment. The proximity of teachers and students in FtF settings may also make it easier for teachers to monitor students’ work, keep them on task, or read facial cues to determine whether students are confused about course concepts (Anderson 2008). Classes in one environment may ultimately meet the same needs as classes in the other, but the process may be more difficult.

The extent to which a student benefits from a virtual class is likely to depend on the characteristics of the individual student. For instance, the benefits of being able to repeat material at a slower pace might be more pronounced for low- achieving students. Non-native English speakers might benefit from online instruction that allows them to pause and look up unfamiliar words. For each of these groups, plausible stories could be told in the opposite direction as well.

The utility of taking online courses is also likely to vary based on the counterfactual conditions that individual students would experience in the absence of the virtual option. For instance, we might expect that even if there were no differences across sectors in average teacher (or peer) quality, the option to take a virtual course with an average teacher (or an average-ability peer group) might be more advantageous to a student attending a brick-and-mortar school with very low-quality teachers (or very low-achieving peers). This would suggest that a potential benefit of the expansion of virtual courses could be to reduce the inequality in educational opportunities for more affluent versus poorer students, given that past research indicates that high-poverty schools tend to be staffed by teachers with less experience and lower value-added scores (Sass et al. 2012; Boyd et al. 2008) and that, within schools, classes with a higher share of low-achieving, poor, and minority students are most likely to receive novice teachers (Kalogrides and Loeb 2013).

Despite heated debate in the policy realm, there has been little rigorous research examining the effect on student achievement of online courses in comparison to FtF courses. The majority of research on the impacts of online course–taking comes from studies at postsecondary institutions. There have been several careful randomized controlled trials comparing learning for college students in FtF classes versus hybrid delivery models. These studies tend to find either null results (Bowen et al. 2014) or modest benefits to FtF instruction (Figlio, Rush, and Yin 2013; Joyce et al. 2015).5 These studies, however, look only at a limited range of classes (for example, one section of statistics or economics) and tend to be based in selective institutions. Other studies use quasi-experimental methods to explore the impact of virtual course–taking at less elite, broad-access institutions. Studies of public community colleges in a variety of states (Xu and Jaggars 2011, 2013; Hart, Friedmann, and Hill 2014; Streich 2014) and of for-profit broad-access institutions operating nationally (Bettinger et al. 2015) consistently find poorer outcomes for students who take online courses. However, given the greater latitude that postsecondary instructors generally have to develop courses, online course–taking may have different effects for K–12 students.

Unfortunately, little evidence of the effects of online learning exists at the K–12 level. Indeed, a recent meta-analysis of online learning found only five studies comparing students in K–12 online courses to an FtF alternative that used an experimental or quasi-experimental design and reported sufficient information to be included in a meta-analysis (Means et al. 2010). All of these studies examine blended rather than fully online instruction. The authors find no significant differences between blended and FtF alternatives in K–12 settings, a finding echoed in other meta-analyses with slightly less stringent inclusion criteria (Cavanaugh et al. 2004).

The best test of whether online coursework boosts student learning for K–12 students comes from a randomized controlled trial for students taking Algebra I. Cavalluzzo et al. (2012) compare a hybrid Algebra I curriculum implemented in thirteen high schools in Kentucky to an FtF curriculum and find no evidence of a difference in learning. Although the Kentucky study provides compelling evidence with regard to this particular course and context, the results from this context may not generalize. We return to the evidence on blended learning models in the following section.6

Online Instruction in Florida

To shed light on some of these unanswered questions, we examine virtual course–taking in Florida. Florida is a sensible location for studying online learning because it is one of only a few states that require students to take at least one online course in order to receive a high school diploma (Watson et al. 2012). This requirement can be met through an online course offered by the Florida Virtual School (FLVS, a virtual education provider approved by the State Board of Education and the largest virtual course provider in Florida), a high school, or an online dual-enrollment course. Florida’s virtual schools are subject to many of the same regulations that FtF schools face. In order for the state to pay for classes, curricula in virtual schools must be aligned to the state’s standards and teachers must be fully credentialed in Florida. Also, like brick-and-mortar schools, virtual schools that provide state-funded full-time education for students receive grades through Florida’s accountability system.7

The vast majority of students taking online courses in Florida do so through FLVS, a public school founded in 1997 that provides courses for both full-time and part-time online students. Most commonly, students access online courses at home or in another location with broadband access, such as a public library. FLVS fills courses on a first-come, first-served basis. Each course in FLVS has a maximum enrollment size, and FLVS fills courses sequentially—that is, students are assigned to a particular teacher until that teacher reaches his or her enrollment cap, at which point FLVS opens another “section” of the course (Teresa King, FLVS, personal communication, July 2013). Students are accepted on a rolling basis, and they can work at their own pace. Given the flexibility of course pacing, FLVS instructors tend to be “on call” from 8:00 AM to 8:00 PM, Monday through Sunday (Bakia et al. 2011). Indeed, FLVS teachers are required to respond to any student query within twenty-four hours and to return completed assignments with feedback within forty-eight hours (Teresa King, FLVS, personal communication, July 2013).
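To make the sequential section-filling rule concrete, the short sketch below implements first-come, first-served assignment with an enrollment cap. It is an illustration only: the cap value and names are invented, not drawn from FLVS documentation.

```python
from typing import Dict, List

CAP = 25  # hypothetical per-teacher enrollment cap

def assign_sections(students: List[str], cap: int = CAP) -> Dict[int, List[str]]:
    """Assign students in arrival order, opening a new section once one fills."""
    sections: Dict[int, List[str]] = {}
    for i, student in enumerate(students):
        section = i // cap  # sections open sequentially as caps are reached
        sections.setdefault(section, []).append(student)
    return sections

roster = assign_sections([f"student_{n}" for n in range(60)])
print({k: len(v) for k, v in roster.items()})  # {0: 25, 1: 25, 2: 10}
```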

FLVS teachers are not unionized, but they are paid on the basis of a traditional salary schedule comparable to those of other public districts, with salary increases for experience and additional education. FLVS teachers complete a one-week in-person training session upon induction and receive thirty hours of additional training (delivered virtually) each year.

FLVS maintains tight control over the curriculum presented to students (Teresa King, personal communication, August 9, 2013). The school’s Curriculum Services Department designs the curriculum and student assignments, and teachers have little latitude to alter the assignments. All courses include discussion-based assignments in which students talk with teachers (by phone) about the material so that the instructor can assess student understanding and clarify questions in real time.8 All students in the same course take centrally designed exams in which questions drawn from a test bank are randomly presented to students. Final exams are proctored only if teachers raise concerns about academic integrity.

In recent years, many public schools have begun offering virtual courses during the school day within the school building (for example, in a computer lab or school library). In such cases, the course is described as a virtual learning lab (VLL).

Data and Sample

We draw on data from two main sources for our examination of virtual course–taking in Florida. To characterize growth across all sectors, we draw on student enrollment data from the Florida Virtual School. All other tables rely on data from the Florida Department of Education (FDOE). Using FDOE data, we assembled a student-level longitudinal data set for all public school students in Florida from 2005–2006 through 2013–2014. Because our data are drawn from high school transcripts, we limit our sample to students in grades 9 to 12 who attend traditional, charter, or magnet public schools.9 Because a subset of the variables we construct rely on next-year outcomes, many of our results focus on the 2012–2013 school year. In that year, we observe 6,501,111 course enrollments taken by 801,480 students.
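As a concrete illustration of these sample restrictions, the sketch below filters a toy stand-in for the course-level file. The FDOE extract is restricted-use, so the column names and records here are hypothetical, not the authors’ actual data.

```python
import pandas as pd

# Toy stand-in for the restricted-use FDOE extract: one row per course enrollment.
courses = pd.DataFrame({
    "student_id":  [101, 101, 102, 103],
    "grade_level": [9, 9, 12, 8],
    "school_type": ["traditional", "traditional", "magnet", "traditional"],
    "school_year": ["2012-13", "2012-13", "2012-13", "2012-13"],
})

# Restrict to grades 9-12 in traditional, charter, or magnet public schools.
sample = courses[
    courses["grade_level"].between(9, 12)
    & courses["school_type"].isin(["traditional", "charter", "magnet"])
]

# Focus on 2012-2013, the most recent year with observable next-year outcomes.
sample_1213 = sample[sample["school_year"] == "2012-13"]

# In the real data: 6,501,111 course enrollments taken by 801,480 students.
print(len(sample_1213), sample_1213["student_id"].nunique())
```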

The FDOE high school transcript data provide information on the institutions that provide instruction for each class, allowing us to identify courses provided by virtual schools. In addition, the FDOE data include demographic background characteristics (student sex, race-ethnicity, subsidized lunch use); classification in special programs (limited English proficiency, special education, gifted programs); and student outcomes (statewide standardized test scores and grades) for all students.

To obtain school characteristics, we merge data on students’ home institutions (the brick-and-mortar institutions in which students are enrolled, sometimes also called their “enrollment institutions”) from the Common Core of Data (CCD) files maintained by the National Center for Education Statistics (NCES). Specifically, NCES data are used to characterize schools’ urbanicity, charter or magnet school status, total enrollment, and share of students using free or reduced-price lunch (FRL). We describe several sources of data on specific measures of in-school and out-of-school access to technological resources later in the paper.

The Distribution of Resources Across Schools in Florida

Before analyzing virtual course–taking in Florida, it is useful to review how resources are distributed across schools in the state. The Coleman Report documented dramatic differences in the 1960s in resources such as spending and class size across schools attended by black and white children. But many things have changed since the time of the report.

Table 1 presents descriptive statistics on the distribution of various resources for students across quartiles based on the share of the student body using subsidized meals. For example, quartile 1 includes the schools with the lowest fraction of students eligible for subsidized meals in the sample—namely, fewer than 36 percent of students. Conversely, quartile 4 includes the schools with the highest fraction of eligible students—at least 71 percent. The fifth column in the table gives the F-statistic and p-value for regression-based tests of whether there are differences across quartiles in the extent to which schools offer each resource.
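To make this construction concrete, the following sketch shows one way such quartiles and regression-based equality tests can be computed. The school-level data and variable names (for example, `frl_share`, `novice_share`) are simulated for illustration and are not the authors’ code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated school-level data: subsidized-meal share and one resource measure
# (here, the share of novice teachers).
rng = np.random.default_rng(1)
schools = pd.DataFrame({"frl_share": rng.uniform(0, 1, 200)})
schools["novice_share"] = 0.2 + 0.1 * schools["frl_share"] + rng.normal(0, 0.05, 200)

# Assign quartiles of the subsidized-meal share (quartile 1 = lowest).
schools["frl_quartile"] = pd.qcut(schools["frl_share"], 4, labels=[1, 2, 3, 4])

# Regression-based test of equal means across quartiles: regress the resource
# on quartile indicators and jointly test the quartile terms.
model = smf.ols("novice_share ~ C(frl_quartile)", data=schools).fit()
print(sm.stats.anova_lm(model))  # F-statistic and p-value, as in table 1
```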

As measures of traditional resources, we focus on teacher advanced-degree holding, teacher experience, and school student-to-teacher ratios. There is considerable evidence that teacher experience, particularly in the first few years, is strongly correlated with student achievement (Nye, Konstantopoulos, and Hedges 2004; Rivkin, Hanushek, and Kain 2005). Although many studies fail to find a similar benefit to teacher advanced-degree receipt (Harris and Sass 2011; Rivkin, Hanushek, and Kain 2005; but see Clotfelter, Ladd, and Vigdor 2007), we view this as the best available proxy for unmeasured teacher quality. Student-to-teacher ratios proxy for school class sizes; evidence suggests that achievement is enhanced by smaller class sizes (Angrist and Lavy 1999; Jepsen and Rivkin 2009; Krueger 1999; but see Hoxby 2000).

High-poverty schools have significantly fewer teachers with advanced degrees and significantly more teachers with three or fewer years of experience. For example, roughly 45 percent of teachers in the most advantaged schools have advanced degrees compared with fewer than 40 percent of teachers in the highest-poverty schools. About 31 percent of teachers in quartile 1 schools are novices compared with nearly 37 percent in quartile 4 schools. However, low-SES schools have lower student-to-teacher ratios than do higher-SES schools, probably because of the supplemental funding provided to these schools.

The success of virtual instruction requires access to the appropriate technology. Given the inequitable distribution of traditional resources across schools, one might naturally be concerned about the distribution of technology access. We use two sources of data to establish the technological resources available at the school level. The first is an October 2014 report on the connectivity of “community anchor institutions” in Florida, including K–12 schools, conducted by the Florida Department of Management Services (FLDMS) and provided to the National Telecommunications and Information Administration (NTIA) for its State Broadband Initiative (SBI) (Florida Department of Management Services 2014). The report provides maximum download speeds for the service to which each institution subscribes. We dichotomize this measure to capture whether schools report download speeds of 100 megabits per second or greater; this is the median download speed reported for schools, and it corresponds roughly with what experts view as the minimum acceptable speed for networking.

Table 1. Student Access to Resources by Quartile of School (Fraction of Students Using Free or Reduced-Price Lunch, 2012–2013)

The second source of data comes from the fall 2014 “Technology Resources Inventory” surveys collected by the Florida Department of Education (Florida Bureau of Educational Technology 2014).10 These surveys ask schools to report on their technology environment, including the source and speed of Internet at the school. We create two measures of in-school technological resources from this survey. The first is a measure of computers per student. Schools report the number of desktop and mobile computers in the school that are used for student instruction and that meet certain minimum technical standards in terms of memory, processing speeds, and so on.11 We standardize this measure by the school enrollment. The second is a measure of wireless service in the school. The Technology Resources Inventory surveys ask schools to report the number of IEEE 802.11n-compliant wireless access points in the building. Wireless access points allow wireless devices to connect to wired networks using Wi-Fi. This measure is standardized by the number of classrooms in the building as a proxy for the physical space that the wireless access points must cover.

Table 1 suggests that there is less discrepancy across socioeconomic categories for technological resources than for nontechnological ones. Indeed, high-poverty schools have more computers per student than do lower-poverty schools. Few of the other resources have clear relationships to socioeconomic status.

Success in online courses is likely to depend, at least in part, on access to high-speed Internet outside of school. And here the so-called digital divide might be an important constraint on the ability of virtual instruction to reduce the achievement gap. Our supplemental calculations on home Internet access among households with school-age children (five to eighteen), using 2013 and 2014 American Community Survey (ACS) data (Ruggles et al. 2015), suggest that affluent children are more likely to have home access to high-speed Internet than their low-income peers. Among households with family income at or below the poverty line, 42.6 percent lack access to high-speed Internet options (including DSL, cable Internet service, satellite Internet service, fiber-optic Internet service, or mobile broadband plans). Over 20 percent of households above the poverty threshold but still below the threshold for subsidized lunch eligibility (185 percent of the federal poverty line) lack high-speed Internet access, while fewer than 10 percent of households not eligible for free or reduced-price lunch lack such access.

Because, unfortunately, we do not have data on each student’s access to high-speed Internet at home, we rely on two school-level proxies. In the Technology Resources Inventory surveys, school administrators are asked to estimate the fraction of students in their school who have access to high-speed Internet at home. We supplement this with information on the geographic distribution of broadband providers collected by the NTIA and the Federal Communications Commission (FCC) (U.S. Department of Commerce 2014). Following Lisa Dettling, Sarena Goodman, and Jonathan Smith (2015), we aggregate block-level information on residential broadband providers to create the population-weighted number of providers of residential broadband service in the school’s ZIP code, which serves as our school-level measure of at-home access to high-speed Internet.
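The block-to-ZIP aggregation can be illustrated as follows. The toy records and column names are ours; the actual NTIA/FCC files use different identifiers and layouts.

```python
import pandas as pd

# Toy census-block records: population and count of residential providers.
blocks = pd.DataFrame({
    "zip_code":    ["32301", "32301", "33101"],
    "population":  [1200, 800, 2700],
    "n_providers": [3, 1, 2],
})

# Population-weighted number of residential broadband providers by ZIP code.
zip_providers = (
    blocks.assign(wtd=blocks["n_providers"] * blocks["population"])
          .groupby("zip_code")[["wtd", "population"]].sum()
          .eval("wtd / population")
)
print(zip_providers)  # 32301 -> 2.2, 33101 -> 2.0
```

Schools would then be matched to this measure by the ZIP code of the school building.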

Out-of-school technological resources show clearer relationships to socioeconomic status. Schools with higher-SES student bodies estimate that a larger percentage of their students have access to the Internet outside of school; the estimated rate of out-of-school Internet access is 85 percent in the lowest-poverty schools versus 58 percent in schools with the highest poverty rates. Schools in higher-poverty areas are also less likely to be located in ZIP codes with at least one broadband provider per 2,700 people. Nearly 75 percent of low-poverty schools are located in broadband-rich communities versus about 65 percent of the highest-poverty schools.

Figure 1. Change over Time in Florida Virtual School Enrollments

Source: Authors’ calculations from FLVS data.

Findings

Virtual course enrollments have expanded dramatically in the last decade. Figure 1 illustrates this using FLVS data from 2005–2006 through 2012–2013 for four different schooling sectors: public schools, private schools, charter schools, and home schools. We see dramatic enrollment growth over this period, particularly among public school students. For example, the number of total enrollments in virtual courses across all school types grew from just under 50,000 in 2006 to roughly 350,000 by 2013, with public school enrollments in virtual courses accounting for roughly three-quarters of all enrollments in the last year.

Virtual course–taking rates appear roughly similar across the core academic subject areas in 2012–2013, with math, social studies, English language arts, foreign language, and science each accounting for 9 to 15 percent of virtual course enrollments (table 2). Interestingly, physical education and driver’s education are also among the most popular virtual courses, accounting for 4 and 14 percent of enrollments, respectively. Note that while each of the subjects listed has seen explosive growth in enrollments over time, the growth is especially marked in some areas; foreign languages, for instance, saw more than a 1,000 percent increase in enrollments from 2005–2006 to 2012–2013.

During the 2012–2013 school year, nearly 21 percent of students took at least one virtual course. Virtual courses constituted about 4 percent of total course enrollments, suggesting that students who take any virtual courses take about one out of five of their courses online (0.04/0.21 ≈ 0.19, assuming takers and non-takers carry similar overall course loads).12 Table 3 presents virtual course–taking rates separately by student and school characteristics.

Students who were more advantaged, both academically and economically, appear to have been more likely to take virtual courses. For example, only 17.6 percent of students who were eligible for subsidized meals took a virtual course, and only 13.1 percent of students receiving special education services did so (column 1). By contrast, over 27 percent of gifted students took virtual courses. This finding is echoed by differences in virtual course–taking based on students’ eighth-grade standardized test scores. Students are characterized according to the quartile into which their average standardized math and reading scores fall. Nearly 27 percent of students scoring in the top quartile of eighth-grade standardized tests (not shown) took a virtual course compared with only 14 percent of students in the bottom quartile, and the likelihood of taking any virtual class increased monotonically with prior achievement quartile. African American and Latino students were significantly less likely than average to take a virtual class in 2012–2013, and Asian students were significantly more likely than average to take one. The pattern of results is nearly identical using enrollment-weighted estimates (column 2): high-achieving students and higher-income students took a higher share of their courses virtually compared to their lower-achieving and less affluent peers.

Table 2. Florida Virtual School Course Enrollments in Different Subject Areas, 2006 and 2013

Students in traditional public schools were the most likely to take at least one course online (22.62 percent); virtual course–taking was less prevalent in charter (20.68 percent) and magnet (18.65 percent) schools. Rural students had the lowest prevalence of virtual course–taking on both measures. Mirroring the student-level results, we see that the poorest schools had the lowest rates of virtual course–taking. Virtual course–taking was also somewhat more prevalent among students with access to higher-quality teachers, as measured by on-paper credentials. In particular, virtual course–taking was less prevalent in schools with higher concentrations of novice teachers (18.5 percent) than in those with lower concentrations of novice teachers (22.38 percent).

Table 3. Virtual Class–Taking Prevalence in Florida in 2012–2013

Surprisingly, in-school technological resources had little relationship to online course–taking. Indeed, students in schools with more computers per student were actually somewhat less likely to take online courses. Our out-of-school proxy measures were more predictive of online course–taking. Students from schools where over 75 percent of students were estimated to have out-of-school access to the Internet were more likely (22.4 percent) to have taken at least one online course than were students in schools with less estimated home Internet access (18.6 percent). Likewise, students attending schools in ZIP codes with greater residential broadband provision (at least one provider per 2,700 people) had higher virtual course–taking rates (21.5 percent) than did students in more sparsely serviced areas (18.9 percent).

Although we see students from all different school and family backgrounds taking virtual courses, they may be differentially likely to do so depending on their reasons for taking a particular class. To explore this possibility, table 4 breaks down the share of virtual class enrollments by the reason for attempting these classes. We distinguish four types of attempts, which we impute based on whether students had previously taken the same class and, if so, on their past performance in it. Classes are designated as “first attempts” if students had never taken the same course in any previous year. “Credit recovery” classes are flagged when students had taken the same course in a previous year and received a failing grade. “Grade improvement” is flagged if students had taken the same course in a previous year and received a D grade but never an F. “Other attempts” (not shown) are flagged when students had taken the same course across multiple years but there was no evidence that they had done so owing to poor prior performance.13
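A minimal sketch of this imputation logic, using hypothetical column names rather than the actual FDOE transcript layout, might look as follows.

```python
import pandas as pd

def classify_attempt(prior: pd.DataFrame) -> str:
    """Impute the attempt type from a student's earlier rows for the same course."""
    if prior.empty:
        return "first attempt"
    if (prior["grade"] == "F").any():
        return "credit recovery"
    if (prior["grade"] == "D").any():
        return "grade improvement"
    return "other attempt"

# Toy transcript: student 1 failed Algebra I in 2012 and retakes it in 2013.
transcript = pd.DataFrame({
    "student_id": [1, 1, 2],
    "course":     ["ALG1", "ALG1", "ALG1"],
    "year":       [2012, 2013, 2013],
    "grade":      ["F", None, None],   # None = grade not yet assigned
})

for _, row in transcript[transcript["year"] == 2013].iterrows():
    prior = transcript[
        (transcript["student_id"] == row["student_id"])
        & (transcript["course"] == row["course"])
        & (transcript["year"] < row["year"])
    ]
    print(row["student_id"], classify_attempt(prior))
# -> 1 credit recovery; 2 first attempt
```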

Table 4. Florida Virtual Class Enrollments in 2012–2013 Conditional on Attempt Type

The top panel presents the prevalence of course-taking conditional on attempt type for all students (column 1) and by student characteristics of interest, including subsidized lunch use (free or reduced-price lunch versus full-price lunch); race (black, Hispanic, and white or Asian); and prior achievement (students in the highest and lowest Florida Comprehensive Assessment Test [FCAT] achievement quartiles). Among the full population of students, we see that students took the smallest share of their first attempts at a course virtually. Only 3.7 percent of first attempts at classes were taken virtually, compared to over 13 percent of attempts at credit recovery and nearly 12 percent of attempts at grade improvement.

Columns 2 to 7 give these breakdowns for different subtypes of students. Across all course types, students on subsidized lunch took a lower share of their courses virtually than did their more affluent peers. In some cases, the differences are quite sizable: more affluent children took more than twice as large a share of their credit recovery attempts virtually as lower-income children did, and more than three times as large a share of their grade improvement attempts. White and Asian children also took a higher share of virtual courses across all class types compared to their black and Hispanic peers. The gaps are most dramatic when comparing lower-achieving and higher-achieving children. Students in the highest quartile of FCAT performance were nearly four times more likely than bottom-quartile students to make their credit recovery attempts virtually and roughly eighteen times more likely to make their grade improvement attempts virtually.14

The bottom panel presents the share of virtual courses taken within each attempt type for five different types of courses. “Core subjects” includes math, science, social studies, and English language arts. “Foreign languages” covers foreign language offerings, and “life skills” includes health, physical education, and driver’s education classes. “Other electives” includes all other subjects. We removed Advanced Placement (AP) and International Baccalaureate (IB) classes from these four types of courses; these accelerated options are presented separately as “AP/IB” classes.

Virtual course–taking is not equally prevalent in all course areas. Aggregating all attempt types (row 5), life skills and foreign language courses were most often taken virtually—roughly 8 to 10 percent. By contrast, only about 3.5 percent of core courses and 3 percent of other elective classes were taken virtually. Virtual course–taking was very uncommon for AP/IB classes: fewer than 1 percent of AP and IB enrollments were virtual.

There are a few surprising patterns when these results are broken down by course attempt types. For instance, though fewer than 1 percent of AP classes were taken virtually, a very high share of grade improvement (38 percent) and credit recovery (24 percent) attempts in AP/IB classes were made virtually.15 Likewise, nearly one-third of credit recovery attempts and over 40 percent of grade improvement attempts in foreign language courses were made virtually. This suggests that virtual classes serve different purposes for students depending on the class type.

To isolate the associations between virtual course–taking and specific student and school characteristics, table 5 presents estimates from OLS regressions. The unit of observation in these regressions is a student-course, so in most cases there will be multiple observations for each student. We predict the likelihood that a course will be taken virtually given the characteristics of the student taking the class and the student’s home institution. Standard errors are all clustered at the school (home institution) level, which subsumes all observations for each student.
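For readers who want to see the structure of such a model, below is a minimal sketch of a linear probability model with school-clustered standard errors. The variable names and simulated data are ours, not the authors’, and the actual table 5 specifications include many more controls.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated student-course data for illustration.
rng = np.random.default_rng(2)
n = 2000
student_courses = pd.DataFrame({
    "virtual":        rng.integers(0, 2, n),   # 1 = course taken virtually
    "frl":            rng.integers(0, 2, n),   # subsidized lunch use
    "fcat8_avg":      rng.normal(0, 1, n),     # standardized 8th-grade score
    "home_school_id": rng.integers(0, 50, n),  # brick-and-mortar home school
})

# Linear probability model with standard errors clustered by home school.
lpm = smf.ols("virtual ~ frl + fcat8_avg", data=student_courses).fit(
    cov_type="cluster",
    cov_kwds={"groups": student_courses["home_school_id"]},
)
print(lpm.summary())

# The school fixed-effects variant (table 5, column 2) would add
# C(home_school_id) to the formula, comparing students to same-school peers.
```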

Table 5. Predictors of Online Course-Taking, 2012–2013

Each column reflects the results from a separate regression, with the sample indicated in the top row. Columns 1 and 2 focus on all course types and all attempt types shown in table 3. Consistent with the group comparisons presented in table 3, we see that subsidized lunch use is negatively associated with virtual course–taking, as are black and Latino race-ethnicity and limited English proficiency. Gifted students were more likely than non-gifted students, and special education students less likely than non-exceptional students, to take virtual courses. Prior achievement, measured by the average of the student’s standardized eighth-grade math and English language arts FCAT scores, is positively associated with virtual course–taking. Courses were less likely to be taken virtually in charter and magnet schools than in traditional public schools, and less likely to be taken virtually in rural schools than in suburban schools. School size is negatively associated, and the average eighth-grade achievement of the school’s student body positively associated, with the likelihood that a course would be taken virtually. Results are substantively similar in terms of both the direction and significance of coefficients when we use school fixed effects to determine which student factors predict virtual course–taking, comparing students to their peers in the same school (column 2). Specifications that focus on the characteristics that predict core academic classes being taken virtually on the first attempt (column 4) are also consistent with those in column 1, with low-income, male, lower-achieving, special education, and black and Hispanic students being less likely to take these courses virtually.

Given that virtual course–taking is especially common for credit recovery attempts, we wanted to explore which student characteristics most strongly predict the use of virtual courses for credit recovery, holding other factors constant (column 5). Although the pattern of results is largely the same as in the first three columns, the magnitude of the coefficients is substantially larger. For instance, while subsidized lunch use is associated with only a 0.7-percentage-point reduction in the likelihood of taking a given course virtually across all classes and attempt types (column 1), it is associated with a nearly 6.0-percentage-point reduction in the likelihood of making credit recovery attempts virtually (column 5).

Two competing interpretations may emerge from these results. In one interpretation, the greater uptake of virtual courses for credit recovery by affluent and high-achieving students may be evidence that they are using virtual classes more strategically. To the extent that advantaged students are better poised to access the potential benefits of virtual courses for credit recovery, these differential patterns in uptake could worsen inequality. A second interpretation is that students have a good read on which course delivery formats are most likely to work for them: if lower-achieving and relatively disadvantaged students accurately perceive that they would benefit more from face-to-face instruction than from virtual instruction, the differential patterns in uptake would not be worrisome.

Column 6 focuses on AP/IB courses. Unlike in our other specifications, we find few characteristics that predict the likelihood that a student will take AP/IB courses virtually: Asian students and students with higher prior achievement were more likely—and charter students, students from larger schools, and rural students less likely—to take AP courses virtually, all else held constant. The latter result is especially surprising. Because rural schools are less likely to be able to offer a full suite of AP courses, we had anticipated that rural students might be especially likely to pursue advanced courses online.16

Although disadvantaged students are somewhat less likely to take virtual courses, online instruction might be more beneficial for these students for any of the reasons discussed in the prior section. A complete causal analysis of the relationship between virtual instruction and student achievement is beyond the scope of this paper, but we present several figures that illustrate how outcomes differ by mode of instruction for two popular core academic classes: Algebra I and World History. For this analysis, we limit our sample to students making their first attempt at these courses in 2012–2013. We further exclude students who were taking these courses at an unusual point in their academic career, such as twelfth-graders taking Algebra I for the first time. Students are characterized according to whether they were observed in any term in a virtual section of the course under consideration. That is, if they were observed in an FtF Algebra I section in one term and a virtual Algebra I section the next, they appear only in the “ever-virtual” column. Students who took only face-to-face sections of the relevant course are considered “only-FtF.”

We examine student performance in the next course in the sequence, which we identify by examining high school transcripts for all students in Florida. For Algebra I, the next course is Geometry. For World History, the next course could be any of the following: U.S. History, U.S. Government, or Economics. Grades are reported on a standardized four-point scale. We compare the cumulative grade distributions for virtual versus FtF students: at each grade point, the figures depict the share of virtual (or FtF) students who received that grade or lower.
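As an illustration of this comparison, the sketch below computes such cumulative distributions on toy data. The frame `next_grades` and its columns are hypothetical stand-ins, not the FDOE data.

```python
import pandas as pd

# Toy next-course grades on the four-point scale, by mode of instruction.
next_grades = pd.DataFrame({
    "mode":  ["virtual"] * 4 + ["ftf"] * 4,
    "grade": [2.0, 3.0, 3.0, 4.0, 1.0, 2.0, 3.0, 4.0],
})

# Share of students in each mode receiving each grade or lower.
cdf = (
    next_grades.groupby("mode")["grade"]
    .value_counts(normalize=True)
    .sort_index()
    .groupby(level="mode")
    .cumsum()
)
print(cdf)  # e.g., 75 percent of the toy virtual group earned a 3.0 or lower
```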

Figures 2, 3, 4, and 5 show the distribution of subsequent course grades for students taking Algebra I virtually versus face-to-face, separately by quartile of eighth-grade math and reading scores. Specifically, we group ninth-grade students who took Algebra I in 2012–2013 into quartiles based on the average of their eighth-grade math and reading scores. Overall, it appears that students who took the course virtually did slightly better than students who took the course face-to-face. However, if we look separately by prior eighth-grade performance, we see that bottom-quartile students did worse in Geometry if they took the course virtually, while top-quartile students performed somewhat better if they took the course virtually. One complication in these results is that a lower share (64 percent) of virtual Algebra I students were observed taking Geometry compared to FtF students (72 percent), suggesting that virtual students who appear in Geometry may be a more positively selected group.

Figure 2. Florida Students’ Next-Course Grade, Algebra 1, 2012–2013, All Students

Figure 3. Florida Students’ Next-Course Grade, Algebra 1, 2012–2013, Free or Reduced-Price Lunch Students

Figure 4. Florida Students’ Next-Course Grade, Algebra 1, 2012–2013, Quartile 1 (Lowest) FCAT Students

Figure 5. Florida Students’ Next-Course Grade, Algebra 1, 2012–2013, Quartile 4 (Highest) FCAT Students

Source: Authors’ calculations from FDOE data.

Notes: Grade = grade in next course on four-point scale. FCAT = Florida Comprehensive Achievement Test. Quartiles based on averaged reading and math eighth-grade standardized scores. The next course is Geometry.

Results are more positive for virtual course–taking among World History students (figures 6, 7, 8, and 9). Although comparable shares of virtual and FtF students were observed in follow-on courses (roughly 68 percent in each sector), virtual students slightly outperformed their FtF peers in each of the samples studied. Moreover, the advantages were more pronounced—though still modest—for virtual students who qualified for free or reduced-price lunch and for students with low eighth-grade FCAT scores than they were for their higher-achieving peers.

Figure 6. Florida Students’ Next-Course Grade, World History, 2012–2013, All Students

Figure 7. Florida Students’ Next-Course Grade, World History, 2012–2013, Free or Reduced-Price Lunch Students

Figure 8. Florida Students’ Next-Course Grade, World History, 2012–2013, Quartile 1 (Lowest) FCAT Students

Figure 9. Florida Students’ Next-Course Grade, World History, 2012–2013, Quartile 4 (Highest) FCAT Students

Source: Authors’ calculations from FDOE data.

Notes: Grade = grade in next course on four-point scale. FCAT = Florida Comprehensive Achievement Test. Quartiles based on averaged reading and math eighth-grade standardized scores. The next course includes U.S. History, U.S. Government, or Economics (regular or honors).

In interpreting these differences, it is important to keep in mind that students are rarely randomly sorted into a virtual course. In most cases, a student can decide whether or not to take a course virtually, and this decision is likely to be determined by many unobservable as well as observable factors. For this reason, we do not interpret these figures as reflecting the causal impact of the instructional mode. In future work, we plan to estimate more rigorously the causal impact of virtual instruction.

BEYOND VIRTUAL INSTRUCTION

Our analysis of virtual schooling in Florida suggests that virtual course–taking has the potential to be scaled up for broader use. While traditionally less advantaged student groups—lower-income, nonwhite, lower-achieving—are somewhat less likely to take virtual courses, the differences remain relatively modest. Moreover, a large and growing proportion of traditionally disadvantaged students do enroll in and complete online courses.

However, more equal access to online course-taking may not meaningfully affect achievement gaps unless online courses are of superior quality. We find mixed (descriptive) evidence on the subsequent performance of virtual versus face-to-face students. While virtual course-taking in World History is positively associated with performance in subsequent social science classes, results for Algebra are more ambiguous: after taking the course virtually, free and reduced-price lunch students did no better on average in subsequent Geometry, and students with low prior achievement did worse. Although the evidence we have presented should certainly not be interpreted causally, it seems unlikely that the causal impact is large and positive: students taking virtual courses are positively selected (on observables), so, if anything, the unconditional comparisons we present may overstate the benefits of virtual instruction for disadvantaged students. Unless the benefits are large and positive, at current uptake levels virtual schooling is unlikely to do much to reduce educational inequality.
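To make this selection concern concrete, consider a minimal simulation sketch (in Python, with entirely hypothetical numbers rather than our Florida data) in which the true effect of virtual instruction is zero but higher-achieving students are more likely to opt into the virtual course; every variable name and parameter value below is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent prior achievement for each student (hypothetical index).
ability = rng.normal(0, 1, n)

# Positive selection: the probability of choosing the virtual course
# rises with prior achievement (logistic selection on ability).
p_virtual = 1 / (1 + np.exp(-(ability - 0.5)))
virtual = rng.random(n) < p_virtual

# The true causal effect of taking the course virtually is set to zero;
# the next-course grade depends only on ability plus noise.
grade = 2.5 + 0.5 * ability + rng.normal(0, 0.5, n)

naive_gap = grade[virtual].mean() - grade[~virtual].mean()
print(f"Naive virtual minus face-to-face gap: {naive_gap:.2f}")

Because higher-achieving students sort into the virtual course, the naive gap in mean grades comes out positive even though the instructional mode has no effect by construction. This is the sense in which unconditional comparisons can overstate the benefits of virtual instruction.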

We turn now to explore evidence on a distinct set of educational technologies that could be incorporated in either virtual or face-to-face courses to produce better opportunities for disadvantaged children. Known as computer-aided instruction (CAI) or intelligent tutoring systems (ITS), these technologies are designed to quickly diagnose and target student needs. In this section, we describe these systems, discuss how they might promote student learning, and then review the evidence on their effectiveness.

The Theory and Development of Computer-Aided Instruction

Broadly speaking, computer-aided instruction refers to any computerized learning environment in which computer software provides instruction, practice, and timely feedback to students. However, the earliest CAI was not much more than a computerized textbook that provided predeveloped content with very little interactivity. Gradually these programs became more flexible, providing relevant content in response to student inputs (Nwana 1990). As they began to leverage more sophisticated artificial intelligence technology, these programs became known as intelligent tutoring systems (ITS).

Intelligent tutoring systems rely on the interaction among three components: a domain model, a pedagogical model, and a dynamically updated student model (Conati 2009). As a student works through problems, completed steps, missteps, hint requests, and the like are used to update the student model and estimate the student’s understanding; this estimate is then compared against the domain model to identify gaps, which the pedagogical model targets with appropriate tutoring strategies (Graesser, Conley, and Olney 2012). A fundamental development in newer-generation ITS is the ability to diagnose student errors and build remediation from these diagnoses (Shute and Psotka 1994). Newer intelligent tutoring systems also focus on smaller “pieces” of the learning process, emphasizing the individual steps that students must take to solve a problem (VanLehn et al. 2005).
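As a highly simplified sketch of how these three components might interact, consider the following Python fragment. The class names, the mastery-update rule, and the 0.8 mastery threshold are illustrative assumptions rather than the design of any particular system; real ITS typically use more principled methods such as Bayesian knowledge tracing.

from dataclasses import dataclass, field

@dataclass
class StudentModel:
    # Estimated probability that the student has mastered each skill.
    mastery: dict = field(default_factory=dict)

    def update(self, skill, correct, learn_rate=0.2):
        # Nudge the mastery estimate toward 1 after a correct step and
        # toward 0 after a misstep (a crude stand-in for knowledge tracing).
        prior = self.mastery.get(skill, 0.3)
        target = 1.0 if correct else 0.0
        self.mastery[skill] = prior + learn_rate * (target - prior)

def choose_next_activity(student, domain_skills, threshold=0.8):
    # Pedagogical model: compare the student model against the domain
    # model (here, just a list of skills) and target the least-mastered gap.
    gaps = [s for s in domain_skills if student.mastery.get(s, 0.3) < threshold]
    return min(gaps, key=lambda s: student.mastery.get(s, 0.3)) if gaps else None

# Two observed steps update the student model; the tutor then selects
# the skill most in need of remediation.
student = StudentModel()
student.update("solve_linear_equation", correct=False)
student.update("combine_like_terms", correct=True)
print(choose_next_activity(student, ["solve_linear_equation", "combine_like_terms"]))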

ITS might be expected to influence student learning in several ways. Perhaps most importantly, the growing sophistication of ITS may provide teachers with an opportunity to tailor content and instructional techniques to each student’s individual needs.17 This type of “differentiated instruction” is often cited by researchers and practitioners as the key to effective teaching, particularly for disadvantaged students whose performance might be quite far below that of their peers and expected grade-level standards. Second, the different modes of instruction available through videos and online formats might be better able to engage students (Ma et al. 2014). One example is the emphasis on “game-based” learning. Third, the use of intelligent tutoring systems that are constantly collecting and analyzing data on student performance could encourage the use of data to guide instruction more broadly. [End Page 263]

Fourth, these technologies might provide all students with access to high-quality content. Like virtual instruction, intelligent tutoring systems rely on centrally developed curricular content and instructional techniques. This type of specialization should, in theory, allow for more meticulous planning and development of material, including quite detailed scripts for teachers. As with virtual instruction, then, ITS could produce a high-quality classroom experience—and should produce a relatively uniform one—for students from a broad range of backgrounds.

New technologies offer the possibility of improving instruction in all these ways, but they also have important limits. Perhaps most importantly, approaches that completely forgo direct interpersonal interaction are unlikely to be able to teach certain skills. Learning is an inherently social activity. While an intelligent tutoring system might be able to help a student master specific math concepts, it may not be able to teach students to critically analyze a work of literature or debate the ethics of new legislation.

The recent experience of Rocketship, a well-known charter school network, illustrates this concern. Founded in the California Bay Area in 2006, Rocketship built its instructional model around a blended learning approach in which students spend a considerable amount of time each day engaged with computer-aided learning technologies. The network received early praise for its innovative approach to learning and, most importantly, for the high achievement scores posted by its mostly poor, nonwhite student population (Schorr and McGriff 2011). In 2012, however, researchers and educators raised concerns about graduates of Rocketship elementary schools, noting that they had good basic skills but struggled with the critical analysis required in middle school (Herold 2014; Guha et al. 2015).

Does Computer-Aided Instruction Help Students Learn?

There have been hundreds of studies of CAI programs over the past twenty-five years, and the results are decidedly mixed. A number of early syntheses concluded that educational software has positive average effects on reading and mathematics achievement (Fletcher-Flinn and Gravatt 1995; Kulik 1994), but others did not (Kirkpatrick and Cuban 1998). Mark Dynarski and his colleagues (2007) conducted experimental evaluations of ten educational technology products that an expert review panel had judged to have the greatest potential for success. They find that only one of the ten had a positive effect on student learning, calling into question many earlier positive findings.

While recent meta-analyses attempt to bring coherence to the large body of existing research, no clear consensus has emerged. A careful review of these studies and the associated meta-analyses reveals an interesting pattern. First, studies that use an experimental design yield much smaller effects than those using quasi-experimental methods (Cheung and Slavin 2011, 2013; Ma et al. 2014; Steenbergen-Hu and Cooper 2013). Second, studies using standardized outcome measures, as opposed to locally developed assessments tailored specifically to the technology being studied, exhibit considerably smaller impacts (Koedinger et al. 1997; Kulik and Fletcher 2015; Steenbergen-Hu and Cooper 2013). Finally, studies with smaller samples generally exhibit larger effect sizes (Cheung and Slavin 2011, 2013; Kulik and Fletcher 2015; Steenbergen-Hu and Cooper 2013). Taken together, the most rigorous studies (those with large samples, standardized outcome measures, and an experimental design) yield effect sizes around 0.10, which aligns more closely with the findings of Dynarski and his colleagues (2007).
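These moderators matter because of how meta-analyses pool results. In a standard fixed-effect meta-analysis, each study is weighted by the inverse of its sampling variance, so large, precise studies dominate the pooled estimate. The sketch below uses hypothetical effect sizes and standard errors (not values drawn from any of the cited reviews) to illustrate the calculation.

import numpy as np

# Hypothetical studies as (effect size, standard error): two small
# studies with large effects and two large studies with small effects.
studies = [(0.45, 0.20), (0.38, 0.18), (0.12, 0.04), (0.09, 0.03)]
d = np.array([s[0] for s in studies])
se = np.array([s[1] for s in studies])

# Fixed-effect pooling: weight each study by 1 / variance.
w = 1 / se**2
pooled = (w * d).sum() / w.sum()
print(f"Unweighted mean effect: {d.mean():.2f}")        # about 0.26
print(f"Inverse-variance pooled effect: {pooled:.2f}")  # about 0.11

With these made-up inputs, the unweighted mean is roughly 0.26 while the precision-weighted estimate is close to 0.10, mirroring the gap between a naive average of the literature and its most rigorous subset.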

However, these evaluations do suggest important lessons for developers and practitioners. First, substantial evidence points to the importance of implementation barriers. For example, researchers who studied Thinking Reader found that students used the program far less frequently than recommended and that, when they did use it, they spent less time per book than indicated by program guidelines (Drummond et al. 2011). Similarly, in a study of the Cognitive Tutor Geometry program, John Pane and his colleagues (2010) find that teachers had trouble implementing the program’s [End Page 264] instructional practices. For example, teachers reported difficulties in implementing the collaborative work that required students to articulate mathematical thinking, making strong connections between computer-based activities and classroom instruction, and maintaining the expected learning pace with many students who lacked prior math and reading skills.

Moreover, even successful programs took more than one year to show positive effects. Pane and his colleagues (2014) conducted a large experimental evaluation of Cognitive Tutor Algebra I in more than 140 schools, enrolling roughly 25,000 students, across the country. Although there were no treatment effects in year one, by the second year students in the treatment classrooms were scoring 0.20 standard deviations higher than their peers in control classes. Following a subset of the original Dynarski et al. (2007) sample whose teachers continued using the programs for a second year, Larissa Campuzano and her colleagues (2009) find a statistically significant positive effect size of 0.15 on student achievement for these students.

Second, the benefit of ITS depends on the context in which it is implemented, including the counterfactual instruction that students would receive in the absence of the technology. In a reanalysis of the Dynarski et al. (2007) study, Eric Taylor (2015) finds important heterogeneity in the effects across classrooms. He shows that the CAI/ITS programs had a positive impact on students in classrooms with less effective teachers and a negative impact on students in classrooms with more effective teachers, consistent with the fact that the new technology was intended, in part, to be a substitute for the classroom teacher. Although the average effect was indistinguishable from zero, the effects for some students were not. This result highlights the importance of considering not only the quality of the new technology but also the quality of the instruction for which it substitutes. Consistent with this dynamic, evaluations of CAI in developing countries, in settings with fewer resources and arguably less skilled teachers, often find positive effects. For example, in a large randomized policy evaluation conducted in India, Abhijit Banerjee and his colleagues (2005) find strong positive effects of computer-assisted mathematics programs on math scores in high-poverty urban areas of Mumbai and Vadodara.

DISCUSSION AND CONCLUSIONS

The Coleman Report shone a light on vast differences in achievement between poor and non-poor children and provided evidence that public schooling systems were doing little to close these achievement gaps. Nonetheless, public schools are the primary lever by which governments seek to affect children’s learning and create more equitable opportunities. Their lack of effect at the time of the report is not necessarily an indictment of their potential. Many forces work against schools’ ability to close achievement gaps, particularly gaps between highly resourced groups and less-resourced ones. Policy choices and technological innovations can exacerbate the inequalities that already exist between families, but they may also mitigate or even overcome those forces. In this paper, we have highlighted the potential mechanisms by which new technologies may narrow or widen the existing gaps. As with all prior technologies, this potential depends not only on their innovative features but also on their implementation.

The combination of residential segregation and, to some extent, local control of schools can disadvantage schools serving children from lower-income families, reducing or even reversing the potential of a public education system to narrow gaps. Even within schools, more powerful families can advocate for their children to receive greater resources, such as more effective teachers or additional supports. When students attend different schools, this potential is far greater, as higher-income families can pool resources to benefit just their own school, whether through the tax system or independently. If teaching jobs are more appealing in schools serving higher-resourced families, as they often are, these schools can recruit better teachers and school leaders—perhaps the most important of all education resources—even without additional dollars.

Unlike teachers, technologies have no preferences [End Page 265] for the schools in which they work. As such, technologies may reduce inequalities in resources across schools. The resources available on the Internet, for example, are equally available to all schools with the same Internet access, and Internet access costs the same for all schools in the same area, regardless of the student population served. Technologies can reduce differences in peer groups in other ways as well. Online courses, for example, can mix peers from schools across wide geographic areas. Even within schools, technologies can have equalizing effects across teachers, increasing the effectiveness of less effective teachers by substituting for their areas of weakness. Similarly, technologies that allow teachers to better differentiate instruction may help them reach students who are further from the average within their classroom, to the academic benefit of those students.

The effects of technologies on gaps, however, may not be all positive. If less capable teachers have difficulty making use of the benefits of new technologies—that is, if technologies and teaching skills are complementary—differences across classrooms could increase. These differences might add to gaps to the extent that less capable teachers are concentrated in schools with students from less-resourced families. Similarly, to be effective some technologies may require students to have either adult oversight or a set of prior skills that will help them make use of the opportunities the technologies offer. To the extent that students from less-resourced families have access to fewer supports or have lower prior skills, they may not be positioned to reap these benefits and inequalities could increase.

Given the potential for new technologies to both reduce and exacerbate inequalities, their actual effect is an empirical question. Based on the evidence presented here, new technologies such as virtual courses and ITS, as currently implemented, are making little headway in closing achievement gaps. With respect to virtual course-taking, uptake is somewhat lower among low-achieving and low-income students than among high-achieving and affluent students, and our new data provide mixed evidence on students’ performance in virtual versus face-to-face classes. Most importantly, virtual courses are not a sufficiently superior option that we would expect them to measurably close the achievement gap even if uptake among disadvantaged students were higher. Current evidence on intelligent tutoring systems is more internally valid and more encouraging: high-quality research finds positive (but modest) effects, and the results appear more pronounced for students in lower-quality classrooms. ITS may be reducing gaps, but only to a small degree, owing to both its limited scale and its only modest effect on gaps when implemented.

These results point not to the uselessness of new technologies for closing achievement gaps, but to the importance of understanding how technology interacts with the school and home contexts. We leave our analysis of new technologies and achievement gaps with four conclusions.

First, technologies have the potential to overcome some of the strong forces in U.S. public education that lead to inequalities in resources across schools. In particular, new technologies can bring high-quality curriculum, instruction, and peers to schools that have difficulty recruiting these resources owing to residential segregation, educator preference, and differential ability to raise funds.

Second, technologies can be either substitutes for or complements to resources already available in the school. To the extent that technologies are substitutes, they are inherently equalizing. When they are complements, however, such as when their successful implementation requires skilled teachers or students with strong prior skills, they must be accompanied by additional resources if traditionally underserved populations are to benefit.

Third, the mechanisms that underlie the potential for individual technologies to close achievement gaps include quality, efficiency, differentiation, flexibility, and motivation. If technologies bring materials of both higher and more equal quality to schools, they might reduce achievement gaps by reducing differences in access to quality instruction. If new technologies reduce extraneous work for either teachers or students—such as by reducing paperwork for teachers or enabling students to access the materials they need for [End Page 266] their work more quickly—their efficiency can benefit students and, if these barriers were greater for some groups than others, reduce achievement gaps. If new technologies can better differentiate instruction to meet the needs of students whose performance is further from the mean, then they particularly benefit students who are not at the average. With respect to closing achievement gaps, this differentiation may especially help high-achieving students from low-income backgrounds, who may not be the focus of instruction at schools serving mostly lower-performing students. It may also benefit particularly low-achieving and high-achieving students across all schools.

New technologies also allow for greater flexibility, which could benefit students who are more likely to face shocks at home, such as health or family issues. Technologies make it easier to access consistent material when children are ill and need to stay home or when families move and students need to switch schools. The flexibility afforded by new technologies may be particularly useful to families with resource constraints that affect their residential location or health, and this is another way in which they may help to reduce achievement gaps. Finally, new technologies can either motivate or demotivate students. If technologies can draw in otherwise disenfranchised students by personalizing material to a student’s interests or through gaming technology, they can benefit poor students and reduce achievement gaps. Alternatively, however, if technologies increase reliance on students’ internal motivation or require the oversight of adults, they may exacerbate achievement gaps.

Each of these mechanisms—quality, efficiency, differentiation, flexibility, and motivation—can play a role in the impact of new technologies on achievement gaps, though not always for the better.

Fourth, the benefit of new technologies in schools for closing achievement gaps may not rest primarily in the classroom. The infrastructure of schools depends on technologies. The process of recruiting and hiring educators has benefited from online applications and assessments. Predictive analytics that can identify students in need of further supports, combined with better communication and coordination technologies to link students in need with resources inside and outside of schools, have great potential to aid those students most in need. In considering the potential of new technologies to reduce achievement gaps, it would be a mistake to focus solely on computer-aided instruction, virtual courses, or other innovations that involve direct interaction with students.

The evidence to date suggests that technologies alone cannot eliminate the achievement gaps that the Coleman Report so clearly illuminated. Political pressures, uneven existing resources, and the dependence of even the most advanced new approaches on high-quality implementation point to the work needed to capitalize on the potential of these new technologies. However, their potential is growing, and with it their capacity to counter some of the forces that have led to unequal school quality across communities and kept public schools from being the lever that they could be to reduce achievement gaps and equalize opportunities.

Brian Jacob

Brian Jacob is professor at the University of Michigan.

Dan Berger

Dan Berger is a doctoral student at the University of Michigan.

Cassandra Hart

Cassandra Hart is associate professor at the University of California, Davis.

Susanna Loeb

Susanna Loeb is professor at Stanford University.

Direct correspondence to: Brian Jacob at bajacob@umich.edu, University of Michigan, Weill Hall, 735 S. State St. #5124, Ann Arbor, MI 48109; Dan Berger at djberger@umich.edu; Cassandra Hart at cmdhart@ucdavis.edu; and Susanna Loeb at sloeb@stanford.edu.

REFERENCES

Anderson, Terry. 2008. The Theory and Practice of Online Learning. Edmonton, Alberta: Athabasca University Press.
Angrist, Joshua D., and Victor Lavy. 1999. “Using Maimonides’ Rule to Estimate the Effect of Class Size on Scholastic Achievement.” Quarterly Journal of Economics 114(2): 533–75.
Bakia, Marianne, K. Anderson, Eryn Heying, Kaeli Keating, and Jessica Mislevy. 2011. “Implementing Online Learning Labs in Schools and Districts: Lessons from Miami-Dade’s First Year.” Menlo Park, Calif.: SRI International.
Banerjee, Abhijit, Shawn Cole, Esther Duflo, and Leigh Linden. 2005. “Remedying Education: Evidence from Two Randomized Experiments in India.” Working Paper 11904. Cambridge, Mass.: National Bureau of Economic Research.
Barker, Bruce. 1985. “Curricular Offerings in Small and Large High Schools: How Broad Is the Disparity?” Research in Rural Education 3(1): 35–38.
Bettinger, Eric, Lindsay Fox, Susanna Loeb, and Eric Taylor. 2015. “Changing Distributions: How Online College Classes Alter Student and Professor Performance.” Working Paper 15-10. Stanford, Calif.: Stanford University, Center for Education [End Page 267] Policy Analysis (October). Available at: http://cepa.stanford.edu/sites/default/files/WP15-10.pdf (accessed December 14, 2015).
Bowen, William G., Matthew M. Chingos, Kelly A. Lack, and Thomas I. Nygren. 2014. “Interactive Learning Online at Public Universities: Evidence from a Six-Campus Randomized Trial.” Journal of Policy Analysis and Management 33(1): 94–111.
Boyd, Donald, Hamilton Lankford, Susanna Loeb, Jonah Rockoff, and James Wyckoff. 2008. “The Narrowing Gap in New York City Teacher Qualifications and Its Implications for Student Achievement in High-Poverty Schools.” Journal of Policy Analysis and Management 27(4): 793–818.
Campuzano, Larissa, Mark Dynarski, Roberto Agodini, and Kristina Rall. 2009. “Effectiveness of Reading and Mathematics Software Products: Findings from Two Student Cohorts.” NCEE 2009-4041. Washington: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance.
Carter, Prudence L. 2016. “Educational Equality Is a Multifaceted Issue: Why We Must Understand the School’s Sociocultural Context for Student Achievement.” RSF: The Russell Sage Foundation Journal of the Social Sciences 2(5). doi: 10.7758/RSF.2016.2.5.07.
Cavalluzzo, Linda, Deborah L. Lowther, Christine Mokher, and Xitao Fan. 2012. “Effects of the Kentucky Virtual Schools’ Hybrid Program for Algebra I on Grade 9 Student Math Achievement: Final Report.” NCEE 2012-4020. Washington: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance.
Cavanaugh, Cathy, Kathy Jo Gillan, Jeff Kromrey, Melinda Hess, and Robert Blomeyer. 2004. “The Effects of Distance Education on K–12 Student Outcomes: A Meta-analysis.” Naperville, Ill.: Learning Point Associates/North Central Regional Educational Laboratory (NCREL).
Center for Research on Educational Outcomes (CREDO). 2011. Charter Schools’ Performance in Pennsylvania. Stanford, Calif.: Stanford University, CREDO.
Cheung, Alan C. K., and Robert E. Slavin. 2011. “The Effectiveness of Education Technology for Enhancing Reading Achievement: A Meta-analysis.” Baltimore: Center for Research and Reform in Education.
———. 2013. “The Effectiveness of Educational Technology Applications for Enhancing Mathematics Achievement in K–12 Classrooms: A Meta-analysis.” Educational Research Review 9(1): 88–113.
Christensen, Clayton M., Curtis W. Johnson, and Michael B. Horn. 2010. Disrupting Class: How Disruptive Innovation Will Change the Way the World Learns. Expanded ed. New York: McGraw-Hill.
Clotfelter, Charles T., Helen F. Ladd, and Jacob L. Vigdor. 2007. “Teacher Credentials and Student Achievement: Longitudinal Analysis with Student Fixed Effects.” Economics of Education Review 26(6): 673–82. doi: http://dx.doi.org/10.1016/j.econedurev.2007.10.002.
Coleman, James S., Ernest Q. Campbell, Carol J. Hobson, James McPartland, Alexander M. Mood, Frederick D. Weinfeld, and Robert L. York. 1966. Equality of Educational Opportunity. Washington: U.S. Department of Health, Education, and Welfare, Office of Education.
Conati, Cristina. 2009. “Intelligent Tutoring Systems: New Challenges and Directions.” In Proceedings of the Twenty-First International Joint Conference on Artificial Intelligence (IJCAI-09), 2–7.
Cooper, Harris, Kelly Charlton, Jeff C. Valentine, Laura Muhlenbruck, and Geoffrey D. Borman. 2000. “Making the Most of Summer School: A Meta-analytic and Narrative Review.” Monographs of the Society for Research in Child Development 65(1): i–127.
Cuban, Larry. 2003. Oversold and Underused: Computers in the Classroom. Cambridge, Mass.: Harvard University Press.
Dede, Christopher. 2006. Online Professional Development for Teachers: Emerging Models and Methods. Cambridge, Mass.: Harvard Education Press.
Dettling, Lisa J., Sarena F. Goodman, and Jonathan Smith. 2015. “Every Little Bit Counts: The Impact of High-Speed Internet on the Transition to College.” Finance and Economics Discussion Series 2015-108. Washington: Board of Governors of the Federal Reserve System. doi: http://dx.doi.org/10.17016/FEDS.2015.108.
Drummond, Kathryn, Marjorie Chinen, Teresa Garcia Duncan, H. Ray Miller, Lindsay Fryer, Courtney Zmach, and Katherine Culp. 2011. “Impact of the Thinking Reader Software Program on Grade 6 Reading Vocabulary, Comprehension, Strategies, and Motivation.” NCEE 2010-4035. Washington: U.S. Department of Education, Institute of Education [End Page 268] Sciences, National Center for Education Evaluation and Regional Assistance.
Dynarski, Mark, Roberto Agodini, Sheila Heaviside, Timothy Novak, Nancy Carey, Larissa Campuzano, Barbara Means, Robert Murphy, William Penuel, Hal Javitz, Deborah Emery, and Willow Sussex. 2007. “Effectiveness of Reading and Mathematics Software Products: Findings from the First Student Cohort.” NCEE 2007-4005. Washington: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance.
Figlio, David N., Mark Rush, and Lu Yin. 2013. “Is It Live or Is It Internet? Experimental Estimates of the Effects of Online Instruction on Student Learning.” Journal of Labor Economics 31(4): 763–84.
Fletcher-Flinn, Claire M., and Breon Gravatt. 1995. “The Efficacy of Computer Assisted Instruction (CAI): A Meta-analysis.” Journal of Educational Computing Research 12(3): 219–41.
Florida Bureau of Educational Technology. 2014. “Bureau of Educational Technology Archives: Fall 2014 Technology Resources Inventory.” Available at: http://www.fldoe.org/about-us/division-of-technology-info-services/educational-technology/archives (accessed January 3, 2016).
Florida Department of Management Services (FLDMS). 2014. Official update submission to the National Telecommunications and Information Administration under the State Broadband Initiative Program for the State of Florida (October). Available at National Broadband Map, http://www.broadbandmap.gov/data-download (accessed June 28, 2016).
Graesser, Arthur C., Mark W. Conley, and Andrew Olney. 2012. “Intelligent Tutoring Systems.” In APA Educational Psychology Handbook, vol. 3, Application to Learning and Teaching, edited by Karen R. Harris, Steve Graham, and Tim Urdan. Washington: American Psychological Association.
Gray, Lucinda, Nina Thomas, Laurie Lewis, and Peter Tice. 2010. “Teachers’ Use of Educational Technology in U.S. Public Schools: 2009.” NCES 2010-040. Washington: U.S. Department of Education, National Center for Education Statistics.
Guha, Roneeta, Naomi Tyler, Samantha Astudillo, and Betsey Wolf. 2015. “Evaluation of Rocketship Students’ Middle School Outcomes: First-Year Findings.” Menlo Park, Calif.: SRI International.
Harris, Douglas N., and Tim R. Sass. 2011. “Teacher Training, Teacher Quality, and Student Achievement.” Journal of Public Economics 95(7–8): 798–812. doi: http://dx.doi.org/10.1016/j.jpubeco.2010.11.009.
Hart, Cassandra, Elizabeth Friedmann, and Michael Hill. 2014. “Online Course-Taking and Student Outcomes in California Community Colleges.” Unpublished paper, University of California, Davis.
Heppen, Jessica B., Kirk Walters, Margaret Clements, Ann-Marie Faria, Cheryl Tobey, Nicholas Sorensen, and Katherine Culp. 2012. “Access to Algebra I: The Effects of Online Mathematics for Grade 8 Students.” NCEE 2012-4021. Washington: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance.
Herold, Benjamin. 2014. “New Model Underscores Rocketship’s Growing Pains.” Education Week 33(19): 26.
Hoxby, Caroline M. 2000. “The Effects of Class Size on Student Achievement: New Evidence from Population Variation.” Quarterly Journal of Economics 115(4): 1239–85.
Jennings, Jennifer L., and Douglas Lee Lauen. 2016. “Accountability, Inequality, and Achievement: The Effects of the No Child Left Behind Act on Multiple Measures of Student Learning.” RSF: The Russell Sage Foundation Journal of the Social Sciences 2(5). doi: 10.7758/RSF.2016.2.5.11.
Jepsen, Christopher, and Steven Rivkin. 2009. “Class Size Reduction and Student Achievement: The Potential Trade-off Between Teacher Quality and Class Size.” Journal of Human Resources 44(1): 223–50.
Joyce, Ted J., Sean Crockett, David A. Jaeger, Onur Altindag, and Stephen D. O’Connell. 2015. “Does Classroom Time Matter?” Economics of Education Review 46: 64–77.
Kalogrides, Demetra, and Susanna Loeb. 2013. “Different Teachers, Different Peers: The Magnitude of Student Sorting Within Schools.” Educational Researcher 42(6): 304–16.
Kirkpatrick, Heather, and Larry Cuban. 1998. “Computers Make Kids Smarter—Right?” Technos 7(2): 26–31.
Koedinger, Kenneth R., John R. Anderson, William H. Hadley, and Mary A. Mark. 1997. “Intelligent Tutoring Goes to School in the Big City.” International Journal of Artificial Intelligence in Education 8(1): 30–43. [End Page 269]
Krueger, Alan B. 1999. “Experimental Estimates of Education Production Functions.” Quarterly Journal of Economics 114(2): 497–532.
Kulik, James A. 1994. “Meta-analytic Studies of Findings on Computer-Based Instruction.” In Technology Assessment in Education and Training, edited by Eva L. Baker and Harold F. O’Neil Jr. Hillsdale, N.J.: Lawrence Erlbaum Associates.
Kulik, James A., and J. D. Fletcher. 2015. “Effectiveness of Intelligent Tutoring Systems: A Meta-analytic Review.” Review of Educational Research (April 17). doi:10.3102/0034654315581420.
Ma, Wenting, Olusola O. Adesope, John C. Nesbit, and Qing Liu. 2014. “Intelligent Tutoring Systems and Learning Outcomes: A Meta-analysis.” Journal of Educational Psychology 106(4): 901–18.
Means, Barbara, Yukie Toyama, Robert Murphy, Marianne Bakia, and Karla Jones. 2010. “Evaluation of Evidence-Based Practices in Online Learning: A Meta-analysis and Review of Online Learning Studies.” Washington: U.S. Department of Education, Office of Planning, Evaluation, and Policy Development.
Molnar, Alex (ed.), Gary Miron, Luis Huerta, Jennifer King Rice, Larry Cuban, Brian Horvitz, Charisse Gulosino, and Sheryl Rankin Shafer. 2013. “Virtual Schools in the U.S. 2013: Politics, Performance, and Research Evidence.” Boulder, Colo.: National Education Policy Center.
Nwana, Hyacinth S. 1990. “Intelligent Tutoring Systems: An Overview.” Artificial Intelligence Review 4(4): 251–77.
Nye, Barbara, Spyros Konstantopoulos, and Larry Hedges. 2004. “The Effects of Small Classes on Academic Achievement: The Results of the Tennessee Class Size Experiment.” American Educational Research Journal 37(1): 123–51.
Pane, John F., Beth Ann Griffin, Daniel F. McCaffrey, and Rita Karam. 2014. “Effectiveness of Cognitive Tutor Algebra I at Scale.” Educational Evaluation and Policy Analysis 36(2): 127–44.
Pane, John F., Daniel F. McCaffrey, Mary Ellen Slaughter, Jennifer L. Steele, and Gina S. Ikemoto. 2010. “An Experiment to Evaluate the Efficacy of Cognitive Tutor Geometry.” Journal of Research on Educational Effectiveness 3(3): 254–81.
Picciano, Anthony G., Jeff Seaman, Peter Shea, and Karen Swan. 2012. “Examining the Extent and Nature of Online Learning in American K–12 Education: The Research Initiatives of the Alfred P. Sloan Foundation.” The Internet and Higher Education 15(2): 127–35.
Pufahl, Ingrid, and Nancy C. Rhodes. 2011. “Foreign Language Instruction in U.S. Schools: Results of a National Survey of Elementary and Secondary Schools.” Foreign Language Annals 44(2): 258–88.
Reardon, Sean F., and Ann Owens. 2014. “60 Years After Brown: Trends and Consequences of School Segregation.” Annual Review of Sociology 40: 199–218.
Ritter, Gary, and Martin F. Lueken. 2013. “Value-Added in a Virtual Learning Environment: An Evaluation of the Arkansas Virtual Academy.” Paper prepared for the Thirty-Eighth Annual Conference of the Association for Education Finance and Policy. New Orleans (March 14–16).
Rivkin, Steven, Eric Hanushek, and John F. Kain. 2005. “Teachers, Schools, and Academic Achievement.” Econometrica 73(2): 417–58.
Ruggles, Steven, Katie Genadek, Ronald Goeken, Josiah Grover, and Matthew Sobek. 2015. “Integrated Public Use Microdata Series: Version 6.0” [machine-readable database]. Minneapolis: University of Minnesota.
Sass, Tim R., Jane Hannaway, Zeyu Xu, David Figlio, and Li Feng. 2012. “Value Added of Teachers in High-Poverty Schools and Lower-Poverty Schools.” Journal of Urban Economics 72(2): 104–22.
Schneider, Barbara, and Guan Saw. 2016. “Racial and Ethnic Gaps in Postsecondary Aspirations and Enrollment.” RSF: The Russell Sage Foundation Journal of the Social Sciences 2(5). doi: 10.7758/RSF.2016.2.5.04.
Schorr, Jonathan, and Deborah McGriff. 2011. “Future Schools: Blending Face-to-Face and Online Learning.” Education Next 11(3). http://uconnhealth2020.uchc.edu/knowledgebase/pdfs/education/future_schools.pdf (accessed September 14, 2015).
Shute, Valerie J., and Joseph Psotka. 1994. “Intelligent Tutoring Systems: Past, Present, and Future.” Report 94-17332. Brooks Air Force Base, Tex.: Armstrong Laboratory, Human Resources Directorate.
Steenbergen-Hu, Saiying, and Harris Cooper. 2013. “A Meta-analysis of the Effectiveness of Intelligent Tutoring Systems on K–12 Students’ Mathematical [End Page 270] Learning.” Journal of Educational Psychology 105(4): 970–87.
Streich, Francine E. 2014. Online Education in Community Colleges: Access, School Success, and Labor-Market Outcomes. PhD diss., University of Michigan.
Taylor, Eric. 2015. “New Technology and Teacher Productivity.” Working paper. Cambridge, Mass.: Harvard Graduate School of Education.
U.S. Department of Commerce (USDOC). National Telecommunications and Information Administration (NTIA). 2014. “State Broadband Initiative.” Washington: USDOC (June 30).
VanLehn, Kurt, Collin Lynch, Kay Schulze, Joel A. Shapiro, Robert Shelby, Linwood Taylor, Don Treacy, Anders Weinstein, and Mary Wintersgill. 2005. “The Andes Physics Tutoring System: Lessons Learned.” International Journal of Artificial Intelligence in Education 15(3): 147–204.
Watson, John, and Butch Gemin. 2008. “Using Online Learning for At-Risk Students and Credit Recovery.” Vienna, Va.: North American Council for Online Learning.
Watson, John, Amy Murin, Lauren Vashaw, Butch Gemin, and Chris Rapp. 2012. “Keeping Pace with K–12 Online Learning: An Annual Review of Policy and Practice 2011.” Durango, Colo.: Evergreen Education Group.
Wicks, Matthew. 2010. “A National Primer on K–12 Online Learning: Version 2.” Vienna, Va.: International Association for K–12 Online Learning.
Xu, Di, and Shanna Jaggars. 2011. “The Effectiveness of Distance Education Across Virginia’s Community Colleges: Evidence from Introductory College-Level Math and English Courses.” Educational Evaluation and Policy Analysis 33(3): 360–77.
———. 2013. “The Impact of Online Learning on Students’ Course Outcomes: Evidence from a Large Community and Technical College System.” Economics of Education Review 37: 46–57. [End Page 271]

Thanks to our research partners at the Florida Virtual Schools, the Florida Department of Education, and Miami–Dade County Public Schools. Funding for this research was provided by the Walton Family Foundation, the Spencer Foundation, and the Institute of Education Sciences (grant R305A150163). Results, information, and opinions are the authors’ and do not reflect the views or positions of any funding agency or research partner.

Footnotes

1. Authors’ calculations using National Assessment of Educational Progress (NAEP) Data Explorer.

2. Similarly, the Internet has enhanced the ability of non-experts, including classroom teachers, to create and upload their own videos.

3. The evolution of touch-screen technology on smart phones and tablets has enabled very young children to engage in technology-aided instruction. Prior to tablets, it was difficult for preschool, kindergarten, and even early primary grade students to work with educational software, which required the use of a mouse or keyboard. Now there are hundreds of applications that expose children to early literacy and numeracy skills without the need to manipulate a keyboard or mouse.

4. The International Association for K–12 Online Learning (iNACOL) defines online learning as teacher-led education that takes place over the Internet with teacher and student separated by geography.

5. Lectures in Figlio, Rush, and Yin (2013) were delivered fully online, but those students had access to face-to-face time with instructors during traditional office hours as well.

6. Several recent studies that focus on full-time virtual schools serving K–12 students find mixed results (see Center for Research on Educational Outcomes 2011; Molnar et al. 2013; Ritter and Lueken 2013).

7. 2013 Florida Statutes, Title XLVIII (K–20 Education Code), Chapter 1002.45 (virtual instruction programs).

8. These calls also help FLVS identify instances of student cheating—for example, if the level of student understanding revealed during a phone call does not match that student’s performance on written assignments.

9. We drop a small number of observations (fewer than 5 percent) where students attended special education schools, alternative schools, career or vocational education schools, or schools run by the Department of Juvenile Justice.

10. For an example of the survey layout and responses for a single school, see: http://www.flinnovates.org/survey/FlinnovatesInventory/Reports/SchoolsPublicRpt?schoolCode=05%203011&inventoryTypeId=2 (accessed June 28, 2016).

11. Standards include a 1GHz or faster processor; 1GB RAM or greater memory; 1024-by-768 screen resolution; and 9.5-inch (10-inch class) or larger screen size measured diagonally. Windows computers must use Windows 7 or higher; Apple computers must use Mac OS X 10.7 or higher.

12. This seemingly high rate is probably due to the requirement that, as mentioned earlier, all Florida high school students must take a virtual course as of the cohort that entered ninth grade in 2011–2012 (Watson et al. 2012).

13. “Other attempts” include both classes that could be taken multiple times for credit (like some special education courses) and cases where students took one term of a class in one year and a second term in a subsequent year.

14. Although relatively few high-achieving students took classes for credit recovery or grade improvement purposes, each category still had several thousand enrollments: we observe about 2,250 grade improvement attempts and about 7,500 credit recovery attempts for the highest-achieving students. This suggests that the high numbers are not purely an artifact of unstable measures due to small sample sizes.

15. This is on a very small base of 200 to 450 enrollments each for AP credit recovery and grade improvement attempts.

16. The coefficient on rural is negative even when we do not simultaneously control for enrollment.

17. For a more detailed discussion, see Ma et al. (2014).
