University of Illinois Press
Abstract

The purpose of this study is to isolate the independent effects of high school facility quality on student achievement using a large, nationally representative U.S. database of student achievement and school facility quality. Prior research linking school facility quality to student achievement has produced mixed results. Studies relating independently rated structural and engineering aspects of schools to achievement have generally found no relationship. However, more recent research has suggested that facility maintenance and disrepair, rather than structural issues, may be more directly related to student achievement. If such a relationship exists, addressing facility disrepair at the school, district, or state level could provide policymakers with an actionable avenue for school improvement. We analyzed the public school component and the facilities checklist of the ELS:2002 survey (8,110 students in 520 schools) using a two-level hierarchical linear model to estimate the independent effect of facility disrepair on student growth in mathematics during the final two years of high school, controlling for multiple covariates at the student and school levels. We found no evidence of a direct effect of facility disrepair on student mathematics achievement and instead propose a mediated-effects model.

Introduction

In the U.S., PK-12 school districts annually spend approximately $37 billion on capital expenditures related to construction and renovation of school facilities and $48 billion on facility maintenance and operations (NCEF 2010; Hill and Johnson 2005). [End Page 72] A perennial question posed in the literature on school facilities over the past 30 years is the extent to which the quality of school facilities may influence student achievement (Picus et al. 2005; Uline and Tschannen-Moran 2008; Roberts 2009; Earthman 2000). Much of this research has shown little to no effect of direct measures of facility quality on student learning (Picus et al. 2005). However, this research has continually been critiqued as problematic due to four main methodological issues. First, much of the research on facility quality and student achievement has depended on surveys of school principals' opinions of the quality of their schools. This is problematic given that principals are not impartial observers of their facilities and rarely have the expertise to compare the quality of their school to others. Second, the majority of the research has depended on descriptive statistics and correlations of facility quality and student achievement test scores without controlling for known covariates of both variables, such as the socioeconomic status (SES) of the students, and without accounting for the nesting of students within schools. Third, except for a few isolated statewide studies, the majority of the research has depended on small intact samples, or samples of convenience, hampering efforts to generalize findings to the majority of schools.
Fourth, among the studies that have independently rated school facilities, raters have either used the depreciated cost of building construction or building age to estimate the quality of the facility, or used in-depth engineering checklists to rate the age and quality of all aspects of a facility, from the boiler to the windows, ventilation, foundation, etc.

Recently, Roberts (2009) has proposed that a facility quality effect on student achievement may not be evident from these types of engineering checklists because the impact of a school's infrastructure may not directly influence the daily work of teaching and instruction. Rather, facility maintenance may influence instruction and student learning through providing a safe and clean environment for students and teachers. Thus, we hypothesize that a facility effect may be a maintenance effect, in which the cleanliness and general state of repair of the school can positively influence teaching and learning. If this hypothesis is supported, maintenance is a malleable factor in schools that is under the influence of the school administration. Directing efforts to increase the quality of facility maintenance could provide an attractive and actionable avenue for school improvement. However, to date, no studies have isolated the independent effects of school facility maintenance on student achievement using a large nationally representative sample.

The purpose of the present study is to isolate the independent effects of school facility quality on student achievement using a large, nationally representative U.S. database of student achievement and school facility quality—namely the Education Longitudinal Survey (ELS) of 2002. We employed a two-level [End Page 73] hierarchical linear model, nesting students within schools and then estimated the direct effects of facility maintenance and disrepair on longitudinal student achievement in mathematics during the last two years of high school. Through this study, we aimed to address many of the methodological issues of past research that have focused on the question of the relationship between school facility condition and student achievement. Using more sophisticated controls and estimation procedures, we replicate and extend the findings of Picus et al. (2005) to a large nationally representative sample by finding no evidence for a direct effect of facility maintenance and repair on either overall student mathematics achievement or growth in achievement over the final two years of high school. We conclude with a discussion of the implications for this research domain, and with recommendations for research to turn next from estimating direct effects to estimating mediated effects of facility condition on the academic and professional climate of a school, which then may directly influence student achievement.

Review of the Literature

A major deficiency in school facilities research has been the lack of replication of sound studies... Researchers differ in their opinions [on the influence of facilities on achievement]. Some claim that building influences are very insignificant and that if there is any influence, it results simply from chance; others claim that the built environment has a discernible influence upon the processes of teaching and learning, either inhibiting or helping them. More systematic analysis on a large scale is required before generalizations can be made, particularly since the issue evinces broad interest.

To date, the majority of the research on school facilities and student achievement has focused on three main topics. First, since facility funding is primarily a local or state taxpayer issue (Duncombe and Wang 2009), a long history of research has detailed school facility funding and construction in the U.S. This research has focused on how school facilities are funded across each state (Sielke et al. 2001), competition between school districts for facility construction (Militello, Metzger, and Bowers 2008; Arsen and Davis 2006), the best strategies to fund new construction and renovation of schools (Bowers, Metzger, and Militello 2010a, 2010b; Harris and Munley 2002; Muir and Schneider 1999; Johnson and Ingle 2009; Ingle, Johnson, and Petroff in press), and the pervasive problems with inadequate and unequal funding of school facilities across different locales (NCES 2000; Arsen and Davis 2006; Sielke 2001; GAO 1995).

The second main topic in the school facilities research is one in which many facilities researchers have interviewed and surveyed teachers and principals to [End Page 74] gauge their perception of their school facilities and then have linked attitudes towards the quality of the facilities to teacher motivation, morale, and student achievement. This research has generally found that when building occupants view their facilities favorably, they are more likely to perceive that the school has a positive learning culture and community, and is a more welcoming and inviting location for students, parents, and the community (Lowe 1990; Hawkins and Overbaugh 1988; Maxwell 2000; Schneider 2003; Earthman and Lemasters 2009). In addition, studies have found positive relationships between perception of facility quality and teacher retention (Buckley, Schneider, and Shang 2004) and teacher motivation (Fuller et al. 2009). Recent work has also demonstrated a significant positive relationship between perception of facility quality and student achievement (Uline and Tschannen-Moran 2008; Uline, Tschannen-Moran, and Wosley 2009). In their work, Uline and Tschannen-Moran (2008, 2009) showed that for teachers from a selection of Virginia middle schools, controlling for student SES, teacher perception of the quality of their school facilities was positively associated with student achievement in English and mathematics. However, this type of research, in which building occupants are surveyed as to their perception of facility quality, has been critiqued on the grounds that perception of facility quality is a step removed from actual facility quality (Picus et al. 2005). Thus, if there is a relationship between facilities and student achievement, teacher surveys do little to help identify exactly what school leaders can do to increase facility quality rather than perception of quality.

Accordingly, the third main topic of the research surrounding school facilities and student achievement has focused on actual measurements of facility attributes and quality, while also assessing the direct effects of facility quality on student achievement—an engineering or structural perspective. Much of this literature has focused on two broad domains: the effects of individual design features, and engineering ratings of overall facility quality. In a recent extensive review, Woolner and colleagues (2007) noted that the architectural literature on school facility design features includes lighting, occupant movement and circulation, heating, air conditioning and air quality, view distance, color palettes, and classroom size, among others. Overall, the results of these studies demonstrated that teachers and students require a certain adequate level of lighting, ventilation, temperature control, acoustics, and air quality (Earthman 2000). However, evidence of substantive effects of specific attributes on student achievement beyond basic requirements is weak.

Linked to this engineering or structural perspective, another strand of facilities research has focused on linking overall ratings of facility quality with student achievement, which attempts to measure the direct effect of facility condition on student achievement. However, assessments of facility quality and [End Page 75] maintenance are difficult to come by. Some studies have used building age as a proxy for facility quality (McGuffey and Brown 1978; Berner 1993; O'Neill and Oates 2001); however, subsequent research has demonstrated that years since construction is a poor measure of overall facility quality since the current condition of the building is not represented (Picus et al. 2005; Schneider 2002). A different approach has been to provide school principals with a consistent facilities checklist to rate the quality of their buildings. One checklist is the Commonwealth Assessment of Physical Environment (CAPE), which allows principals to rate buildings on structural conditions (age, condition of windows, heating, air conditioning, and roofing), as well as cosmetic conditions (paint, cleanliness, and graffiti), so as to designate a school as substandard, standard, or above standard (Cash 1993). Although it is a subjective measure based on principal perception, CAPE provides a means to survey principals' perceptions of specific facility and engineering-related issues across their school and then provides a single rating. These ratings have been shown to be positively related to student achievement (Hines 1996; Earthman 2000). However, this past work, centered on a single rating representing a principal's evaluation of all building features, has been heavily critiqued by Picus et al. (2005).

In support of the engineering/structural perspective, Picus et al. (2005) argued that prior literature relating school facilities to achievement was highly problematic based on a variety of issues and thus had demonstrated little to no relationship to date. They identified five main issues with the past literature: (1) overall measures, (2) lack of data availability, (3) subjective evaluators, (4) district aggregates, and (5) a focus on descriptive rather than comparative or inferential statistics. First, facility-wide overall measures that create a single building condition score summarizing both the structural and the cosmetic/maintenance/disrepair aspects of a building make it difficult to identify which feature may or may not influence student achievement. Facility age was one of the first of these overall measures of building condition to be used as a proxy for both the structural and maintenance aspects of a school. In an early study relating building age to achievement, McGuffey and Brown (1978), using the district as the unit of analysis, found that the age of the building explained three percent of the variance in achievement as measured by the Iowa Test of Basic Skills (ITBS) in 4th and 8th grade and the Test of Academic Progress in 11th grade. However, as Picus et al. (2005) discussed, the problem with building age as a proxy for facility condition is that it does not account for the frequency of maintenance and building lifespan, so any relationship with achievement is difficult to interpret. While an improvement over building age, facility assessment instruments such as CAPE are more descriptive (Cash 1993; Earthman 2000) but still create a summary measure, [End Page 76] rather than disaggregating facility condition across multiple domains, such as age, structural/engineering issues, and maintenance and disrepair.

The second main issue identified in the literature by Picus et al. (2005) was the lack of control variables in the majority of studies that claimed to find an effect of facilities on achievement. Historically, many student and school characteristics that are known to co-vary with achievement have been left out of analyses because researchers have lacked access to these types of data. Indeed, many of these variables, such as school size, class size, the percentage of free-lunch students in a school, student SES, family structure, gender, and ethnicity, are usually not included. Third, Picus et al. (2005) noted that principal surveys of their facilities lacked objectivity, even with surveys such as CAPE. According to the authors, principals are mainly qualified to assess how the condition of the building functions within the educational environment, not the cost or repair burden on property management. Picus et al. (2005) state that only impartial third-party evaluators should be used to rate school facilities. Fourth, they pointed out the difficulty with district-wide aggregates. District aggregates do not account for the differences between how students respond to a specific school environment. Finally, they also noted that previous work employed statistical methods that were limited mostly to descriptive statistics, with little work done to control for covariates or to analyze generalizable statistical models.

Citing these previous issues, Picus et al. (2005) analyzed the relationship between school facility quality and student achievement using a statewide sample with state achievement test data spanning three years, aggregated to the school level. In the study, they compared building condition scores for every school building in the state of Wyoming from independent engineer ratings to school-level WYCAS scores—the Wyoming student achievement test. They used correlation and multiple regression to control for school-level percentage of free and reduced-price lunch students (school SES). The engineering checklist included an assessment of 22 different building attributes, including ratings of the foundations, ceilings, and floors. Ratings were then combined into a single building condition score for use in the analysis. According to Picus et al. (2005), their study addressed many of the issues from the previous literature by (a) independently rating each facility, (b) controlling for SES using inferential statistics, (c) using an engineering checklist to rate facility quality, and (d) comparing to a state standardized assessment. Thus, to date, the Picus et al. (2005) study is one of the most thorough studies assessing the relationship between facility condition and student achievement. However, the authors found no evidence to support such a relationship between facility condition and student achievement when controlling for SES. They stated that "the results of these analyses clearly indicate that there is essentially no [End Page 77] relationship between building condition... and student achievement... meaning higher quality buildings are unrelated to higher levels of student academic achievement" (Picus et al. 2005).

Although Picus and colleagues made very definitive pronouncements about the extent of a relationship between facility condition and student achievement (or lack thereof), there were four main issues with the study that make these conclusions problematic. First, while the study was one of the first to control for school SES, the authors failed to account for the nested nature of achievement data, which has been well documented in the multi-level modeling literature: students are nested within schools, which makes the data dependent, violating the assumption of independence in multiple regression (Hox 2002; Kennedy and Mandeville 2000; Raudenbush and Bryk 2002). Thus, if a study's focus is to estimate the effects of school-level conditions on individual students, then this nested nature of the data must be accounted for to accurately model the data and estimate effects. Second, a related issue is that school-level aggregates of achievement and student-level variables are highly problematic, especially given the implied outcome. In each of the studies of school facility condition and student achievement, the implication is that some set of school-level facility conditions influences student-level achievement. Attempting to estimate this effect without using student-level data or controlling for student-level covariates, and instead aggregating all data to the school level, ignores the complex nature of the data and does not estimate the coefficients and standard errors appropriately. This use of aggregates has been shown in the past to lead to inappropriately assessing each parameter's significance (Hox 2002; Raudenbush and Bryk 2002) and to falsely rejecting or failing to reject a hypothesis. Third, while the raters used to assess building condition in the Picus et al. (2005) study were independent, the engineering checklist, much like CAPE, aggregates both the structural and the maintenance conditions of the facility into a single facility condition score. In a response to Picus, Roberts (2009) stated the issue with this structural or engineering condition score as follows:

The 'single building' school facility assessment scores used by Picus et al. (2005) are similar to the engineering convention. Such a measurement choice appears unproblematic for property management purposes, but it is much less clear why such facility measures should bear a relationship to educational outcomes. Why, for example, should a global measure that includes the condition of boilers, roofs, ducts, and foundations have any systematic relationship to educational outcomes?

Thus, in opposition to Picus et al. (2005), Roberts (2009) argues that the engineering perspective, which includes ratings of the facility structure such as the foundation and the boiler, is not appropriate when assessing what effect [End Page 78] the condition of a facility may have on student achievement. Indeed, Roberts (2009) found that the administrator's rating of the buildings was not related to the engineer's evaluation, and that only the principal's assessment correlated with a survey of the learning environment. Roberts' main argument suggests that a measure of facility quality compared to student outcomes should represent only building features relevant to, and visible to, those within the learning community.

Despite Roberts' (2009) results indicating the possible value of the principal's perspective, the engineering survey utilized in his study was a cost-based analysis (deferred maintenance costs as a proportion of total replacement costs), similar to the measure used in Picus et al. (2005), which includes a representation of structural features. These structurally weighted condition scores do not account for the same types of building conditions as the principal evaluations. Thus, to date, the direct relationship between school facility quality, as a matter of visible maintenance and disrepair assessed by a third party, and student achievement has yet to be fully analyzed.

Framework of the Study

The present study attempts to address these multiple issues with the past research on facility condition and estimate the independent effects of facility maintenance on student achievement in four ways. First, to address the issue of the over-use of small and intact samples, we used a large nationally representative dataset that contained background data and standardized test scores at the student level and descriptive variables at the school level, the Education Longitudinal Survey (ELS) of 2002. Second, in response to the issue of subjective surveys of teacher or principal perception of school condition, we used the facility survey component of the ELS:2002 dataset, in which independent raters visited each school and rated the facilities on a multi-item checklist. Third, as Roberts (2009) acknowledges, structural or engineering assessments of schools may not have much to do with student achievement; consequently, we hypothesized that current facility maintenance and disrepair, rather than structural issues, may influence student achievement. Fourth, to estimate the direct effects of school-level variables, including facility maintenance and disrepair, on student-level achievement, we used a two-level hierarchical linear model to appropriately control for the nesting of students within schools. Thus, the research question for this study was, "to what extent does school facility disrepair directly affect student achievement while controlling for both student-level and school-level achievement covariates?" [End Page 79]

Methods

Sample

This study is a secondary analysis of the restricted-access Education Longitudinal Study of 2002 (ELS:2002) and first follow-up (F1). Originally collected by the National Center for Education Statistics (NCES), ELS:2002 is a longitudinal, nationally representative probability sample of about 15,400 U.S. high school students who were in grade 10 in the spring of 2002 (Ingles et al. 2004, 2007). In the 2002 base year (BY), students were tested in mathematics, and these students and their schools were surveyed on a variety of issues. In the 2004 first follow-up, when the students would have been in grade 12, the students were tested again in mathematics. In addition, independent facility raters were sent to each high school in 2002 and rated the quality of the facility maintenance on a 60-item survey (Planty et al. 2006). In the present study, we analyzed a subset of the ELS:2002 dataset, focusing on students in public schools with complete data on each of the variables analyzed—resulting in n=8,110 students in n=520 schools. For confidentiality reasons, sample sizes are rounded to the nearest ten.

Facility Maintenance and Disrepair

The ELS:2002 dataset provides a unique opportunity to examine the school-level effects of facility maintenance and disrepair on longitudinal student performance in mathematics. Mathematics standardized test-score performance was selected as the dependent variable in the analysis since it was assessed in both 2002 and 2004. To assess the relationship of facility maintenance and disrepair to student achievement, we selected a subset of items from the school facility checklist component of the base year ELS:2002 dataset that pertained to facility maintenance. A copy of the full facility checklist survey used by the facility raters can be found online through NCES (NCES 2002; Planty et al. 2006). Facility raters rated each item as either yes or no. As an example, raters were first instructed to:

Standing at the main entrance into the school, observe the school's front hallway(s) during a time when most students are in class (i.e., a class period). Take as much time as necessary to observe the hallway(s). For each item listed, indicate whether you observed it or not; Yes observed, No did not observe.

Or:

During a change in classes or other time when classrooms are not in session, enter one classroom in which high school students are taught. For each item listed, indicate whether you observed it in the classroom.

(NCES 2002). [End Page 80]

The raters then marked yes or no for items such as "trash on floors, trash overflowing from trashcans, and broken lights." The 18 items selected from the full facility survey for the present study are listed in Table 1 in the order of the ELS:2002 variable labels. All items that pertained to negative maintenance conditions were included in the analysis. Table 1 lists each item by variable name and ELS:2002 variable label, along with the percentage of schools rated "yes" for each item. Together these items had a Cronbach's alpha reliability of 0.729, indicating that the 18 items measure a common construct, which we termed facility disrepair.

To construct a facility disrepair composite variable for use in the subsequent analysis, we summed these 18 ratings across the 520 schools, coding "no, did not observe" as 0 and "yes, observed" as 1. However, the distribution was highly positively skewed: 64% of the schools had no indication of facility disrepair, 16% had one indication, 10% had two, 4% had three, and 3% ranged from four to twelve indications. Because the subsequent models require either dichotomous or normally distributed variables, we dichotomized facility disrepair, with 0 equal to no evidence of disrepair and 1 equal to one or more observations of disrepair. We used this dichotomized facility disrepair composite in the subsequent analyses.
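As an illustration, the composite construction and its internal-consistency check can be sketched as follows. The checklist matrix here is randomly generated for illustration only; it is not the ELS:2002 data, and the variable names are hypothetical.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (schools x items) matrix of 0/1 ratings."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# Hypothetical yes/no ratings: 520 schools x 18 disrepair items
checklist = (rng.random((520, 18)) < 0.05).astype(int)

composite = checklist.sum(axis=1)         # count of observed disrepair indicators
disrepair = (composite >= 1).astype(int)  # 0 = none observed, 1 = one or more
alpha = cronbach_alpha(checklist)
```

With independent random items, alpha will sit near zero; the reported 0.729 reflects the actual ELS:2002 ratings, not this simulated matrix.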

Table 1. ELS:2002 independent rater high school survey facility disrepair items

[End Page 81]

Variables Included in the Analysis

The independent variables included in the subsequent models were selected from the ELS:2002 school-level database based on past literature, indicating significant effects on student academic performance (Rumberger and Palardy 2005; Archibald 2006). Table 2 lists each variable with the mean, standard deviation, minimum, maximum and the ELS:2002 variable label and how the

Table 2. Descriptives and ELS:2002 labels and coding for variables included in the model

[End Page 82]

variable was coded for analysis. The dependent variable for all models was the grade 12 standardized mathematics test score in 2004. Student-level control variables included the following student background variables:

  • Female

  • African American

  • Hispanic

  • If the student was from a non-traditional family

  • If the student had transferred from their 2002 high school to a different high school in 2004

  • Socioeconomic status (SES)

  • Student's grade 10 standardized mathematics test score in 2002

School-level variables included the following: first, whether the school was urban or suburban, with rural as the reference group. For school enrollment, following the recommendations and procedures from the extensive literature on the effects of school size on student performance (Leithwood and Jantzi 2009; Rumberger and Palardy 2005), school size was split into four categories: (1) small enrollment, (2) large enrollment, and (3) extra-large enrollment, with (4) medium enrollment as the reference category. Three variables, percent free-lunch students, percent minority students, and student-teacher ratio, were Common Core of Data (CCD) variables imported into ELS:2002 by NCES (Ingles et al. 2004). Finally, facility disrepair was constructed as discussed above.
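The reference-category coding described above can be sketched in pandas. The category values below are hypothetical examples, not the ELS:2002 coding itself.

```python
import pandas as pd

# Hypothetical school-level categorical variables
schools = pd.DataFrame({
    "urbanicity": ["urban", "suburban", "rural", "urban"],
    "enrollment": ["small", "medium", "large", "extra_large"],
})

# Dummy-code each categorical variable, dropping the reference category:
# rural for urbanicity and medium for enrollment size
urban_dummies = pd.get_dummies(schools["urbanicity"]).drop(columns="rural")
size_dummies = pd.get_dummies(schools["enrollment"]).drop(columns="medium")
coded = pd.concat([urban_dummies, size_dummies], axis=1)
```

Each remaining dummy is then interpreted as a contrast against the dropped reference group (e.g., the urban coefficient compares urban schools to rural ones).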

Analytic Models

To appropriately estimate the independent effects of school facility maintenance and disrepair on student mathematics achievement, a fixed effects two-level hierarchical linear model (HLM) was used following the recommendations of the multi-level modeling literature (Hox 2002; Raudenbush and Bryk 2002). Hierarchical linear models appropriately model the dependent nature of student and school-level data, nesting students within schools. This allows for the decomposition of the variance in the dependent variable into student-level and school-level variance components, estimating the effects of each variable included in the model at the appropriate level—for a detailed review of HLM, see Hox 2002. HLM allows for the appropriate estimation of school-level effects on student-level outcomes, controlling for the included variables at both the student and school levels. In general, the equations can be expressed as:

Level 1: Yij = π0j + π1j Xij + eij

Level 2: π0j = β00 + β01 Wj + r0j

[End Page 83]

Where:

  • Yij = Dependent outcome variable for student i in school j, here grade 12 mathematics

  • Xij = Student-level covariates

  • Wj = School-level covariates

  • π0j = The intercept for school j, varying across schools

  • π1j = The slope of each student-level covariate

  • eij = The student-level residual

  • β00, β01, and r0j = The school-level intercept, the coefficient for each school-level covariate, and the school-level random effect, respectively

All models were estimated using the statistical program HLM 6.04 (Raudenbush et al. 2004).
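The study used HLM 6.04; as a rough open-source analogue, a two-level random-intercept model of this form can be sketched with statsmodels' MixedLM on simulated data. All variable names and parameter values below are hypothetical, and this is a sketch of the model family, not a reproduction of the study's analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_schools, per_school = 40, 25

# Simulate students (level 1) nested in schools (level 2)
school = np.repeat(np.arange(n_schools), per_school)
disrepair = np.repeat(rng.integers(0, 2, n_schools), per_school)  # school-level Wj
math10 = rng.normal(50.0, 10.0, n_schools * per_school)           # student-level Xij
u = np.repeat(rng.normal(0.0, 2.0, n_schools), per_school)        # school effect r0j
math12 = 10.0 + 0.8 * math10 + u + rng.normal(0.0, 5.0, len(school))

df = pd.DataFrame({"school": school, "math12": math12,
                   "math10": math10, "disrepair": disrepair})

# Random intercept by school; the school-level predictor (disrepair)
# enters the intercept equation and appears as a fixed effect
model = smf.mixedlm("math12 ~ math10 + disrepair", df, groups=df["school"])
result = model.fit()
```

Because the simulated disrepair effect is zero, its estimated coefficient should be small relative to its standard error, mirroring the null direct effect reported in the study.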

Sample Weights

The sampling strategy for ELS:2002 was not a simple random sample. Rather, the survey used a complex probabilistic sampling procedure to allow for generalizations to all 3.8 million students in the U.S. who were in grade 10 in 2002 (Ingles et al. 2004). However, like most inferential statistical procedures, HLM assumes a simple random sample. Since this was not the case, a normalized weighting procedure was employed, as is recommended for analyses of large national databases (Strayhorn 2009). The level 1 component of each model was weighted using the normalized F1 panel weight F1XPNLWT, while level 2 was weighted using the normalized school weight BYSCHWT. Thus, rather than assuming that each case should be counted equally, applying the appropriate weights at each level adjusts the estimates and standard errors to better reflect the sampling procedure and each case's relative representation in the population. Because normalized weights were used, the sample sizes reported are unchanged by the weighting procedure.
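Normalizing raw design weights so they sum to the sample size (rather than the population size) can be sketched as follows; the raw weight values here are made up for illustration.

```python
import numpy as np

def normalize_weights(raw):
    """Rescale raw sampling weights to sum to the sample size n,
    preserving each case's relative representation."""
    w = np.asarray(raw, dtype=float)
    return w * (len(w) / w.sum())

raw = np.array([120.0, 80.0, 200.0, 50.0])  # hypothetical raw panel weights
norm = normalize_weights(raw)
```

The relative weights are unchanged (a case with twice the raw weight still counts twice as much), but the weighted n equals the actual n, so reported sample sizes are not inflated by the weighting.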

Results

The central aim of this study was to estimate the direct effects of facility maintenance and disrepair on student mathematics achievement during the last two years of high school. We start by examining how facility disrepair varied across the different student- and school-level variables included in the model. Next, we examine the two-level HLMs. We estimate an unconditional model first to examine the amount of variance in mathematics test scores at the student and school levels. We then estimate a sequence of two-level models to examine the direct effects of facility disrepair on overall student mathematics achievement in grade 12 and on student growth in mathematics achievement between grades 10 and 12, appropriately controlling for multiple student- and school-level covariates. [End Page 84]

Examining Facility Disrepair Variation

To examine the variation in facility disrepair and differences between student and school-level variables, we disaggregated the student and school-level variables by facility disrepair (Table 3). For dichotomous variables, frequencies are reported as percentages, and a chi-square test was used to assess whether there was a statistically significant difference by facility disrepair. For continuous variables, means were compared using a two-tailed independent t-test. Facility disrepair was dichotomized as either no indicators of disrepair or one or more indicators of disrepair (see Methods). For the variables examined, interesting differences emerged from these descriptive statistics. While the gender of the students in the sample appeared to be evenly distributed between facilities—statistically equivalent percentages of females attended schools with no indicators of facility disrepair versus one or more indicators—other student background variables

Table 3. Comparisons of student and school-level variables, disaggregated by facility disrepair.

[End Page 85]

were statistically different between the two types of facilities. Supporting previous research that has indicated an uneven distribution of facility quality across different student background variables (Planty et al. 2006; Ryan 1999), our data indicate that African American and Hispanic students more often attended schools with one or more facility disrepair indicators. In addition, students from non-traditional families in which there is only one parent or guardian in the home, as well as students who transferred high schools, more often attended schools with one or more disrepair indicators than students who had two parents in the home or who did not transfer high schools.

In addition, multiple school-level variables varied by facility disrepair, including school enrollment, the percentage of minority students enrolled, and the student-teacher ratio; at the descriptive level, school location and student SES did not appear to vary by disrepair. In comparison to schools with no facility disrepair indicators, schools with one or more disrepair indicators enrolled more students, served a higher percentage of minority students, and had larger class sizes as indicated by higher student-teacher ratios. To date, this is the first study to demonstrate these differences using statistical tests for differences, a large nationally representative sample, disaggregation at both the student and school levels, independent facility raters, and a specific rating for facility maintenance and disrepair. Nevertheless, as demonstrated in Table 3, neither grade 10 nor grade 12 mathematics test scores appeared to vary by facility disrepair. This finding would appear to support the past literature reviewed above indicating that direct measures of facility condition are not related to student achievement. However, as discussed above, such descriptive statistics give no indication of the effect of each variable on the outcome when controlling for the other variables. We turn next to examining the controlled influence of each of these variables on student achievement.
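The two kinds of group comparisons used for Table 3 can be sketched as follows. The counts and student-teacher ratios below are hypothetical, chosen only to illustrate the chi-square test of independence (dichotomous variables) and the two-tailed independent t-test (continuous variables), not drawn from the ELS:2002 data:

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 counts: rows = facility disrepair (none / one or more),
# columns = a dichotomous student variable (e.g. minority enrollment).
table = np.array([[300, 100],
                  [150, 120]])
chi2, p, dof, expected = stats.chi2_contingency(table)

# Hypothetical student-teacher ratios for the two groups of schools,
# compared with a two-tailed independent-samples t-test.
group_none = np.array([14.2, 15.1, 13.8, 16.0, 14.9])
group_disrepair = np.array([17.3, 16.8, 18.1, 17.9, 16.5])
t, p_t = stats.ttest_ind(group_none, group_disrepair)

print(p < 0.05, p_t < 0.05)  # whether each difference is significant
```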

A Two-Level Hierarchical Linear Model

To appropriately control for the nested nature of students within schools, we used a two-level hierarchical linear model (HLM) (see Methods). The dependent variable was student grade 12 standardized mathematics test score, with students at level one nested within schools at level two. Following the recommendations of the multi-level modeling literature (Hox 2002; Raudenbush and Bryk 2002), we first estimated an unconditional "empty" model with no predictors at either level, which allows for the estimation of the baseline variance in the outcome—individual student grade 12 standardized mathematics test score—as well as the decomposition of the variance at both the student and [End Page 86] school levels. The intra-class correlation for the unconditional model equaled 0.1455, indicating that 14.55% of the variance in grade 12 mathematics scores was at the school level, with 85.45% at the student level. This replicates the long history of research in the U.S. showing that the variance in student achievement within schools is greater than the variance between schools (Coleman 1990). Thus, only about one-seventh of the variance in student achievement is explainable by the school-level variables, including facility disrepair.
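The intra-class correlation reported above is simply the between-school share of the total variance from the unconditional model. A minimal sketch follows; the variance components are hypothetical values chosen to reproduce the reported ICC, not the actual HLM estimates:

```python
# ICC from the variance components of an unconditional two-level model:
# the proportion of total variance lying between schools (level 2).
def icc(between_school_var, within_school_var):
    return between_school_var / (between_school_var + within_school_var)

# Hypothetical components scaled so the ICC matches the reported 0.1455:
# roughly one-seventh of the variance is between schools.
print(round(icc(14.55, 85.45), 4))  # 0.1455
```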

To assess the direct effects of school-level facility disrepair on student-level mathematics achievement, we then estimated two two-level hierarchical linear models (Table 4, Models A and B). Again, student grade 12 mathematics achievement is the dependent outcome variable in all models. Table 4 lists the coefficients, standardized coefficient effect sizes, and standard errors for each variable in both models. Model A includes both school and student-level variables, including school facility disrepair. For the first time, controlling for

Table 4. Two-Level Hierarchical Linear Models Estimating Grade 12 Mathematics Standardized Test Scores.

[End Page 87]

each of the background and demographic variables at the school and student levels, Model A estimates the direct effect of facility disrepair on overall student mathematics achievement in grade 12 (Table 4, Model A). As indicated in Table 4, facility disrepair was not significant in the model. Therefore, our findings suggest that facility disrepair had no direct effect on overall grade 12 mathematics achievement, controlling for the multiple covariates in the model and the hierarchical nested nature of the data. For the other variables included at the school level, the model replicates previous research (Rumberger and Palardy 2005; Archibald 2006; Lee and Bryk 1989; Printy 2008; Schreiber 2002), indicating a significant negative effect of the school-level percentage of free-lunch students. At the student level, the female, African American, Hispanic, and transfer variables were all negative and significant in the model, while student SES was positive. The coefficients for each of these variables were in the direction predicted by past literature (Archibald 2006; Tate 1997; Hanushek 1996). The non-traditional family variable was not significant, most likely due to the inclusion of the SES variable.

Model B includes all of the variables of Model A and adds student grade 10 mathematics standardized test scores (Table 4, Model B). One critique of an overall achievement model such as Model A is that it is not focused on achievement during the period of data collection. Without the inclusion of a mathematics pre-test score, the effects of any significant variables in Model A are effects across a student's entire life course leading up to the mathematics score obtained in grade 12. Including a pre-test score in Model B—the grade 10 standardized mathematics score—focuses the model not on overall achievement but on achievement gains from grade 10 through grade 12. Since the facilities survey of ELS:2002 occurred during 2002, when students were in grade 10, the final model, Model B, appropriately estimates the potential direct effect of facility disrepair on student achievement from grade 10 to grade 12, controlling for prior achievement and the effects of schooling prior to grade 10. As with Model A, the results of fitting Model B indicate that, controlling for the variables included in the two-level model, facility disrepair had no direct effect on growth in mathematics achievement between grades 10 and 12. Examining the other variables in the model, as expected, grade 10 mathematics test score was the strongest factor in the model, as evidenced by its large standardized coefficient. In addition, a large proportion of the variance was explained at both levels with the inclusion of grade 10 mathematics, replicating previous research (Rivkin, Hanushek, and Kain 2005). While not a focus of the study, the significant school-level variables are of interest. Controlling for the [End Page 88] other variables at levels 1 and 2, urban was positive and significant, small enrollment was also positive and significant, and percent free lunch was negative and significant.
The urban finding is somewhat unexpected. The positive coefficient for urban may be due to increased opportunities and options in an urban environment for a high school student, especially when controlling for school size, school SES, class size and the student-level variables.
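For readers who wish to fit a comparable random-intercept model outside of HLM 6.04, a minimal sketch using Python's statsmodels is shown below. The data are simulated (not the ELS:2002 data), the variable names are illustrative assumptions, and only a pretest and one school-level predictor are included rather than the full covariate set:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate students nested in schools, with a school random intercept,
# a grade 10 pretest, and a school-level disrepair indicator that has
# no true effect on the grade 12 outcome (mirroring the null finding).
rng = np.random.default_rng(0)
n_schools, n_per = 40, 20
school = np.repeat(np.arange(n_schools), n_per)
u = rng.normal(0, 2, n_schools)[school]            # school random intercepts
math10 = rng.normal(50, 10, n_schools * n_per)     # simulated pretest
disrepair = rng.integers(0, 2, n_schools)[school]  # school-level indicator
math12 = 5 + 0.9 * math10 + 0.0 * disrepair + u + rng.normal(0, 5, n_schools * n_per)

df = pd.DataFrame(dict(school=school, math10=math10,
                       disrepair=disrepair, math12=math12))

# Random-intercept ("two-level") model analogous to Model B: grade 12
# scores on the grade 10 pretest plus disrepair, grouped by school.
result = smf.mixedlm("math12 ~ math10 + disrepair", df,
                     groups=df["school"]).fit()
print(result.params["math10"])  # recovers a slope near the simulated 0.9
```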

Another critique of both models is that facility disrepair could instead be modeled as none, one to two issues, or three or more issues, recognizing that facilities with multiple disrepair issues may have an influence while facilities with only one or two may not. Both Models A and B were therefore re-analyzed using three categories of facility disrepair—one to two disrepair issues (26% of the schools) and three or more (10% of the schools)—with none as the reference group (64% of the schools). Again, disrepair was not significant in either model (data not shown).
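The three-category specification amounts to binning a disrepair count and dummy-coding with "none" as the reference group. A small sketch with made-up counts follows; the actual ELS:2002 coding procedure may differ:

```python
import pandas as pd

# Hypothetical per-school disrepair counts, binned into the three
# categories described in the text: none, one to two, three or more.
counts = pd.Series([0, 1, 3, 2, 5, 0])
category = pd.cut(counts, bins=[-1, 0, 2, float("inf")],
                  labels=["none", "one_to_two", "three_plus"])

# Dummy-code, omitting "none" as the reference category; the two
# remaining indicator columns enter the model as predictors.
dummies = pd.get_dummies(category).drop(columns="none")
print(list(dummies.columns))  # ['one_to_two', 'three_plus']
```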

Discussion

The purpose of this study was to address the previous methodological issues in the research on the relationship of school facility condition to student achievement. We addressed the main issues by using a large nationally representative dataset, independently rated school facilities, a focus on facility maintenance and disrepair rather than on structural features, multiple control variables known to co-vary with student achievement, and a two-level HLM. We found no evidence for a direct effect of facility disrepair on student grade 12 mathematics achievement. We assert that this is the most controlled study to date and that our results are generalizable: as a replication and extension of Picus et al. (2005), they indicate that there most likely is no direct effect of facility condition on student achievement beyond the necessities of sufficient heating, lighting, roofing, and the like. However, while we argue that our findings are robust, we also recommend caution in interpreting the results. Because of the dataset used in the analysis, the results apply only to narrow definitions of school (high school), student achievement (standardized mathematics test scores), and facility condition (facility disrepair).

We analyzed data from the final two years of high school for students who were in grade 10 in 2002. While the data were from a nationally representative dataset, the sample was restricted to the end of the high school experience. It may be that facility condition does directly influence student achievement, but in the elementary, middle, or early high school years. In addition, we focused on mathematics achievement. Facilities may have a direct effect on other student outcomes, such as reading achievement, graduation or dropping out, whether a student proceeds on to post-secondary education, or more affective outcomes [End Page 89] such as discipline, participation in extracurricular activities, or enjoyment of school, among many others. Our results do not speak to these issues, and we encourage future research to replicate or refute the results presented here using data from other grade levels, as well as to test the effects on other student outcomes.

While we did not find a direct effect of facilities on achievement, we did identify differences in student and school attributes by facility disrepair. As the initial descriptive statistics demonstrated, facility disrepair does not appear to be evenly distributed across the sample, but varies by student ethnicity, poverty, and multiple school variables. Yet our results indicate that when we applied a controlled longitudinal nested model of student achievement, facility disrepair did not have a direct effect on student achievement. This finding goes against intuition: since facility disrepair varies by student and school demographics and background variables, it stands to reason that cleaner, better-maintained facilities would relate to higher student achievement. In many respects, this point is similar to the longstanding debate over the direct effects of finance on student achievement (Archibald 2006; Hanushek 1997). While we know that achievement gains in subjects such as mathematics are influenced the most by schools (Nye, Konstantopoulous, and Hedges 2004), decades of research have continued to show weak to no direct effect of spending on student achievement (Hanushek 1996, 1997). Rather, much of the recent literature indicates that it is not how much a school spends but how it spends it, implying a mediated model of finance operating through school personnel and procedures that then influence student achievement (Hanushek 1996, 1997; Grubb 2006; Perez and Socias 2008).

Thus, our findings suggest a similar theory: the influence of facility maintenance and disrepair may not directly affect student achievement, but may operate through a mediated model (Figure 1). Indeed, recent research points in a similar direction. As reviewed above, much of the research since Picus et al. (2005) has focused on teacher and administrator perceptions of facilities—rather than on independently rated facility quality as provided here—and has found a positive relationship between facility perception and student outcomes. Authors have begun to move towards positing a mediated model of facility effects over a direct effects model. As stated by Woolner et al. (2007), "the relationship between people and their environment is complex and therefore any outcomes from a change in setting are likely to be produced through an involved chain of events" (p. 61). Hence, we propose the mediated model diagrammed in Figure 1 as a potential next step for research in this domain.

The recent work linking educators' perceptions of their facilities to student achievement (Uline and Tschannen-Moran 2008; Uline, Tschannen-Moran, and Wosley 2009; [End Page 90]

Figure 1. Proposed mediated model of facility quality and achievement.

Earthman and Lemasters 2009; Roberts 2009; Fuller et al. 2009) provides the initial evidence for such mediated models of facility effects. However, we argue for three main additions to this line of research. First, we found that facility quality varies by student and school background variables, but that student achievement in mathematics did not vary by facility quality. Thus, on the surface, it appears that our findings suggest that independent ratings of facility maintenance and repair are not related to student achievement. However, that may not be the case. Instead, if we take the mediated effects approach, it may be that actual facility quality, whether structural or maintenance-related, directly affects educators' perceptions of their facilities. Those perceptions then influence the overall academic and motivational climate of the school, which in turn influences student achievement up or down (Figure 1). Our contribution to the mediated model theory is to encourage the inclusion of independent ratings of the facility, to help control for the subjectivity of the ratings, to gain an accurate understanding of what is being rated, and, if an effect is found, to identify exactly what should be changed. Second, we argue for the use of a specific, articulated theoretical mediated model, and then testing of such a model using structural equation modeling (SEM). In the present study, we used a two-level hierarchical linear model to appropriately estimate the direct effects of facility disrepair on student achievement. Similar methods exist in the SEM and multi-level SEM field (Kline 2004) that could be used to test such a mediated model. Third, such a mediated [End Page 91] model should be longitudinal, and conceivably could work through a feedback mechanism over multiple years in which changes in student achievement influence the variables within the model the following year (Figure 1).
We will work towards integrating these types of models and statistics in future work.

In conclusion, the implication of our findings for administrators, policymakers, and researchers is that while we were unable to find a direct effect of facility disrepair on student achievement, this does not necessarily mean that facilities and achievement are unrelated. As reviewed in the past literature, adequate facilities are most likely necessary for student achievement, but differences in facility maintenance, while unequally distributed across students and schools, may not be sufficient to move test scores either up or down. Given the amount of taxpayer resources devoted to school facilities, we, along with many of the cited authors, urge continued research in this area to help administrators best allocate funding for school improvement, be it through improved facilities or not.

Alex J. Bowers

Alex J. Bowers is an Assistant Professor in the College of Education and Human Development, Department of Educational Leadership and Policy Studies at the University of Texas at San Antonio.

Angela Urick

Angela Urick is a doctoral student in the Department of Educational Leadership and Policy Studies at The University of Texas at San Antonio and the Managing Editor of The Review of Higher Education.

References

Archibald, Sarah. 2006. Narrowing in on educational resources that do affect student achievement. Peabody Journal of Education, 81 (4):23-42.
Arsen, David, and Thomas Davis. 2006. Taj Mahals or decaying shacks: Patterns in local school capital stock and unmet capital need. Peabody Journal of Education, 81 (4):1-22.
Berner, M. 1993. Building conditions, parental involvement, and student achievement in the District of Columbia public school system. Urban Education, 28 (1):6-29.
Bowers, Alex J., Scott Alan Metzger, Matthew Militello. 2010a. Knowing the Odds: Parameters that Predict Passing or Failing School District Bonds. Educational Policy, 24(2), 398-420.
Bowers, Alex J., Scott Alan Metzger, Matthew Militello. 2010b. Knowing What Matters: An Expanded Study of School Bond Elections in Michigan, 1998-2006. Journal of Education Finance, 35(4), 374-396.
Buckley, J., M. Schneider, and Y. Shang. 2009. The effects of school facility quality on teacher retention in urban school districts. National Clearinghouse for Educational Facilities, 2004 [cited July 27 2009]. Available from www.edfacilities.org/pubs/teacherretention.htm.
Cash, C. 1993. Building conditions and student achievement and behavior, Virginia Polytechnic Institute and State University, Blacksburg, VA.
Coleman, J. S. 1990. Equality and achievement in education. San Francisco: Westview Press.
Duncombe, William, and Wen Wang. 2009. School facilities funding and capital-outlay distribution in the states. Journal of Education Finance, 34 (3):324-350.
Earthman, Glen I. 2000. The impact of school building conditions, student achievement, and behavior. In The Appraisal of Investments in Educational Facilities. Paris: OECD - Organisation for Economic Co-operation and Development.
Earthman, Glen I., and Linda K. Lemasters. 2009. Teacher attitudes about classroom conditions. Journal of Educational Administration, 47 (3):323-335.
Fuller, Bruce, Luke Dauter, Adrienne Hosek, Greta Kirschenbaum, Deborah McKoy, Jessica Rigby, and Jeffrey M. Vincent. 2009. Building schools, rethinking quality? Early lessons from Los Angeles. Journal of Educational Administration, 47 (3):336-349.
GAO. 1995. School facilities: Conditions of America's schools. edited by U. S. G. A. Office. Washington, D.C.: U.S. General Accounting Office.
Grubb, W.N. 2006. When money might matter: Using NELS88 to examine the weak effects of school funding. Journal of Education Finance, 31 (4):360-378. [End Page 92]
Hanushek, Eric A. 1996. School resources and student performance. In Does money matter: The effect of school resources on student achievement, edited by G. Burtless. Washington, DC: Brookings Institution Press.
Hanushek, Eric A. 1997. Assessing the effects of school resources on student performance: An update. Educational Evaluation and Policy Analysis, 19 (2):141-164.
Harris, Mary H., and Vincent G. Munley. 2002. The sequence of decisions facing school district officials in the bond issuing process: A multistage model. Journal of Education Finance, 28 (1):113-131.
Hawkins, H., and B Overbaugh. 1988. The interface between facilities and learning. CEFP Journal, July-August:4-7.
Hill, Jason G., and F Johnson. 2005. Revenues and expenditures for public elementary and secondary education: School year 2002-03. Washington, DC: U.S. Department of Education, National Center for Education Statistics.
Hines, E. 1996. Building condition and student achievement and behavior. Virginia Polytechnic Institute and State University, Blacksburg, VA.
Hox, Joop. 2002. Multilevel analysis: Techniques and applications. Mahway, New Jersey: Lawrence Erlbaum Associates.
Ingle, W.K., P.A. Johnson, and R.A. Petroff. In press. The politics and resource costs of levy campaigns in Ohio School Districts.
Ingles, Steven J., Daniel J. Pratt, James E. Rogers, Peter H. Siegel, Ellen S. Stutts, and Jeffrey A. Owings. 2004. Education longitudinal study of 2002: Base year data file user's manual. Washington, D.C.: National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education.
Ingles, Steven J., Daniel J. Pratt, David Wilson, Laura J. Burns, Douglas Currivan, James E. Rogers, and Sherry Hubbard-Bednasz. 2007. Education longitudinal study of 2002: Base-year to second follow-up data file documentation. Washington, DC: National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education.
Johnson, P.A., and W.K. Ingle. 2009. Campaign strategies and voter approval of school referenda: A mixed methods analysis. Journal of School Public Relations, 30 (1):51-71.
Kennedy, Eugene, and Garrett Mandeville. 2000. Some methodological issues in school effectiveness research. In The international handbook of school effectiveness research, edited by C. Teddlie and D. Reynolds. New York: Falmer Press.
Kline, R.B. 2004. Principles and practice of structural equation modeling. 2nd ed. New York: Guilford Press.
Lee, Valerie E., and Anthony S. Bryk. 1989. A multilevel model of the social distribution of high school achievement. Sociology of Education, 62 (3):172-192.
Leithwood, Kenneth, and Doris Jantzi. 2009. A review of empirical evidence about school size effects: A policy perspective. Review of Educational Research, 79 (1):464-490.
Lowe, J. 1990. The interface between educational facilities and learning climate in three elementary schools. Texas A&M University, College Station, TX.
Maxwell, L. 2000. A safe and welcoming school: What students, teachers and parents think. Journal of Architectural and Planning Research, 17 (4):271-282.
McGuffey, C., and C. Hendricks Brown. 1978. The impact of school building age on school achievement in Georgia. The Educational Facility Planner, 2:5-19.
Militello, Matthew, Scott Alan Metzger, Alex J. Bowers. 2008. The High School "Space Race": Implications of a Market-Choice Policy Environment for a Michigan Metropolitan region. Education and Urban Society, 41(1), 26-54.
Muir, Edward, and Krista Schneider. 1999. State initiatives and referenda on bonds: Analysis of one solution for the school infrastructure crisis. Journal of Education Finance, 24 (4):415-433.
NCEF. 2010. National Clearinghouse for Educational Facilities. National Institute of Building Sciences 2010 [cited Feb 24 2010]. Available from http://www.edfacilities.org/ds/index.cfm.
NCES. 2000. Condition of America's public school facilities: 1999. edited by N. C. f. E. Statistics. Washington D.C. [End Page 93]
———. 2010. Education longitudinal study of 2002 facilities checklist. National Center for Education Statistics, U.S. Department of Education 2002 [cited Jan. 15 2010]. Available from http://nces.ed.gov/surveys/els2002/pdf/Facilities_checklist_baseyear.pdf.
Nye, B., Spyros Konstantopoulous, and L.V. Hedges. 2004. How large are teacher effects? Educational Evaluation and Policy Analysis, 26:237-257.
O'Neill, D., and A. Oates. 2001. The impact of school facilities on student achievement, behavior, attendance and teacher turnover rate in central Texas middle schools. CEFPI Educational Facility Planner, 36 (3):14-22.
Perez, M., and M. Socias. 2008. Highly successful schools: What do they do differently and at what cost. Education Finance and Policy, 3 (1):109-129.
Picus, Lawrence O., Scott F. Marion, Naomi Calvo, and William J. Glenn. 2005. Understanding the relationship between student achievement and the quality of educational facilities: Evidence from Wyoming. Peabody Journal of Education, 80 (3):71-95.
Planty, Mike, Jill F. DeVoe, Jeffrey A. Owings, and Kathryn Chandler. 2006. An examination of the conditions of school facilities attended by 10th-grade students in 2002. Washington, DC: U.S. Department of Education, National Center for Education Statistics.
Printy, Susan M. 2008. Leadership for teacher learning: A community of practice perspective. Educational Administration Quarterly, 44 (2):187-226.
Raudenbush, Stephen W., and Anthony S. Bryk. 2002. Hierarchical linear models: Applications and data analysis methods. 2nd ed. Thousand Oaks: Sage.
Raudenbush, Stephen W., Anthony S. Bryk, Yuk Fai Cheong, Richard Congdon, and Mathilda duToit. 2004. HLM 6: Hierarchical linear and nonlinear modeling. Lincolnwood, IL: Scientific Software International, Inc.
Rivkin, Steven G., Eric A. Hanushek, and John F. Kain. 2005. Teachers, schools and academic achievement. Econometrica, 73 (2):417-458.
Roberts, Lance W. 2009. Measuring school facility conditions: an illustration of the importance of purpose. Journal of Educational Administration, 47 (3):368-380.
Rumberger, Russell W., and Gregory J. Palardy. 2005. Test scores, dropout rates, and transfer rates as alternative indicators of high school performance. American Educational Research Journal, 42 (1):3-42.
Ryan, J. 1999. The influence of race in school finance reform. Michigan Law Review, 98 (2):432-481.
Schneider, Mark. 2002. Do school facilities affect academic outcomes? Washington, DC: National Clearinghouse for Educational Facilities.
Schneider, Mark. 2003. Linking school facility conditions to teacher satisfaction and success. Washington, DC: National Clearinghouse for Educational Facilities.
Schreiber, J. 2002. Institutional and student factors and their influence on advanced mathematics achievement. The Journal of Educational Research, 95 (5):274-286.
Sielke, Catherine C. 2001. Funding school infrastructure needs across the states. Journal of Education Finance, 27 (2):653-662.
Sielke, Catherine C., John Dayton, Thomas Holmes, and Anne L. Jefferson. 2001. Public school finance programs of the U.S. and Canada: 1998-99. edited by N. C. f. E. S. U.S. Department of Education. Washington, D.C.
Strayhorn, Terrell L. 2009. Acessing and analyzing national databases. In Handbook of data-based decision making in education, edited by T. J. Kowalski and T. J. Lasley. New York, NY: Routledge.
Tate, W. 1997. Race-ethnicity, SES, gender, and language proficiency trends in mathematics achievement: An update. Journal of Research in Mathematics Education, 28 (6):652-679.
Uline, Cynthia L., and Megan Tschannen-Moran. 2008. The walls speak: the interplay of quality facilities, school climate, and student achievement. Journal of Educational Administration, 46 (1):55-73.
Uline, Cynthia L., Megan Tschannen-Moran, and Thomas DeVere Wosley. 2009. The walls still speak: the stories occupants tell. Journal of Educational Administration, 47 (3):400-426.
Woolner, Pamela, Elaine Hall, Steve Higgins, Caroline McCaughey, and Kate Wall. 2007. A sound foundation? What we know about the impact of environments on learning and the implications for Building Schools for the Future. Oxford Review of Education, 33 (1):47-70. [End Page 94]

Footnotes

* A previous version of this article was presented at the annual meeting of the American Education Finance Association (AEFA) in Richmond VA, in March of 2010.
