Understanding Causal Influences on Educational Achievement through Analysis of Differences over Time within Countries
Jan-Eric Gustafsson

The International Association for the Evaluation of Educational Achievement (IEA) was founded in 1959 by a small group of education and social science researchers with the purpose of using international comparative research to understand the great complexity of factors influencing student achievement in different subject fields. A popular metaphor was that they wanted to use the world as an educational laboratory. The first study, which investigated mathematics achievement in twelve countries, was conducted in 1964.1 Since the publication of that study, different groups of researchers have published, under the auspices of the IEA, a large number of studies of educational achievement in different countries across a wide range of subject areas. For example, the first round of the TIMSS study in 1995 (at that time TIMSS was an acronym for Third International Mathematics and Science Study), which investigated knowledge and skill in mathematics and science, included forty-five participating countries.2 For the third round of TIMSS (TIMSS now stands for Trends in International Mathematics and Science Study),3 which was conducted in 2003, more than fifty countries participated, and an even larger number of countries will participate in the fourth round in 2007. Not only has the number of participating countries increased dramatically, but so too has the frequency of repetition; the studies of mathematics, science, and reading are now designed to capture within-country achievement trends and are therefore repeated every fourth or fifth year. The data collected in these studies have been used to generate a vast amount of knowledge about differences in achievement from country to country.
Because the data have been made freely and easily available, they have been used in a large number of secondary analyses by researchers in different fields such as economics, education, sociology, and didactics of different subject matter areas. Voices of criticism also have been raised against them, however. One line of criticism holds that the international studies have come to serve primarily as a source of benchmarking data for purposes of educational policy and debate, thereby becoming a means of educational governance that reduces the importance and influence of national policymakers.4 This benchmarking function grew in importance during the 1990s, in part because the international studies adopted the methodology developed in the National Assessment of Educational Progress (NAEP) in the United States in the 1980s, which was based on complex item-response theory and matrix-sampling designs.5 This methodology was well suited for making efficient and unbiased estimations of country-level performance. In addition, the increasing number of participating countries made the benchmarking function more interesting. That became even more pronounced when the Organization for Economic Cooperation and Development (OECD) also started international surveys of educational achievement through the Program for International Student Assessment (PISA).6 The OECD presence led to even greater emphasis on the economic importance of the educational results. Another line of criticism contends that the “world educational laboratory” has not been particularly successful in disentangling the complex web of factors that produce a high level of knowledge and skills among students. Even though advances have been made, a great deal remains to be learned, and doubts have been expressed that cross-sectional surveys offer the appropriate methodology for advancing this kind of knowledge. 
Indeed, Allardt argues that there is little evidence that comparative surveys in any field of social science have been able to generate knowledge about causal relations.7 He points to the great complexity of the phenomena investigated, and to the uniqueness of different countries, as the reasons for this. The observation that cross-sectional surveys do not easily allow causal inference is made in many textbooks on research design, so the methodological challenges are well known. Furthermore, the international studies of educational achievement are not based upon an elaborated theoretical framework, which makes it difficult to apply the analytical methods developed for making causal inference from cross-sectional data.8 It would be unfortunate if these difficulties prevented researchers from seeking the causes behind the patterns revealed in comparative educational surveys. The search for explanations is one of the main aims of scientific research, and explanations also are needed if policymakers are to take full advantage of the benchmarking results. This chapter argues that causal inferences might...
