Evolving Analytic Approaches
Policy analysis has grown increasingly reliant on the random assignment experiment—a research method whereby participants are sorted by chance into either a program group that is subject to a government policy or program, or a control group that is not. Because the groups are randomly selected, they do not differ from one another systematically, so any differences between the groups at the end of the study can be attributed solely to the influence of the program or policy. But there are many questions that randomized experiments have not been able to address. What component of a social policy made it successful? Did a given program fail because it was designed poorly or because it suffered from low participation rates?

In Learning More from Social Experiments, editor Howard Bloom and a team of innovative social researchers profile advances in the scientific underpinnings of social policy research that can improve randomized experimental studies. Using evaluations of actual social programs as examples, Learning More from Social Experiments makes the case that many of the limitations of random assignment studies can be overcome by combining data from these studies with statistical methods from other research designs.

Carolyn Hill, James Riccio, and Bloom profile a new statistical model that allows researchers to pool data from multiple randomized experiments in order to determine what characteristics of a program made it successful. Lisa Gennetian, Pamela Morris, Johannes Bos, and Bloom discuss how a statistical estimation procedure can be used with experimental data to single out the effects of a program’s intermediate outcomes (e.g., how closely patients in a drug study adhere to the prescribed dosage) on its ultimate outcomes (the health effects of the drug). Sometimes a social policy has its true effect on communities rather than individuals, as in neighborhood watch programs or public health initiatives.
In these cases, researchers must randomly assign treatment to groups or clusters of individuals, but this technique raises different issues than do experiments that randomly assign individuals. Bloom evaluates the properties of cluster randomization, its relevance to different kinds of social programs, and the complications that arise from its use. He pays particular attention to the way in which the movement of individuals into and out of clusters over time complicates the design, execution, and interpretation of a study. Learning More from Social Experiments represents a substantial leap forward in the analysis of social policies. By supplementing theory with applied research examples, this important new book makes the case for enhancing the scope and relevance of social research by combining randomized experiments with non-experimental statistical methods, and it serves as a useful guide for researchers who wish to do so.
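The core logic of individual random assignment described above can be illustrated with a short, self-contained Python sketch. The outcome data and the true program effect here are simulated purely for illustration; they do not come from any study discussed in the book.

```python
import random
import statistics

random.seed(0)

# Simulated outcome data for illustration only (not from any real study).
# Each participant has a baseline outcome; the program adds a true effect
# of 2.0 for those assigned to the program group.
TRUE_EFFECT = 2.0
participants = [random.gauss(50, 10) for _ in range(1000)]

# Random assignment: each participant is sorted by chance into the
# program group or the control group, so the groups differ only by luck.
program, control = [], []
for baseline in participants:
    if random.random() < 0.5:
        program.append(baseline + TRUE_EFFECT)  # subject to the program
    else:
        control.append(baseline)                # not subject to it

# Because assignment was random, the groups do not differ systematically,
# and the difference in group means estimates the program's effect.
impact = statistics.mean(program) - statistics.mean(control)
print(f"estimated impact: {impact:.2f} (true effect: {TRUE_EFFECT})")
```

With 1,000 participants the estimate lands close to the true effect; the remaining gap is chance variation, which shrinks as the sample grows.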
Content analysis. Theory and general principles. Steps of content analysis. General model. Self-concept. Theoretical and methodological framework. Conduct of the content analysis. Theoretical issues between so-called "objective" and "subjective" approaches in the study of the self-concept. Reliability of results obtained with the GPS method. Validity of results obtained with the GPS method.
Tools for the Crafts of Knowledge and Decision
Marginalism and Discontinuity is an account of the culture of models employed in the natural and social sciences, showing how such models are instruments for getting hold of the world, tools for the crafts of knowing and deciding. Like other tools, these models are interpretable cultural objects, objects that embody traditional themes of smoothness and discontinuity, exchange and incommensurability, parts and wholes.
Martin Krieger interprets the calculus and neoclassical economics, for example, as tools for adding up a smoothed world, a world of marginal changes identified by those tools. In contrast, other models suggest that economies might be sticky and ratchety or perverted and fetishistic. There are as well models that posit discontinuity or discreteness. In every city, for example, some location has been marked as distinctive and optimal; around this created differentiation, a city center and a city periphery eventually develop. Sometimes more than one model is applicable—the possibility of doom may be seen both as the consequence of a series of mundane events and as a transcendent moment. We might model big decisions or entrepreneurial endeavors as sums of several marginal decisions, or as sudden, marked transitions, changes of state like freezing or religious conversion.
Once we take models and theory as tools, we find that analogy is destiny. Our experiences make sense because of the analogies or tools used to interpret them, and our intellectual disciplines are justified and made meaningful through the employment of characteristic toolkits—a physicist's toolkit, for example, is equipped with a certain set of mathematical and rhetorical models.
Marginalism and Discontinuity offers a provocative and wide-ranging consideration of the technologies by which we attempt to apprehend the world. It will appeal to social and natural scientists, mathematicians and philosophers, and thoughtful educators, policymakers, and planners.
Mapping Risks and Resilience
The Measure of America, 2010-2011, is the definitive report on the overall well-being of all Americans. How are Americans doing—compared to one another and compared to the rest of the world? This important, easy-to-understand guide will provide all of the essential information on the current state of America.
This fully illustrated report, with over 130 color images, is based on the groundbreaking American Human Development Index, which provides a single measure of well-being for all Americans, disaggregated by state and congressional district, as well as by race, gender, and ethnicity. The Index rankings of the 50 states and 435 congressional districts reveal huge disparities in the health, education, and living standards of different groups. For example, overall, Connecticut ranked first among states on the 2008-2009 Index, and Mississippi ranked last, suggesting that there is a 30-year gap in human development between the two states. Further, among congressional districts, New York's 14th District, in Manhattan, ranked first, and California's 20th District, near Fresno, ranked last. The average resident of New York's 14th District earned over three times as much as the average resident of California's 20th District, lived over four years longer, and was ten times as likely to have a college degree.
The second in the American Human Development Report series, the 2010-2011 edition features a completely updated Index, new findings on the well-being of different racial and ethnic groups from state to state, and a closer look at disparities within major metro areas. It also shines a spotlight on threats to progress and opportunity for some Americans, as well as highlighting tested approaches to fostering resilience among different groups.
Using a revelatory framework for explaining the very nature of human progress, this report can be used not only as a way to measure America but also to build upon past policy successes, protect the progress made over the last half century from new risks, and create an infrastructure of opportunity that can serve a new generation of Americans. Beautifully illustrated with stunning four-color graphics that allow for a quick visual understanding of often complex but important issues, The Measure of America is essential reading for all Americans, especially for social scientists, policy makers, and pundits who want to understand where Americans stand today.
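An index like the one described above condenses several dimensions of well-being into a single score. The report's actual methodology is more involved; the following toy Python sketch, with invented goalposts and example values, only illustrates the general idea of a composite well-being index.

```python
# Toy sketch of a composite well-being index in the spirit of a human
# development index: each dimension is rescaled to a 0-10 score between
# chosen "goalposts," then the dimension scores are averaged. All
# goalposts and input values below are invented for illustration and are
# not the report's actual figures or methodology.

def dimension_score(value, low, high):
    """Rescale a raw indicator onto a 0-10 scale between goalposts."""
    return 10.0 * (value - low) / (high - low)

def composite_index(life_expectancy, education, income_log, goalposts):
    scores = [
        dimension_score(life_expectancy, *goalposts["health"]),
        dimension_score(education, *goalposts["education"]),
        dimension_score(income_log, *goalposts["income"]),
    ]
    return sum(scores) / len(scores)

goalposts = {
    "health": (66.0, 90.0),   # life expectancy at birth, years
    "education": (0.5, 2.0),  # combined attainment/enrollment score
    "income": (4.0, 5.0),     # log10 of median personal earnings
}

# Two hypothetical places with different well-being profiles.
a = composite_index(81.0, 1.6, 4.8, goalposts)
b = composite_index(74.0, 0.9, 4.3, goalposts)
print(f"place A: {a:.2f}  place B: {b:.2f}")
```

Comparing such scores across states or districts is what makes the large disparities described above visible in a single number.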
Social science research often yields conflicting results: Does rehabilitation of juvenile delinquents work? Is teenage pregnancy prevention effective? In an effort to improve the value of research for shaping social policy, social scientists are increasingly employing a powerful technique called meta-analysis. By systematically pulling together the findings on a particular research problem, meta-analysis allows researchers to synthesize the results of multiple studies and detect statistically significant patterns among them.
Meta-Analysis for Explanation brings exemplary illustrations of research synthesis together with expert discussion of the use of meta-analytic techniques. The emphasis throughout is on the explanatory applications of meta-analysis, a quality that makes this casebook distinct from other treatments of the methodology. The book features four detailed case studies by Betsy Jane Becker, Elizabeth C. Devine, Mark W. Lipsey, and William R. Shadish, Jr. These are offered as meta-analyses that seek both to answer the descriptive questions to which research synthesis is traditionally directed in the health and social sciences and to explore how a more systematic method of explanation might enhance the policy yield of research reviews.
To accompany these cases, a group of the field’s leading scholars has written several more general chapters that discuss the history of research synthesis, the use of meta-analysis and its value for scientific explanation, and the practical issues and challenges facing researchers who want to try this new technique. As a practical resource, Meta-Analysis for Explanation guides social scientists to greater levels of sophistication in their efforts to synthesize the results of social research.
"This is an important book...[it is] another step in the continuing exploration of the wider implications and powers of meta-analytic methods." —Contemporary Psychology
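The pooling at the heart of a basic fixed-effect meta-analysis can be sketched in a few lines of Python. The effect sizes and variances below are invented for illustration, and inverse-variance fixed-effect pooling is only one of several models used in research synthesis.

```python
import math

# Hypothetical effect sizes (e.g., standardized mean differences) and
# their sampling variances from five studies of the same intervention.
# These numbers are invented for illustration only.
effects = [0.30, 0.15, 0.45, 0.10, 0.25]
variances = [0.02, 0.05, 0.04, 0.03, 0.06]

# Fixed-effect inverse-variance pooling: each study is weighted by the
# precision (1/variance) of its estimate, so larger, more precise
# studies count for more in the synthesis.
weights = [1.0 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval for the pooled effect.
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled effect: {pooled:.3f}  95% CI: [{lo:.3f}, {hi:.3f}]")
```

The pooled estimate is more precise than any single study's, which is how a synthesis can detect a statistically significant pattern that individual studies miss.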
The Contribution of Item Response Theory
The development of standardized strategies, techniques, and measurement instruments for conducting surveys, forecasts, and comparisons is nothing new. But which measurement-modeling strategies can be useful to social science researchers? In this work, both a learning guide and a reference manual, the authors present the concepts and methods needed to understand various measurement models (classical test theory, generalizability theory, item response theory) and describe, for each, the conditions of application and the notions of error, true score, standard error, reliability estimation, and item analysis. Exercises with answer keys allow students and practitioners in psychology and education alike to assimilate these methods and better assess their relevance to the situations they encounter.
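The one-parameter (Rasch) logistic model is the simplest member of the item response theory family mentioned above, and it can be written in a few lines. The ability and difficulty values in this sketch are illustrative, not drawn from the book.

```python
import math

def rasch_probability(theta, b):
    """One-parameter (Rasch) item response model: the probability that a
    person with ability theta answers an item of difficulty b correctly
    is logistic in the difference (theta - b)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# An easy item (b = -1) and a hard item (b = +1) presented to a person
# of average ability (theta = 0). Values are illustrative.
easy = rasch_probability(0.0, -1.0)
hard = rasch_probability(0.0, 1.0)
print(f"easy item: {easy:.3f}, hard item: {hard:.3f}")
```

Unlike the classical true-score model, where item statistics depend on the sample tested, this formulation separates person ability from item difficulty, which is what makes item-level analysis and test comparison possible.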
Social Theory Between Bakhtin and Habermas
Greg M. Nielsen brings Mikhail Bakhtin’s ethics and aesthetics into a dialogue with social theory that responds to the sense of ambivalence and uncertainty at the core of modern societies. Nielsen situates a social theory between Bakhtin’s norms of answerability and Jürgen Habermas’s sociology, ethics, and discourse theory of democracy in a way that emphasizes the creative dimension in social action without reducing explanation to the emotional and volitional impulse of the individual or collective actor. Some of the classical sources that support this mediated position are traced to Alexander Vvedenskij’s and Georg Simmel’s critiques of Kant’s ethics, Hermann Cohen’s philosophy of fellowship, and Max Weber’s and George Herbert Mead’s theories of action. In shifting from Bakhtin’s theory of interpersonal relations to a dialogic theory of societal events, one that defends the bold claim that law and politics should not be completely separated from the specificity of ethical and cultural communities, Nielsen develops a study of citizenship and national identity.
Historical and Critical
"A richly erudite history of measurement and an account of its current state in the social sciences—fascinating, informative, provocative." —James S. Coleman, University of Chicago
"Wise and powerful." — American Journal of Sociology
"Personal and provocative—an excellent set of historical and critical ruminations from one of social measurement's greatest contributors." —Choice
In the past twenty years, the number of educational tests with high-stakes consequences—such as promotion to the next grade level or graduating from high school—has increased. At the same time, the difficulty of the tests has also increased. In Texas, a Latina state legislator introduced and lobbied for a bill that would take such factors as teacher recommendations, portfolios of student work, and grades into account for the students—usually students of color—who failed such tests. The bill was defeated.
Using several types of ethnographic study (personal interviews, observations of the Legislature in action, news broadcasts, public documents from the Legislature and Texas Education Agency), Amanda Walker Johnson observed the struggle for the bill’s passage. Through recounting this experience, Objectifying Measures explores the relationship between the cultural production of scientific knowledge (of statistics in particular) and the often intuitive resistance to objectification of those adversely affected by the power of policies underwritten as "scientific."