Appendix A
Methodology

I conducted this study as a performance analysis, meaning my aim was not only to describe how organizations implement their programs but also to relate those practices to measures of performance (Mead 2003). First, I undertook exploratory fieldwork to form hypotheses about what affects performance, conducting full-day site visits at 20 of the 26 sites between November 2004 and March 2005. Sites were chosen with input from New York City's HRA to include a range of organizational sizes and types, as well as high, average, and low performers in terms of placement and retention. I conducted the site visits "blind" to performance, meaning that I did not know the programs' performance levels, to allow for the formation of unbiased hypotheses.1 Organizational characteristics were identified that represented the clearest differences among programs. Interviews were semistructured (Weiss 1994), which allowed me to discuss new or unexpected determinants of program performance. An interview protocol was created based on topics investigated by past studies of welfare-to-work programs (Behn 1991; Bardach 1993; Bloom and Michalopoulos 2001; Bloom, Hill, and Riccio 2003). It covered issues relating to organizational strategy, current operations, and the use of specific practices.

CHARACTERIZING SITE-LEVEL PRACTICES

Performance analyses require quantitatively characterizing the organizational practices at each program. For some variables, administrative data exist, including deassignment rates, referrals to sanctions, and program size. Programs' use of training is measured by the percentage of participants at each program who receive training vouchers. To measure the degree to which programs use a quick-placement approach versus a case-management approach, average placement speed is used, measured as the number of days it takes a program to place the median participant, counting placed participants only. It is assumed that programs with faster placement speeds have greater urgency about getting people employed, whereas those with slower speeds have a greater focus on case management and other job-readiness activities.2 These assumptions are supported by fieldwork. Site visits and interviews suggest a strong connection between programs' focus on quick placement versus case management and their placement speeds. An example is the program mentioned in Chapter 7 whose director had redesigned his program to emphasize quick placement, cutting the time participants spent in workshops and having people meet with job developers more quickly. This site had the third-fastest placement speed, with a median of 42 days to placement. As an example from the other end of the spectrum, staff at the program with the second-slowest placement speed (83 days to placement) had a strong commitment to helping people become job ready and to not "push" anyone into a job.3 Staff characterized themselves as not "numbers focused," and the director said that making good job matches was more important than achieving placement milestones. Programs with moderate placement speeds typically display a mixed approach, with case managers playing an important role but staff retaining a sense of urgency about placement.
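The placement-speed measure can be computed directly from participant-level records. The sketch below is illustrative only, not the study's actual procedure; the data layout and column names (program_id, enrolled_on, placed_on) are assumptions made for the example.

# A minimal sketch, assuming participant-level records with hypothetical
# columns program_id, enrolled_on, and placed_on (placed_on is missing for
# participants who were never placed). Not the author's actual code.
import pandas as pd

def placement_speed(participants: pd.DataFrame) -> pd.Series:
    """Median days from enrollment to placement for each program,
    computed over placed participants only (unplaced rows are dropped)."""
    placed = participants.dropna(subset=["placed_on"]).copy()
    placed["days_to_placement"] = (
        placed["placed_on"] - placed["enrolled_on"]
    ).dt.days
    return placed.groupby("program_id")["days_to_placement"].median()

# On this measure, a program placing its median placed participant in 42 days
# would rank among the fastest sites, and one at 83 days among the slowest.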
The conclusion that placement speed reflects programs' sense of urgency about placement and their level of emphasis on barrier removal is further supported by the fact that other potential influences on placement speed do not appear to play significant roles. First, the number of job developers relative to program size has no statistically significant correlation with placement speed or performance, making it unlikely that staff capacity, rather than staff actions, drives the connection between placement speeds and performance. Second, the use of random assignment within boroughs for most individuals means that unmeasured demographic characteristics are less likely to explain differences in placement speed. Between boroughs, average placement speeds differ only modestly. Finally, fieldwork did not suggest a connection between a longer time to placement and staff incompetence.

A legitimate concern with using placement speed as a variable is that it is partially endogenous. The fact that the variable is constructed from the outcomes of placed participants only, rather than all participants, helps limit endogeneity. Moreover, when this variable is removed from the regression models presented below, the results are similar.4

DEFINING THE SAMPLE

At the organizational level, the sample includes all 26 programs. The analysis focuses on programs rather than the 19 providers (some providers run more than one program). Although programs run by the same firms implemented roughly similar strategies, program leaders had discretion in choosing operational emphases. In fact, staff who had worked at more than one program...