
CHAPTER 8

Evaluating Programs for Working Adults

By MARIA-LUZ D. SAMPER and STANLEY ROSEN

This chapter deals with evaluation as an educational tool that benefits institutions, specific educational programs, classroom activities, and individual teachers and students. We believe that labor education programs can use evaluation to improve and enrich their activities. Knauss has observed that many labor educators become involved in philosophical arguments about Skinnerian approaches, behavioral objectives, and the like.1 Approaches to evaluation range from purely behavioristic to humanistic, depending on the purposes of the evaluation and the philosophy of the evaluator.

The behavioristic approach, which is based on rigid evaluations of outcomes, has a limited use for labor education, since evaluations cannot be confined to the measurable outcomes of the programs or to the observable actions of the student; changes in attitudes are not easy to measure. The humanistic approach, therefore, better suits educational programs for adults, who have complex and varying motives for participating in programs. Adult learning cannot take place by manipulating behavior. The subject of study must be close to adult experiences, meaningful, and useful for the everyday tasks at work and at leisure; and information must be geared to adults’ self-concepts if it is to help them.

Why Evaluate Labor Education Programs?

Evaluation can force planners to clarify general goals as well as specific educational objectives, and to find out how well they succeeded in achieving them. Over time, it encourages the accumulation of information and insight useful to high-quality planning. Evaluation requires that planners and staff constantly justify the program’s priorities. Registrants, sponsors, and planners want certain subjects treated and certain skills learned. This critical and constructive feedback is important for using limited time effectively.

Evaluation helps establish and justify program effectiveness, convincing unions and individuals of a program’s value. Also, union and university staffs must be able to account for their use of time and resources. As Worthen and Sanders state, evaluation can help ensure that new programs are more than random adoptions of faddish innovations.2 Evaluation has helped to justify the need for education programs for union women, for example. The evaluation process can help detect and diagnose problems; it can also point out good practice, provide constructive criticism of poor practice, and, overall, improve the educational activity.

Evaluation can help teachers prepare, formulate objectives, develop exercises and student involvement, and effectively use handouts and materials. As Cronbach states, one of evaluation’s greatest services in the classroom is to identify elements of the course where revision is desirable.3

The evaluation process encourages communication among members of the program staff, as the process requires a great deal of interaction. Staff members find themselves talking about aspects of the program they do not usually discuss. This results in an overview of the program; a systematic and personal exchange of insights and experiences in a cooperative framework; and growth and development for the teacher and labor educator. Evaluation also encourages more open communication between staff and participants, which can show students that the staff values their comments and criticisms. Evaluation encourages constructive self-criticism, and helps participants learn to get the most out of an educational experience. Participants are encouraged to express needs openly, creating an environment of “freedom from fear, freedom from being ridiculed, freedom to experiment, to take risks and explore personal meaning.”4 Students force instructors to state their standards and expectations clearly. Because some participants may themselves go on to teaching, this interchange serves a valuable educational function.

Evaluation can also serve to gather a comprehensive description of the program. “On the surface this seems trivial. However, much evidence and experience suggest that describing any educational activity is a difficult task . . . and one notices that those people in a program frequently cannot see the total program in its entirety.”5 This is particularly true of inter-union or inter-university activities. Systematically collected information about students, materials, and program goals is pure gold after the energy and excitement of the conference are over, for the evaluation process records all the results of the program, unexpected as well as expected, tangible as well as intangible.

Types of Evaluation

All labor educators can become conversant with the purposes and operations of evaluation; the skills involved are not mysterious and can be learned from others who have carried out evaluations, through training, and through practice. Cooperation of labor education centers with their respective university evaluation centers, where these are available, may be helpful.

Informal

An informal evaluation involves casual and subjective observation of students’ comments and teachers’ reactions. Such opinions as “That was a good meeting,” or, “I didn’t like the program,” deserve further exploration. “Why did you consider that a good meeting?” “What would you recommend to make it even better?” “What aspects of the program did you not enjoy?” are some of the questions for further exploration. The informal evaluator should be able to observe, to listen, to obtain feedback from participants, and to interpret that feedback. She should have the trust of group members.

Informal evaluation is often underestimated by educators. Forest declares that “what is critical in effectively using informal evaluations, is to recognize their existence, their varied nature, their control and influence over future decisions, and therefore their importance.”6

Formal

Stake suggests that educators must use more systematic methods of evaluation that describe activities and judge what happened during the program.7 Check lists, tests, visits by colleagues, follow-up surveys, questionnaires, rating scales, biographical data, and anecdotal records are some widely used forms.

Summative

The summative evaluation, which takes place at the end of a program, is used to estimate how the final results compare with the stated objectives. For example, any leadership school aims to increase participants’ awareness of their potential as leaders, and of additional skills and training they might need. The summative evaluation includes questions that will show whether these have been identified. This can be done through whatever means evaluator and participants find comfortable. An example is the “Staff Evaluation Form,” below, in which reminders of the objectives of school courses and workshops are provided to assist staff in thinking through the extent to which these were accomplished.

Formative

The formative evaluation requires continuous feedback to program developers and/or instructors. It allows for changes in activities, development of new materials, even modification of objectives. This provides immediate feedback to program decision makers. However, it requires self-confidence and experience on the part of the evaluator, the staff, and the participants, as well as a non-threatening environment where trust and freedom are present. This may not be possible in new programs. Formative evaluation was used with outstanding success, however, at the Northeastern Summer Schools for Union Women, where the evaluator worked in an environment of openness and trust.

Evaluation at the Summer Schools for Union Women

Informal

In 1976, at the first week-long Northeastern Summer School for Union Women, an informal evaluation was made. The evaluator, who had not been part of the school’s planning committee, received a description of the purposes of the school. She visited classes and activities, observing teacher-student interactions. All comments were important to the evaluator. Over lunch or dinner, or even at break time between classes, teachers’ and students’ comments were sought. Physical facilities (food, lodging, recreation facilities) were observed, as well as teaching methods, schedule development, class and workshop content, and student participation. The evaluator’s main concern was to record events non-judgmentally. Those events assumed more significance later on, when analyzed within the context of the whole school. At the end of the week the purposes of the school were analyzed one by one, and compared with the evaluator’s observations, in a report submitted to the school planners.

Formal

The second Northeastern Summer School’s planning committee began its work with a careful look at the previous year’s informal evaluation. The purposes of the school were discussed; they were found still valid, and no revisions were made. Planning of specific programs was aided by the evaluation report, which emerged as a helpful tool for the staff. As a result, new evaluation elements were added to help obtain in-depth information at the second year’s school. A new registration form was designed to obtain specific demographic data about participants: type of work; age; education; favorite activities; family, community, and union responsibilities; and expectations of the school. This information was grouped, analyzed, and shared with the teachers at the beginning of the school. It provided teachers with a clearer picture of who the students were, and what their interests and needs were.
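The chapter leaves the mechanics of this grouping informal. Purely as an illustration, here is a minimal sketch in Python, with invented field names and sample records, of how registration-form answers might be tallied into the kind of participant profile shared with teachers.

```python
from collections import Counter

# Hypothetical registration records; every field name and value is
# invented for illustration. The actual form asked about type of work,
# age, education, favorite activities, responsibilities, and
# expectations of the school.
registrants = [
    {"union": "AFSCME", "age": "25-34", "education": "high school",
     "holds_office": True, "expectation": "grievance handling"},
    {"union": "CWA", "age": "35-44", "education": "some college",
     "holds_office": False, "expectation": "public speaking"},
    {"union": "AFSCME", "age": "35-44", "education": "high school",
     "holds_office": True, "expectation": "labor law"},
]

def tally(records, field):
    """Count how many registrants gave each answer to one question."""
    return Counter(r[field] for r in records)

# Group the answers into a short profile for teachers to review
# before classes begin.
for field in ("union", "age", "education", "expectation"):
    print(f"{field}: {dict(tally(registrants, field))}")

officers = sum(r["holds_office"] for r in registrants)
print(f"{officers} of {len(registrants)} participants hold union office")
```

Even a tally this simple gives teachers the clearer picture the authors describe: which unions are represented, what education participants bring, and what they hope to learn.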

Two new questionnaires, one each for students and teachers, were designed to yield a more thorough evaluation. They permitted feelings, criticisms, and suggestions to be expressed more fully. So that the questionnaires would reflect how the program was affected by on-site human and physical resources, they were revised at the school. They were administered to all students and teachers on the last day of the school. In addition to this formal evaluation, an ongoing informal evaluation was also conducted.

At the end of the school, all questionnaires were coded and the information analyzed against the initial intent and purposes of the school; the human and physical components of the school (staff, students, materials, physical facilities); the teaching strategies (workshops, panels, films, rap sessions); the follow-up suggestions; and the candid comments of teachers and students for improving future schools. Once the data were analyzed, the information derived from the formal instruments used in the evaluation (registration forms and evaluation questionnaires) was combined with the evaluator’s informal observations. These were incorporated in a report that included a profile of the participants, the program components as evaluated by students and teachers, and general conclusions on the school’s planning and development, on the program’s goals and objectives, on the characteristics of staff and students, and on the evaluation of the staff.

Formative

Another dimension was added to the summer school evaluation beginning with the third year. In addition to using the registration and evaluation forms and conducting the informal evaluation, there was feedback to staff and planners during the school. Staff meetings allowed the evaluator to bring up results of the evaluation of the previous school and to point out how the current school related to those results. That motivated several changes in activities and reminded teachers of previous experiences and evaluations. On the individual level, several teachers discussed with the evaluator particular problems in a class or activity. Where teacher and evaluator agree beforehand on aspects to be observed and evaluated, this can be of particular value. All conditions and expectations should be spelled out before formative evaluation takes place.

How to Conduct an Evaluation

Evaluation is a common-sense activity that labor education practitioners can handle. Experience and practice, augmented by reading and training in evaluation techniques, can make almost anyone an accomplished evaluator.

The starting assumption is that labor education activities benefit from a conscientious, serious evaluation effort. The concerned parties include the evaluator, students, decision makers, and related “others”—community, university, or union personnel. The evaluator’s responsibilities relate to all of them and, in turn, their cooperation is important to the success of the evaluation effort. The following are suggested steps in conducting an evaluation:

1. Get acquainted with the total program, the issues involved, and complete details of the program’s components and what is expected from it. Examine the stated goals or objectives. If there is a priority list attached to the goals of the program, the evaluator should know what this is. Meet with program planners and staff.

2. Choose the type of evaluation that best fits the program, according to the goals and activities included, the planners’ expectations, future developments expected from the program, and the receptivity of planners toward evaluation. The evaluator’s time and facilities are also critical considerations. An informal evaluation may require the evaluator’s time during the program but no expenditure for duplicating formal instruments, while a formal, summative evaluation means administering a questionnaire on the last day of class and coding it afterwards. A formative evaluation requires an open and congenial environment, duplication of formal instruments, and time to provide continuous feedback.

3. Collect pertinent data. The type of evaluation chosen determines the kind of information collected. The evaluator should decide whether a final evaluation is sufficient or whether a series of evaluative activities should be developed throughout the program. Observation is crucial to the informal evaluation, since everything that happens matters to the evaluator. Keep detailed and clear notes on observations of activities as well as on students and teachers. Data collection for a more formal evaluation includes questionnaires, interviews, historical inquiry, tests, check lists, and rating scales. Plan for time to administer the instruments.

4. Code the results. This time-consuming job is done after the school has ended. If the evaluator does not have time, it may be better to conduct a different type of evaluation, rather than design and collect questionnaires that will not be used. (A sketch of this coding step appears after this list.)

5. Interpret the results and make recommendations. A coherent report should be presented to the planners, including all necessary information for use in planning future programs. The results of the evaluation should be presented to the appropriate users in time to affect decision making. The report should be simple and clear, and should summarize the evaluation as well as present details.
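To make steps 4 and 5 concrete, here is a minimal sketch in Python of coding circled ratings from the student questionnaire and summarizing them for the planners’ report. The component names, ratings, and revision threshold are all invented for illustration; nothing here comes from the chapter itself.

```python
from statistics import mean

# Hypothetical coded responses: one dict per returned questionnaire,
# mapping a program component to the circled rating (5 = most
# favorable, 1 = least favorable, as on the student form).
coded = [
    {"issues": 4, "workshops": 5, "rap sessions": 3, "housing": 2},
    {"issues": 4, "workshops": 4, "rap sessions": 5, "housing": 3},
    {"issues": 3, "workshops": 5, "rap sessions": 4, "housing": 2},
]

# Step 4: code (tabulate) the results for each program component.
components = coded[0].keys()
averages = {c: mean(q[c] for q in coded) for c in components}

# Step 5: summarize for the report, flagging components whose low
# mean rating suggests revision may be desirable.
for component, avg in sorted(averages.items(), key=lambda kv: kv[1]):
    flag = "  <- consider revising" if avg < 3.0 else ""
    print(f"{component:12} mean rating {avg:.2f}{flag}")
```

A summary of this kind serves exactly the purpose Cronbach assigns to classroom evaluation: identifying the elements of the course where revision is desirable.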

Evaluation will not provide all answers to all people. It may raise new issues and spark discussion. It may be used by planners ready for change who just need the “hard evidence” that the evaluation can provide.

The Inside vs. the Outside Evaluator

Should the evaluator be part of the program or institution, or should she not have ties to either? In part, the type of evaluation used determines who the evaluator can be. In course evaluation, a staff member can be the evaluator; this allows for continuity of feedback. However, when an entire program is being evaluated, the question of insider vs. outsider should be considered. According to Weiss, neither has a monopoly on the advantages.8 Factors to consider include: the program administrators’ confidence in the competence of the evaluator; the evaluator’s objectivity; the evaluator’s knowledge of the program; the potential utilization of results (for example, the interpretation of results may be done within a policy context so that they influence policy makers in future decisions); and, finally, the evaluation’s autonomy and freedom from co-option. Some possible negative results can be avoided: evaluation should neither collect trivia nor intrude too vigorously on the program. There is always the chance, too, that an evaluation effort will be disregarded by all parties and result in no program change or improvement.

Evaluation is an imperfect art and should be accepted as such. However, the evaluator’s task can be exciting, challenging, and educational. As the number of labor education evaluations increases, so do opportunities for self-development in the field.

STAFF EVALUATION FORM (SAMPLE)

1. What were the major general accomplishments of the school for the students? Rank them from 1 to 6 according to the level of accomplishment (1 = the lowest, 6 = the highest):

to bring women together
to exchange experiences
to get skill reinforcement
to learn about political and legislative issues
to discuss how to take what they have learned back to their own local unions
to discuss how to help involve more women in union activity
other

2. Write your suggestions on recruiting.

3. What suggestions do you have to encourage union leaders to support training programs for women?

4. Please list the workshop, issues, strategies, and rap sessions you led.

5. What pluses and minuses did you see in your group? (Issues, strategies, workshop, rap session.) What would you change?

6. Do you think that reading materials and handouts were effectively used?

7. These were the topics chosen for issues and strategies: Women and Work: Change and Stability; Employment and Unemployment; Dealing with Discrimination in the Union; Swing to the Right in Today’s Politics.

a. Were they adequate? Why or why not?

b. What topic do you think was the most useful?

c. What topic do you think was the least useful?

d. If you think of another appropriate topic that we may include for next year’s school please write it down.

8. What do you think of the role of third-year students as teaching assistants?

9. What kind of follow-up do you suggest?

10. Please comment on the evaluation efforts at the school, and make suggestions for improvement in the future.

STUDENT EVALUATION FORM

To help us plan future programs, we would very much appreciate your comments, criticisms, and suggestions. Many thanks.

1. Program components

A. Issues

For each item, circle the number that best corresponds to your opinion.

The most favorable response is 5, and the least favorable response is 1.

B. Workshops

I attended the workshop on:

I chose this workshop because:

How will you use what you learned in the workshop back home?

Would you recommend this workshop to a student who is coming to the school next year? Why? Why not?

C. If you attended a rap session, was it useful to you? Why or why not?

D. The special program I enjoyed the most was: wine and cheese orientation meeting; labor history night; picnic; film (name); other. Why?

E. What I liked best about the school program was

F. What I liked least about the school program was

G. Other kinds of courses and workshops I would have liked

2. Follow-up (interest in further study)

A. Would you be interested in attending an advanced school of this kind next year?

B. After this conference is over, what topics related to the conference would you like to study further?

3. Facilities and services

Were the following physical arrangements adequate?

A. Housing

B. Food services

C. Comments

4. List below changes or suggestions you recommend for future schools:

A. To improve courses

B. Other ideas (to improve program, increase its value to students)

5. Some unions were not represented at this school. Please list names, unions, and addresses (if known) of persons to contact about sending delegates another year.

6. Check the phrase that best describes this year’s school: extremely useful; much use; some use; little use; no use.

7. Comments


EVALUATION RESOURCES

The resource centers listed can provide interested educators with further information on evaluation. Some deal with educational evaluation in general, while others are specific to labor education programs.

Bureau of Educational Research and Service, Box U-4, University of Connecticut, Storrs, Conn. 06268.

Center for Instructional Research and Curriculum Evaluation, University of Illinois, Urbana, Ill. 61801.

College of Education, Western Michigan University, Kalamazoo, Mich. 49008.

Evaluation Institute, Campion Hall, University of San Francisco, San Francisco, Cal. 94117.

ICES Item Catalog, Instructors and Course Evaluation System. Office of Instructional Resources, Measurement and Research Division, University of Illinois at Urbana. Newsletter #1, 1977. Contains sample questions for use in teacher evaluation forms.

Measurement Services Center, University of Minnesota, Minneapolis, Minn. 55455.

Chicago Labor Education Program, 1315 SEO Building, P.O. Box 4348, Chicago, Ill. 60860.

Labor Education Center, U-13, University of Connecticut, Storrs, Conn. 06268.

NOTES

1. Keith Knauss, “Evaluating Non-Credit Labor Education Programs: Practices and Problems,” Labor Education Viewpoints (n.d.), Local 189.

2. Blaine R. Worthen and James R. Sanders, eds., Educational Evaluation: Theory and Practice (Worthington, O.: Charles A. Jones, 1973).

3. Lee J. Cronbach, “Course Improvement through Evaluation,” Teachers College Record 64 (1963): 672–83.

4. Arthur L. Costa, “Affective Education: The State of the Art,” Educational Leadership 34 (Jan. 1977), 261.

5. Arden D. Grotelueschen, Dennis Gooler, and Alan B. Knox, Evaluation in Adult Basic Education: How and Why? (Danville, Ill.: Interstate, 1976), p. 9.

6. Laverne B. Forest, Adult Education Forum 26 (Jan. 1976): 174.

7. Robert E. Stake, “The Countenance of Educational Evaluation,” Teachers College Record 68 (1967): 523–40.

8. Carol H. Weiss, Evaluation Research (Englewood Cliffs, N.J.: Prentice-Hall, 1972).
