In lieu of an abstract, here is a brief excerpt of the content:

Clinical and Mechanical Decision Processes During the Selection Interview: Impact on Reliability
Shimon DOLAN and Diane ROCHON

The selection interview has been a major concern for researchers as well as practicing personnel officers for a number of reasons. First, with the significant increase in EEO legislation and court precedents in the U.S., and similar trends in Canada via the federal and provincial Human Rights Acts, companies are required to justify the means they use to make staffing decisions. The validity and reliability of the selection instruments provide the core of such a demonstration (Dolan and Schuler, in press). Second, a closely related issue is the ever-increasing cost of making selection decisions based on inaccurate (unreliable) and non-valid procedures (Janz, in press A).

The selection interview is still considered to be the essence of the hiring decision (Dolan and Schuler, in press; Arvey and Campion, 1985). In fact, some researchers claim that interviews are second in frequency of use only to application blanks (McDaniel and Schmidt, 1985; Janz, in press B). Yet the continued popularity of the selection interview among practitioners is surprising in light of increasing evidence that it is neither reliable (i.e., not consistent in prediction) nor valid (see reviews by Mayfield, 1964; Wright, 1969; Landy, 1976; Cascio, 1981; and Janz, in press B).

The present study was concerned with the effects of the “type of interview” on raters’ internal consistency (reliability) in assessing job candidates. Consistency is viewed as necessary (but not sufficient) if interviewer ratings are to be reliable. As stated by Aiken (1979): “A measurement is considered to be reliable if it is free of errors or if it is consistent under conditions that might introduce error”. By “errors” we mean any factors that cause a person’s obtained score to deviate from his or her true score (a relationship formalized below). By “consistency” we mean the stability or dependability of a person’s scores over time. Since both “consistency” and “errors” can detract from the reliability of the interview score, we first provide a brief summary of some of the typical problems (biases) cited in published research.

Over the years, researchers have found the typical interview to entail a number of reliability problems, as interviewers’ decisions were influenced (biased) by the following factors:
• First impressions (the McGill studies; Webster, 1964);
• Biases such as the halo effect, the contrast effect, and others (see Dolan and Schuler, in press; Dolan and Roy, 1982; Schneider, 1976);
• Personal feelings about the kind of characteristics that lead to success on the job (Cascio, 1981, p. 191);
• Training of the interviewers (McMurry, 1947);
• Various personal biases of the interviewers regarding the sex, race, and other socio-demographic characteristics of the candidate (Dipboye, 1985).

Given these problems, until recently the literature showed very little support for the use of the interview in selection. However, research is beginning to indicate that interview reliability (and validity) might be significantly improved if the interview is conducted along a specific set of guidelines providing more structure to the process (Clowers and Fraser, 1986; Goodale, 1979; Vance et al., 1978; Anstey, 1977). All of these studies suggest that structured interviews are more valuable than less structured ones.
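Aiken’s definition quoted above can be expressed in standard classical test theory notation; this is a textbook formulation offered for clarity, not one drawn from the study itself. An obtained score X is the sum of a true score T and an error component E, and reliability is the share of observed-score variance attributable to true scores:

$$X = T + E, \qquad \rho_{XX'} \;=\; \frac{\sigma_T^{2}}{\sigma_X^{2}} \;=\; 1 - \frac{\sigma_E^{2}}{\sigma_X^{2}}$$

Under this formulation, the biases listed above add error variance and thereby lower reliability, which is why greater structure in the interview is expected to improve it.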
In an unstructured interview (hereafter, clinical), questions are not prepared in advance, and the results of the assessment therefore differ from one rater to another, and from one candidate to another even with the same rater. This type of interview requires minimal preparation from the interviewer and basically corresponds to a clinical approach. At the opposite pole is the structured interview, which requires a series of predetermined, specific questions to be asked of each candidate (in the same order). This corresponds to a mechanical approach. The semi-structured interview lies somewhere between these two extremes: while similar topics or questions are discussed with each candidate, candidates are free to add (or elaborate) as they wish. Several researchers advocate the use of the semi-structured interview because it preserves the richness of information gathering (clinical approach), yet it is carried out so as to uncover more systematic, job-related information (mechanical approach). Nonetheless, some trade-offs will always exist between richness, which might increase validity but decrease reliability, and mechanical gathering, which may reduce validity but increase reliability (Sawyer, 1966; Campbell et al., 1970). This study focuses on comparing two decision-making processes, clinical and mechanical, during a series of semi-structured interviews. It represents a preliminary attempt to close a gap by investigating the effects of two selection processes (clinical and mechanical) on interviewer reliability...
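As a purely illustrative aside, the sketch below shows one common way the internal consistency of interviewer ratings can be quantified, using Cronbach’s alpha over a small set of invented ratings. The data, the function, and the choice of index are assumptions made for the example; they are not taken from the study itself.

```python
# Minimal, hypothetical sketch: Cronbach's alpha as an index of internal
# consistency for one interviewer's ratings. Ratings are invented for
# illustration only (rows = candidates, columns = rating dimensions).
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Return Cronbach's alpha for a candidates-by-dimensions rating matrix."""
    k = ratings.shape[1]                          # number of rating dimensions
    item_vars = ratings.var(axis=0, ddof=1)       # variance of each dimension
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of candidates' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 5 candidates rated on 4 dimensions (1-7 scale).
ratings = np.array([
    [5, 6, 5, 6],
    [3, 3, 4, 3],
    [6, 7, 6, 6],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])
print(f"alpha = {cronbach_alpha(ratings):.2f}")   # higher alpha -> more consistent ratings
```

In such a scheme, more structured (mechanical) interviews would be expected to yield higher consistency indices than clinical ones, which is the kind of comparison the study sets out to make.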
