In lieu of an abstract, here is a brief excerpt of the content:

  • Measure of Development for Student Conduct Administration
  • Adam Ross Nelson (bio)

Student Conduct Administration (SCA) is one of many names for the processes and procedures through which colleges and universities manage student behavior. Other common names include codes of conduct, honor codes, judicial systems, and judicial services (Pavela, 2005; Stoner, 2008). Staff who administer SCA processes hold a dynamic set of responsibilities; among them, administrators believe, is providing students with a developmental experience.

Despite the accessibility of quasi-experimental design (QED) in the study of education (Schlotter, Schwerdt, & Woessmann, 2011), existing scholarship has yet to produce strong empirical evidence that SCA processes help students develop. To address this gap, I propose a new instrument suitable for QED study. Previous instruments were not designed to collect data from comparison groups, which has limited prior studies to descriptive analysis (King, 2012; Mullane, 1999; Stimpson & Janosik, 2011, 2015).

Previous instruments were unable to gather responses from comparison groups because they assumed all respondents had violated a policy, participated in an SCA process, or both: for example, the item, “My involvement in the disciplinary process will help me avoid further policy violations” (Mullane, 1999, p. 85). A comparison group, by definition, includes respondents who have not participated in an SCA process, so items built on that assumption would not apply. Likewise, while some students in a comparison group may have violated a policy, they may not have participated in an [End Page 1274] SCA process. Data from a comparison group are an essential component of QED.

METHOD

The search to identify the development that might occur as a result of SCA processes included a review of publications in two strands of literature. The first strand comprised empirical studies of development, such as (but not limited to) Howell (2005), Karp and Sacks (2014), King (2012), Mullane (1999), and Stimpson and Janosik (2011, 2015). The second strand comprised peer-reviewed, but not empirical, publications, such as (but not limited to) Boots (1987), Emmanuel and Miser (1987), Gehring (2001), and Lancaster (2012). This review revealed no consistently stated, measurable conception of development. At best, scholars agree that students should develop by changing in some manner and should refrain from future misbehavior. My aim in this study was not to devise a measure of every developmental outcome thought to be associated with SCA processes. To keep the project within manageable proportions, three constructs served as guides: first, how students evaluate rules; second, how students evaluate how to behave; and third, how students think about the risks associated with alcohol consumption. Given the high rates of high-risk alcohol consumption among college students, measuring at least one aspect of alcohol-related behavior is important.

ITEM CONSTRUCTION & EXPERT PANEL REVIEW

Guided by the literature, a panel of experts (research scholars and current or former SCA professionals) and I drafted, reviewed, revised, and refined a total of 106 items, arriving at a final set of 38. Item responses ranged from 1 (does not describe me at all) to 5 (describes me greatly); this identity scale was intended to measure changes in self-identification. The 38 items were administered via e-mail in early Fall 2016 at 4 separate four-year, state-supported institutions: 3 in the American Midwest and 1 in the American Northeast, with undergraduate enrollments ranging from just under 5,000 to just under 30,000. Initially, 17,176 first- and second-year students were invited to participate; there was no incentive to respond. A total of 1,341 students (representing 40 states as well as countries abroad) provided complete responses to all 38 items, for a response rate of 7.8%; response rates exceeded 8.0% at Institutions A, B, and D, while at Institution C the rate was just under 3.0%. Respondents were 61% female, 37% male,


Table 1.

Composite Score Summary With Cronbach’s Reliability Coefficient
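Table 1 reports a Cronbach's reliability coefficient for each composite score. As a brief sketch of how that coefficient is computed, the following calculates Cronbach's alpha for a set of Likert-type items; the response data shown are hypothetical, not the study's actual data.

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for internal consistency.

    scores: list of respondents, each a list of item responses (e.g., 1-5).
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(scores[0])  # number of items
    def var(xs):        # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([resp[i] for resp in scores]) for i in range(k)]
    total_var = var([sum(resp) for resp in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical responses: five students answering four 5-point items.
responses = [
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [1, 2, 1, 2],
    [3, 3, 3, 4],
]
alpha = cronbach_alpha(responses)  # roughly 0.97 for these made-up data
```

Higher values indicate that the items in a composite vary together, which is the sense in which Table 1's coefficients summarize scale reliability.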

[End Page 1275]


Table 2.

Item Factor Loading Scores With Mean and Standard Deviation

[End Page 1276]

and...
