Abstract

American Sign Language uses the face to express vital components of grammar in addition to the more universal expressions of emotion. The study of ASL facial expressions has focused mostly on the perception and categorization of various expression types by signing and nonsigning subjects. Only a few studies of the production of ASL facial expressions exist, and those rely mainly on descriptions and comparisons of individual sentences. The purpose of this article is to present a novel and multilevel approach for the coding and quantification of ASL facial expressions. The technique combines ASL coding software with novel postcoding analyses that allow for graphic depictions and group comparisons of the different facial expression types. This system enables us to clearly delineate differences in the production of otherwise similar facial expression types.
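The abstract does not specify the postcoding analysis pipeline, but the kind of quantification and group comparison it describes can be illustrated with a minimal sketch. The sketch below is not the authors' implementation: it assumes a hypothetical CSV export of coded facial expression events (columns signer_id, group, expression_type, onset_ms, offset_ms) and simply compares mean event durations across signer groups for a given expression type.

```python
# Minimal sketch (not the authors' method): summarizing post-coded facial
# expression events and comparing them across groups. The CSV schema, file
# name, and expression-type labels are illustrative assumptions.
import csv
from collections import defaultdict
from statistics import mean


def load_events(path):
    """Read coded facial-expression events from a CSV export."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


def mean_duration_by_group(events, expression_type):
    """Mean event duration (ms) per signer group for one expression type."""
    durations = defaultdict(list)
    for row in events:
        if row["expression_type"] == expression_type:
            dur = float(row["offset_ms"]) - float(row["onset_ms"])
            durations[row["group"]].append(dur)
    return {group: mean(vals) for group, vals in durations.items()}


if __name__ == "__main__":
    events = load_events("coded_expressions.csv")  # hypothetical file name
    # Compare otherwise similar expression types (labels are assumptions).
    for expr in ("conditional", "wh-question"):
        print(expr, mean_duration_by_group(events, expr))
```

The same per-group summaries could feed a plotting library to produce the graphic depictions the abstract mentions; the choice of duration as the measured variable here is only one of many plausible coded dimensions.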