Abstract

Deaf children build their speech concepts primarily on what they see. Some structures, such as the lips, jaws, and front of the tongue, are readily visible. In the speech of deaf children, the actions of these visible structures are more similar to those of children with normal hearing than are the actions of the less visible or nonvisible structures. The hearing child, however, can listen to the acoustic pattern of what he or she says and compare it with what other speakers say. He or she can then adjust his or her speaking activities to bring the sounds closer and closer to those of other talkers. By this means the child builds an accurate acoustics-to-articulation action schema and, in the process of learning to talk, masters articulatory motor control.

The deaf child has only half of the equation: he or she can feel his or her own movements and can see some of the actions as others talk. The sensory link between the two sets of information is missing, and the unseen motor behaviors of speech are inaccessible. The child thus has little opportunity to master the full range of speech actions. Careful documentation is lacking of what deaf children actually do in their efforts to utter specific sounds, and of their ability to produce nonspeech movements similar to those in speech.

The goal of this research has been to provide a means to study deaf speech and to add the other half of the sensory equation: a visual-vocal linkage that parallels the auditory-vocal channel of hearing children.

To meet this goal, a computer-based dynamic orometer was developed which (a) documents simultaneous lip and jaw positioning, tongue-palate contacts, tongue shape and positioning within the mouth, and changes in voice frequency and intensity as children talk; (b) assesses motor control of the lips, jaws, tongue, and larynx in nonspeech activities that span the ranges of movement found in speech; and (c) provides side-by-side video displays showing the actual movements of the articulators as a hearing and a deaf child speak. This provides an efficient and effective dual visual-vocal means for modeling and shaping the speech of deaf children. In this symposium, videotapes will be presented that show the system, demonstrate its use, and portray some of the exciting results attained.