Critical Review: Speech Motor Learning in Profoundly Deaf Adults
It is a well-established fact that speech parameters are regulated acoustically and governed by auditory feedback. Sound waves strike the tympanic membrane, producing vibrations that are conducted through the middle ear to the cochlea (inner ear) and transmitted by the auditory nerve to the brain, which in turn regulates the speech-related motor nerves. More recently, an alternative and complementary hypothesis has emerged and been validated, suggesting that proprioceptive somatosensory inputs, related to the speech movements of the jaw, tongue, and lips, play a major role in speech production. This system informs the brain of the degree of jaw opening and of positional changes in the lips and tongue. It has been observed that hearing-impaired people can exhibit surprisingly good perception and capacity for intelligible speech, at times rivaling persons with full hearing. Speech-related activity in the absence of acoustic feedback from an impaired auditory system has led scientists to believe that an alternative perceptual system continues to support speech production in hearing-impaired individuals. The somatosensory basis of speech production was first researched by Tremblay, Shiller and Ostry, who applied mechanical perturbations to the jaw using a specialized robotic system (866-869). The motion path of the jaw was so finely tuned to specific word utterances that it stimulated the speech-associated somatosensory system. In the absence of audible vocalized speech, a complex mechanical load altered the jaw motion as if the consonants and vowels of defined words were being uttered. The somatosensory feedback immediately guided the jaw action in coherence with the utterance of the words for which the jaw movements were designed. In this study, speech quality was compared among subjects who were given mechanical and acoustic inputs simultaneously (controls) and subjects who were given only the silent, voiceless mechanical input.
The subjects were trained to repeatedly utter words that began with the consonant "s" followed by a vowel, like "saw". It was found that, without any audio input, the mechanical perturbation alone was sufficient to guide speech, and the subjects produced the words correctly. In the paper under review, Nasir and Ostry extended this work to subjects with profound post-lingual hearing loss in both ears (1217-1222). The premise was that permanent auditory loss may engage the alternate somatosensory system in these subjects, so that they could be trained to speak using a mechanical device that controls jaw motion.
Five persons aged 55-70 years with post-lingual deafness (four with cochlear implants and one with a hearing aid) and six normally hearing subjects (controls) made 12 attempts at uttering a single word in which the consonant "s" was followed by a vowel or a diphthong. Utterance of "s" involves closure of the jaw, which is followed by a large jaw movement for the vowel. These articulatory changes were imposed using a robotic device. Jaw movements were recorded in three dimensions, and the jaw trajectories were plotted against simulated jaw motions for that utterance. Any correction due to the subject's own effort produced a deviation of the jaw motion from the path applied by the device; this compensation was taken as evidence of training, with the subjects offsetting the mechanical load. The stimulus eventually enabled the subjects to speak, and voice frequencies were recorded with a speech spectrogram to determine whether each subject was correctly uttering the word for which the mechanical stimulus was given. Standard statistical analyses were performed to determine significance.
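The adaptation measure described above rests on the deviation between the jaw path a subject actually produces and the path the robot tries to impose. As a rough illustration of that idea only (the function name, the sample points, and the root-mean-square metric are all hypothetical; the authors' actual analysis is not reproduced here), such a deviation score might be computed as:

```python
import math

def trajectory_deviation(observed, imposed):
    """Root-mean-square distance between an observed jaw path and the
    path imposed by the robotic device, both as lists of (x, y, z) samples."""
    assert len(observed) == len(imposed)
    sq = [
        (ox - ix) ** 2 + (oy - iy) ** 2 + (oz - iz) ** 2
        for (ox, oy, oz), (ix, iy, iz) in zip(observed, imposed)
    ]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical sample points: a subject who passively follows the
# imposed path scores zero; any compensatory correction raises the score.
imposed = [(0.0, 0.0, 0.0), (1.0, 0.5, 0.0), (2.0, 0.0, 0.0)]
observed = [(0.0, 0.0, 0.0), (1.0, 0.7, 0.0), (2.0, 0.1, 0.0)]
print(round(trajectory_deviation(observed, imposed), 3))  # → 0.129
```

A growing deviation across the 12 attempts is the kind of compensation the study interprets as somatosensory-driven learning.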
Review of the work
Jaw movements for speech were quite similar in hearing-impaired subjects, whether the hearing aid or implant was on or off, and in controls. Mechanically driving the jaw through the target points for consonants and vowels indeed stimulated the somatosensory system, which in turn activated the speech motor nerves to move the jaw correctly for a particular utterance. The hearing-impaired subjects, in the absence of any hearing aid, quickly learned to move the jaw exactly like the control subjects when producing the words. Further, both groups of subjects learned to compensate for the mechanical interference, thereby "correcting" the somatosensory feedback for speech. These adaptations occurred regardless of whether the hearing aid or implant was on or off. In controls, the adaptation was driven by both somatosensory and auditory feedback, but for the deaf group without hearing aids there was no role for auditory input in speech learning. The acoustic signals of the test utterances, especially the change in sound frequency from consonant to vowel, were recorded and evaluated. The quantitative speech data proved to be better in the hearing-impaired persons than in the acoustically active controls; in other words, the former group was better adapted to the somatosensory input. The authors were thus able to dissociate the roles of auditory and somatosensory feedback in speech learning. There were, however, limitations in the study, especially with subject selection. Among the controls, one third responded to both somatosensory and auditory signals while the rest responded only to the former, so there was variation within a single group. If the sample size were increased to n = 15-20, it would have been possible to differentiate this effect. Further, the controls were not given a non-acoustic mechanical stimulus, so their somatosensory ability in the absence of auditory signals was never recorded.
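The sample-size criticism can be made concrete with a standard power calculation. A minimal sketch, using the normal approximation for a two-sided, two-sample comparison (the standardized effect sizes plugged in below are hypothetical, not values reported in the paper):

```python
import math
from statistics import NormalDist

def required_n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sample, two-sided test,
    via the normal approximation: n = 2 * ((z_alpha/2 + z_beta) / d) ** 2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value at alpha/2
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Even a large standardized effect (d = 1.0) needs about 16 subjects
# per group at 80% power; a smaller effect (d = 0.8) needs about 25.
print(required_n_per_group(1.0))  # → 16
print(required_n_per_group(0.8))  # → 25
```

By this yardstick, five or six subjects per group could only resolve very large effects, which is consistent with the suggestion of n = 15-20 above.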
It is known that a change in the vocal-tract area also stimulates the sensory surface of the tongue, and the mechanical device might have produced this change, which was attributed solely to jaw motion. It would also have been better to include more hearing-impaired subjects who depend on hearing aids (n = 5-6), as a group distinct from the implanted subjects (n = 5-6).
The latter group, in the absence of acoustic signals, would depend solely on somatosensory feedback, whereas the former would rely on both somatosensory feedback and the jaw-motion-related cutaneous response to the change of vocal-tract area. As such, the two groups of hearing-impaired subjects have different means of receiving sound signals, either through air conduction or through bone-mediated vibrations in the middle ear. When the sound devices are switched off, the former can still receive weak sound signals (low hearing) even if the somatosensory system is not engaged, but the implanted subjects cannot (no hearing). Another important consideration is that more age groups should have been studied rather than only older people, because speech adaptation capabilities can decline with age, as happens with other sensory systems. Assuming that somatosensory perception is more affected by age than auditory perception, the aforementioned dissociation of the two sensory systems would be more pronounced in a younger group. This may be why the results did not match the controls of earlier studies, in which younger individuals served as subjects. It would have been better to test two age groups, 35-45 and 55-65 years. The somatosensory response to other signals should also have been tested: in normally hearing subjects and in hearing-impaired subjects with the hearing aid switched on, the responsiveness of other somatosensory systems, such as limb movement and pain sensation, should have been evaluated. These data would control for the overall age-related change in somatosensory feedback during adaptation to speech learning, so that the relative strengths of the auditory and somatosensory responses with age could be worked out when both systems are used simultaneously for speech production. The knee-jerk response could serve as an independent test of the overall change in somatosensory response with age and with hearing disability. Another ideal group could have been congenitally or pre-lingually deaf persons.
Here, no hearing device is used, and body actions are the primary communication medium. In this group, even the somatosensory perception mechanism might have failed, or perhaps not, if some other sensory mechanism, such as visual sensation, governs the speech process. A novel training program could be devised using the same robotic system: mechanically guiding the jaw motion while coordinating it with body actions of a trainer that the subject understands. If the speech-associated somatosensory feedback system operates here, the subjects might be able to speak in response to the trainer's body actions.
Nasir, Sazzad M., and David J. Ostry. "Speech Motor Learning in Profoundly Deaf Adults." Nature Neuroscience 11 (2008): 1217-1222.
Tremblay, Stephanie, Douglas M. Shiller, and David J. Ostry. "Somatosensory Basis of Speech Production." Nature 423 (2003): 866-869.