
Cued speech

Cued Speech is a visual system of communication used with and among deaf or hard-of-hearing people. It is a phonemic-based system which makes traditionally spoken languages accessible by using a small number of handshapes, known as cues (representing consonants), in different locations near the mouth (representing vowels) to convey spoken language in a visual format. The National Cued Speech Association defines Cued Speech as '...a visual mode of communication that uses hand shapes and placements in combination with the mouth movements and speech to make the phonemes of spoken language look different from each other.' It adds information about the phonology of the word that is not visible on the lips, which allows people with hearing or language difficulties to visually access the fundamental properties of language. It is now used with people with a variety of language, speech, communication, and learning needs. It is different from American Sign Language (ASL), which is a separate language from English. Cued Speech is considered a communication modality, but it can be used as a strategy to support auditory rehabilitation, speech articulation, and literacy development.

Cued Speech was invented in 1966 by R. Orin Cornett at Gallaudet College, Washington, D.C. After discovering that children with prelingual and profound hearing impairments typically have poor reading comprehension, he developed the system with the aim of improving their reading abilities through better comprehension of the phonemes of English. At the time, some argued that deaf children received these lower marks because they had to learn two different systems: American Sign Language (ASL) for person-to-person communication and English for reading and writing. Because many sounds look identical on the lips (such as /p/ and /b/), the hand cues introduce a visual contrast in place of the former acoustic contrast. Cued Speech may also help people who hear incomplete or distorted sound; according to the National Cued Speech Association at cuedspeech.org, 'cochlear implants and Cued Speech are powerful partners'. Because Cued Speech is based on making sounds visible to people with hearing loss, it is not limited to use in English-speaking nations.
Because of the demand for use in other languages and countries, by 1994 Cornett had adapted cueing to 25 other languages and dialects. Originally designed to represent American English, the system was adapted to French in 1977. As of 2005, Cued Speech has been adapted to approximately 60 languages and dialects, including six dialects of English; cuedspeech.org lists 64 different dialects to which it has been adapted. For tonal languages such as Thai, the tone is indicated by the inclination and movement of the hand. Cued Speech is adapted to a new language by surveying that language's phoneme inventory and identifying which phonemes look similar when pronounced and therefore need a cue to differentiate them.

Though to a hearing person Cued Speech may look similar to signing, it is not a sign language, nor is it a manually coded system for a spoken language. Rather, Cued Speech is a manual modality of communication for representing any language at the phonological level. For English, Cued Speech uses eight different hand shapes and four different positions around the mouth. A manual cue consists of two components: hand shape and hand position relative to the face. Hand shapes distinguish consonants and hand positions distinguish vowels. A hand shape and a hand position (a 'cue'), together with the accompanying mouth shape, make up a CV unit, a basic syllable.

Cued Speech is based on the hypothesis that if all the sounds in the spoken language looked clearly different from each other on the lips of the speaker, deaf and hard-of-hearing people would learn a language in much the same way as a hearing person, but through vision rather than audition.
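The structure described above, a handshape selected by the consonant combined with a hand position selected by the vowel, one cue per CV syllable, can be illustrated with a short Python sketch. The handshape numbers, position names, and phoneme groupings below are invented for the example and do not reproduce the actual Cued Speech chart; the point is only that lip-identical consonants receive different handshapes, and vowels are separated by position.

```python
# Illustrative sketch only: the handshape/position assignments below are
# made up for demonstration and are NOT the real Cued Speech chart.

# Hypothetical groupings: consonants that look alike on the lips are split
# across different handshapes so each one remains visually distinguishable.
HANDSHAPE = {"p": 1, "b": 4, "m": 5,   # bilabials: identical on the lips
             "t": 5, "d": 1, "n": 4}   # alveolars: also look alike
POSITION = {"i": "mouth", "a": "chin", "u": "throat", "e": "side"}

def cue_syllables(syllables):
    """Map CV syllables, e.g. ["ba", "pi"], to (handshape, position) cues."""
    cues = []
    for syl in syllables:
        consonant, vowel = syl[0], syl[1:]
        cues.append((HANDSHAPE[consonant], POSITION[vowel]))
    return cues

# "ba" and "pa" look identical on the lips, but their cues differ by handshape:
print(cue_syllables(["ba", "pa"]))  # [(4, 'chin'), (1, 'chin')]
# Same consonant, different vowels: same handshape, different positions:
print(cue_syllables(["bi", "bu"]))  # [(4, 'mouth'), (4, 'throat')]
```

In this toy model, as in the system it sketches, the hand alone is ambiguous (several consonants share a handshape) and the lips alone are ambiguous; only the combination of cue and mouth shape identifies the syllable.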

[ "Communication", "Social psychology", "Developmental psychology", "Cognitive psychology", "Linguistics", "Inhibition of return" ]
Parent Topic
Child Topic
    No Parent Topic