Phonological development

Phonological development refers to how children learn to organize sounds into meaning or language (phonology) during their stages of growth. Sound is at the beginning of language learning. Children have to learn to distinguish different sounds and to segment the speech stream they are exposed to into units – eventually meaningful units – in order to acquire words and sentences. One reason speech segmentation is challenging is that, unlike written text, spoken language contains no spaces between words. So if an infant hears the sound sequence “thisisacup”, it has to learn to segment this stream into the distinct units “this”, “is”, “a”, and “cup” (a toy illustration of this segmentation problem is sketched below). Once the child is able to extract the sequence “cup” from the speech stream, it has to assign a meaning to this word. Furthermore, the child has to be able to distinguish the sequence “cup” from “cub” in order to learn that these are two distinct words with different meanings. Finally, the child has to learn to produce these words.

The acquisition of native-language phonology begins in the womb and is not completely adult-like until the teenage years. Perceptual abilities (such as being able to segment “thisisacup” into four individual word units) usually precede production and thus aid the development of speech production. Children do not utter their first words until they are about 1 year old, but already at birth they can tell some utterances in their native language from utterances in languages with different prosodic features.

Infants as young as 1 month perceive some speech sounds as speech categories (they display categorical perception of speech). For example, the sounds /b/ and /p/ differ in the amount of aspiration that follows the release of the lips. Using a computer-generated continuum between /b/ and /p/, Eimas et al. (1971) showed that English-learning infants paid more attention to differences near the boundary between /b/ and /p/ than to equal-sized differences within the /b/ category or within the /p/ category. Their measure, monitoring infant sucking rate, became a major experimental method for studying infant speech perception.

Infants up to 10–12 months can distinguish not only native sounds but also nonnative contrasts. Older children and adults lose the ability to discriminate some nonnative contrasts. Thus, it seems that exposure to one’s native language causes the perceptual system to be restructured, and this restructuring reflects the system of contrasts in the native language.

At 4 months, infants still prefer infant-directed speech to adult-directed speech. Whereas 1-month-olds exhibit this preference only if the full speech signal is played to them, 4-month-olds prefer infant-directed speech even when just the pitch contours are played. This shows that between 1 and 4 months of age, infants improve at tracking the suprasegmental information in the speech directed at them; by 4 months they have learned which features to attend to at the suprasegmental level. Babies also prefer to hear their own name over similar-sounding words. It is possible that they have associated the meaning “me” with their name, although it is also possible that they simply recognize the form because of its high frequency.
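The segmentation problem mentioned above can be made concrete with a toy sketch. The example below assumes the listener already knows a small vocabulary and simply searches for a way to split the unspaced stream into known words; this is an illustrative assumption, since a real infant must discover the words and the segmentation at the same time.

```python
# A minimal sketch of the segmentation problem, assuming a known vocabulary.
# (Hypothetical illustration: infants do not start with a word list.)

def segment(stream: str, vocabulary: set[str]) -> list[str] | None:
    """Recursively split an unspaced sound stream into known words."""
    if not stream:
        return []
    for end in range(1, len(stream) + 1):
        candidate = stream[:end]
        if candidate in vocabulary:
            rest = segment(stream[end:], vocabulary)
            if rest is not None:
                return [candidate] + rest
    return None  # no segmentation found with this vocabulary

vocabulary = {"this", "is", "a", "cup"}
print(segment("thisisacup", vocabulary))  # ['this', 'is', 'a', 'cup']
```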
With increasing exposure to the ambient language, infants learn not to pay attention to sound distinctions that are not meaningful in their native language, e.g., two acoustically different versions of the vowel /i/ that differ simply because of inter-speaker variability. By 6 months of age, infants have learned to treat acoustically different sounds that are realizations of the same sound category, such as an /i/ spoken by a male versus a female speaker, as members of the same phonological category /i/. Infants are also able to extract meaningful distinctions in the language they are exposed to from the statistical properties of that language. For example, when English-learning infants are exposed to a continuum from prevoiced /d/ to voiceless unaspirated /t/ (similar to the /d/–/t/ distinction in Spanish) in which most tokens fall near the endpoints of the continuum, i.e., show extreme prevoicing or long voice onset times (a bimodal distribution), they are better at discriminating these sounds than infants exposed primarily to tokens from the center of the continuum (a unimodal distribution).
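The distributional-learning idea can likewise be illustrated with a small simulation. The sketch below is not the infant experiment itself: the voice-onset-time values and the simple two-category clustering are assumptions chosen for illustration. It shows that a learner exposed to a bimodal distribution of tokens ends up with two well-separated category centers, whereas unimodal exposure yields centers that lie close together.

```python
# Illustrative sketch of distributional learning over a /d/-/t/ continuum.
# VOT values (in ms) and the 2-means clustering are hypothetical, not the
# procedure used in the infant studies.
import numpy as np

rng = np.random.default_rng(0)

# Tokens drawn along a prevoiced /d/ -- voiceless /t/ continuum (VOT in ms).
bimodal = np.concatenate([rng.normal(-60, 10, 200),   # prevoiced endpoint
                          rng.normal(20, 10, 200)])   # long-VOT endpoint
unimodal = rng.normal(-20, 15, 400)                   # mostly mid-continuum tokens

def two_means(tokens, iterations=50):
    """Crude 1-D 2-means clustering: learn two category centers from exposure."""
    centers = np.array([tokens.min(), tokens.max()], dtype=float)
    for _ in range(iterations):
        assign = np.abs(tokens[:, None] - centers[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(assign == k):
                centers[k] = tokens[assign == k].mean()
    return centers

for name, tokens in [("bimodal exposure", bimodal), ("unimodal exposure", unimodal)]:
    c = two_means(tokens)
    print(f"{name}: centers at {c.round(1)} ms, separation {abs(c[1] - c[0]):.1f} ms")
```

Under bimodal exposure the two learned centers sit near the continuum endpoints (separation on the order of 80 ms), while under unimodal exposure they end up close together, mirroring the better discrimination reported for the bimodal group.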

[ "Phonology" ]
Parent Topic
Child Topic
    No Parent Topic