When You CAN See the Difference: The Phonetic Basis of Sonority in American Sign Language

2020 
Spoken and signed languages (SLs) deliver perceptual cues that exhibit varying degrees of perceptual validity during categorization: in spoken languages, listeners develop perceptual biases when integrating multiple acoustic dimensions during auditory categorization (Holt & Lotto, 2006). This leads us to expect differential perceptual validity for the dynamic gestural units HANDSHAPE, MOVEMENT, ORIENTATION, and LOCATION produced by the manual articulators in SLs. In this study, we use a closed-set sentence discrimination task developed by Bochner et al. (2011) to evaluate the perceptual salience of the gestural components of signs in American Sign Language (ASL) for naive signers and for deaf L2 learners of ASL proficient in another SL. Our goal is to gauge which of these features are likely to represent the phonetic basis of sonority in the sign modality and to relay phonemic contrasts perceptible even to first-time signers. Twenty-five deaf L2 ASL signers and 28 hearing English speakers with no experience in any SL participated in the study. Results reveal that phonemic contrasts based on HANDSHAPE presented an area of maximum difficulty in phonological discrimination for sign-naive participants. For all participants, contrasts based on ORIENTATION and LOCATION, which involve larger-scale articulators, were associated with robust categorical discrimination.