Compensatory cross-modal effects of sentence context on visual word recognition in adults

2021 
Reading involves mapping combinations of a learned visual code (letters) onto meaning. Previous studies have shown that when visual word recognition is challenged by visual degradation, one way to mitigate these negative effects is to provide "top-down" contextual support through a congruent written sentence context. Crowding is a naturally occurring visual phenomenon that impairs object recognition and also affects the recognition of written stimuli during reading. Access to a supporting semantic context via written text is therefore itself vulnerable to the detrimental impact of crowding on letters and words. Here, we suggest that an auditory sentence context may provide an alternative source of semantic information that is not influenced by crowding, thus providing "top-down" support cross-modally. The goal of the current study was to investigate whether adult readers can cross-modally compensate for crowding in visual word recognition by using an auditory sentence context. The results show a significant cross-modal interaction between the congruency of the auditory sentence context and visual crowding, suggesting that interactions supporting reading can occur across multiple levels of processing and across different modalities. These findings highlight the need for reading models to specify in greater detail how top-down, cross-modal, and interactive mechanisms may allow readers to compensate for deficiencies at early stages of visual processing.