Many ways to see your feelings: Successful facial expression recognition occurs with diverse patterns of fixation distributions.

2020 
Facial expression recognition relies on the processing of diagnostic information from different facial regions. For example, successful recognition of anger versus disgust requires one to process information located in the eye/brow region or in the mouth/nose region, respectively. Yet how this information is extracted from the face is less clear. One widespread view, supported by cross-cultural experiments as well as neuropsychological case studies, is that the distribution of gaze fixations on specific diagnostic regions plays a critical role in the extraction of affective information. According to this view, emotion recognition is strongly related to the distribution of fixations across diagnostic regions. Alternatively, facial expression recognition may not rely solely on the exact pattern of fixations, but rather on other factors such as the processing of extrafoveal information. In the present study, we examined this question by characterizing and using individual differences in fixation distributions during facial expression recognition. We identified four groups of observers that differed robustly and consistently in how they distributed their fixations across face regions. In line with previous studies, we found that different facial emotion categories evoked distinct distributions of fixations according to their diagnostic facial regions. However, individuals' distinctive patterns of fixations were not correlated with emotion recognition: individuals who focused predominantly on the eyes, or on the mouth, achieved comparable emotion recognition accuracy. These findings suggest that extrafoveal processing may play a larger role in emotion recognition from faces than previously assumed. Consequently, successful emotion recognition can arise from diverse patterns of fixations.