Vocal Acoustic and Facial Action Features in Cross-cultural Emotion Expression

2009 
Abstract: A cross-cultural emotion perception experiment was conducted using vocal-facial emotional stimuli expressing seven emotions, produced by two Chinese and two Japanese speakers. Forty Chinese and forty Japanese subjects were asked to rate the emotional state, facial action, and speech acoustic features of each stimulus. Results of MDS and regression analysis showed that consistent perceptual patterns exist across the two cultural backgrounds for both video-only and audio-only stimuli. Both Japanese and Chinese subjects captured more information from acoustic and facial features in native materials than in non-native materials, even when the materials carried no linguistic information. Language-learning effects were observed in cross-language perception: the perceptual results of language learners fell between those of non-learners and those of native speakers. Japanese subjects showed more confidence in identifying vocal acoustic features, i.e., giving higher rating scores, while Chinese subjects showed more confidence in identifying facial action features.
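The analysis pipeline named in the abstract (multidimensional scaling of perceptual data, followed by regression of feature ratings onto the resulting dimensions) can be sketched as below. This is a minimal illustration, assuming a precomputed emotion dissimilarity matrix and a single acoustic feature rating per emotion; the emotion labels, matrix values, and ratings are placeholders, not the study's data.

import numpy as np
from sklearn.manifold import MDS
from sklearn.linear_model import LinearRegression

emotions = ["neutral", "happy", "sad", "angry", "fear", "disgust", "surprise"]

# Hypothetical symmetric 7x7 dissimilarity matrix, e.g. derived from how
# often raters confused each pair of emotions (placeholder random values).
rng = np.random.default_rng(0)
d = rng.random((7, 7))
dissim = (d + d.T) / 2
np.fill_diagonal(dissim, 0.0)

# Embed the seven emotions in a 2-D perceptual space via MDS.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)

# Hypothetical mean rating of one acoustic feature (e.g. pitch height) per
# emotion; regress it onto the MDS coordinates to interpret the dimensions.
feature_ratings = rng.random(7)
reg = LinearRegression().fit(coords, feature_ratings)
print("MDS coordinates:\n", coords)
print("Feature weights on MDS dimensions:", reg.coef_)
print("R^2:", reg.score(coords, feature_ratings))

In such an analysis, a high R^2 for a given feature suggests that feature helps explain the layout of emotions in the perceptual space recovered by MDS.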