Multi-Task Semi-Supervised Adversarial Autoencoding for Speech Emotion.

2019 
Despite the widespread use of supervised deep learning methods for affect recognition from speech, they are severely limited by the lack of sufficient labelled speech data. Considering the abundant availability of unlabelled data, this paper proposes a semi-supervised model that effectively utilises unlabelled data in a multi-task learning framework to improve the performance of speech emotion recognition. The proposed model adversarially learns a shared representation for two auxiliary tasks along with emotion identification as the main task. We consider speaker and gender identification as auxiliary tasks so that the model can operate on any large audio corpus. We demonstrate that, in a scenario with limited labelled training samples, one can significantly improve the performance of a supervised classification task by simultaneously training on additional auxiliary tasks for which large amounts of data are available. The proposed model is rigorously evaluated on both categorical and dimensional emotion classification tasks. Experimental results demonstrate that the proposed model achieves state-of-the-art performance on two publicly available datasets.
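The multi-task setup described above, a shared encoder feeding one head per task with the auxiliary losses added to the main emotion loss, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: all dimensions, names, and the task-loss weights (1.0 / 0.5 / 0.5) are assumed, and the adversarial regularisation of the autoencoder's latent space is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative, not taken from the paper)
n_frames, feat_dim, latent_dim = 8, 40, 16
n_emotions, n_speakers, n_genders = 4, 10, 2

# Shared encoder: a single linear projection standing in for the
# adversarial autoencoder's encoder
W_enc = rng.standard_normal((feat_dim, latent_dim)) * 0.1

# One linear classification head per task, all on the shared representation
W_emo = rng.standard_normal((latent_dim, n_emotions)) * 0.1  # main task
W_spk = rng.standard_normal((latent_dim, n_speakers)) * 0.1  # auxiliary
W_gen = rng.standard_normal((latent_dim, n_genders)) * 0.1   # auxiliary

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

x = rng.standard_normal((n_frames, feat_dim))  # acoustic feature vectors
h = np.tanh(x @ W_enc)                         # shared representation

# Random labels for illustration; for unlabelled emotion data the
# main-task term would simply be dropped while the auxiliary terms remain
y_emo = rng.integers(0, n_emotions, n_frames)
y_spk = rng.integers(0, n_speakers, n_frames)
y_gen = rng.integers(0, n_genders, n_frames)

# Combined multi-task objective: weighted sum of per-task losses
loss = (1.0 * cross_entropy(softmax(h @ W_emo), y_emo)
        + 0.5 * cross_entropy(softmax(h @ W_spk), y_spk)
        + 0.5 * cross_entropy(softmax(h @ W_gen), y_gen))
```

Because speaker and gender labels come essentially for free with any large audio corpus, the auxiliary terms can be computed on unlabelled-for-emotion data, which is what lets the shared encoder benefit from far more samples than the emotion head alone sees.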