Maximum F1-Score Discriminative Training Criterion for Automatic Mispronunciation Detection

2015 
We carry out an in-depth investigation of a newly proposed Maximum F1-score Criterion (MFC), a discriminative training objective for Goodness of Pronunciation (GOP) based automatic mispronunciation detection that uses Gaussian Mixture Model-Hidden Markov Model (GMM-HMM) acoustic models. The MFC formulation seeks to optimize F1-score directly by converting the non-differentiable F1-score function into a continuous objective function that is amenable to optimization. We present a model-space training algorithm for MFC that uses extended Baum-Welch-like update equations derived from the weak-sense auxiliary function method. We then present MFC-based feature-space discriminative training: a projection matrix is trained to map Gaussian posteriors to a feature space of conventional dimensionality, and the projected features are appended to traditional spectral features. Mispronunciation detection experiments show that both MFC-based model-space training and feature-space training are effective in improving F1-score and other commonly used evaluation metrics. We also show that MFC training in both the feature space and the model space outperforms either model-space or feature-space training alone, and improves F1-score by about 11.6% over the maximum likelihood (ML) trained baseline. Further, we review and compare mispronunciation detection results obtained with MFC and with traditional training criteria that minimize word error rate in speech recognition. The experimental analysis and comparison provide useful insight into the correlation between F1-score maximization and the optimization of these training criteria.
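To make the central idea of the abstract concrete, the sketch below illustrates one common way a non-differentiable F1-score over hard GOP-threshold decisions can be relaxed into a continuous objective: each accept/reject decision is replaced by a sigmoid of the GOP score. This is a minimal, hypothetical illustration, not the paper's exact MFC formulation; the names `gop_scores`, `labels`, `threshold`, and `sharpness` are assumptions introduced here for clarity.

```python
# Hypothetical sketch: a smooth surrogate for F1-score in GOP-based
# mispronunciation detection. Hard decisions (GOP below a threshold =>
# flag as mispronounced) are relaxed with a sigmoid so the objective
# becomes differentiable and can be optimized with gradient-style methods.
import numpy as np


def soft_f1(gop_scores, labels, threshold=0.0, sharpness=5.0):
    """Continuous surrogate of the F1-score.

    gop_scores : GOP score per phone segment (lower = worse pronunciation).
    labels     : 1 if the segment is truly mispronounced, else 0.
    threshold  : GOP value below which a segment would be flagged.
    sharpness  : sigmoid slope; larger values approach the hard F1-score.
    """
    gop_scores = np.asarray(gop_scores, dtype=float)
    labels = np.asarray(labels, dtype=float)

    # Soft probability of "detected as mispronounced" replaces the hard decision.
    detected = 1.0 / (1.0 + np.exp(sharpness * (gop_scores - threshold)))

    tp = np.sum(detected * labels)          # soft true positives
    fp = np.sum(detected * (1.0 - labels))  # soft false positives
    fn = np.sum((1.0 - detected) * labels)  # soft false negatives

    return 2.0 * tp / (2.0 * tp + fp + fn + 1e-12)


if __name__ == "__main__":
    gop = [-2.1, 0.8, -0.3, 1.5, -1.7]   # toy GOP scores
    lab = [1, 0, 1, 0, 1]                # 1 = mispronounced
    print("soft F1 =", soft_f1(gop, lab))
```

As `sharpness` grows, the surrogate converges to the ordinary F1-score computed from hard threshold decisions; the smooth version is what makes direct optimization of model or feature-space parameters tractable.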