INSTANTANEOUS FREQUENCY FILTER-BANK FEATURES FOR LOW RESOURCE SPEECH RECOGNITION USING DEEP RECURRENT ARCHITECTURES

2021 
Recurrent neural networks (RNNs) and their variants have achieved significant success in speech recognition. Long short-term memory (LSTM) networks and gated recurrent units (GRUs) are the two most popular variants; they overcome the vanishing gradient problem of plain RNNs and effectively learn long-term dependencies. Light gated recurrent units (Li-GRUs) are more compact versions of standard GRUs and have been shown to provide better recognition accuracy with significantly faster training. These RNN-inspired architectures invariably use magnitude-based features, while the phase information is generally ignored. We propose to incorporate features derived from the analytic phase of the speech signal into speech recognition with these RNN variants. Instantaneous frequency filter-bank (IFFB) features, derived from Fourier transform relations, performed on par with standard MFCC features for recurrent-unit-based acoustic models despite being computed from phase information only. System combinations of IFFB features with magnitude-based features achieved a lowest PER of 12.9%, a relative improvement of up to 16.8% over standalone MFCC features on TIMIT phone recognition with a Li-GRU-based architecture. IFFB features significantly outperformed modified group delay coefficient (MGDC) features in all our experiments.
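
For intuition, the sketch below shows one plausible way to compute analytic-phase-based filter-bank features of the kind described above: band-pass filter the signal into mel-spaced sub-bands, take the instantaneous frequency of each band as the time derivative of its analytic (Hilbert) phase, and average it over each analysis frame. The mel band edges, Hilbert-transform route, filter order, and frame parameters here are illustrative assumptions; the paper derives the instantaneous frequency from Fourier transform relations rather than explicit phase unwrapping, and its exact IFFB recipe may differ.

```python
# Minimal, illustrative sketch of analytic-phase filter-bank features.
# All design choices (mel band edges, Butterworth filters, Hilbert route,
# 25 ms / 10 ms framing) are generic assumptions, not the paper's recipe.
import numpy as np
from scipy.signal import hilbert, butter, sosfilt


def mel_band_edges(n_bands, sr, f_lo=64.0):
    """Mel-spaced band edges between f_lo and just below the Nyquist rate."""
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mels = np.linspace(hz_to_mel(f_lo), hz_to_mel(sr / 2 - 1.0), n_bands + 1)
    return mel_to_hz(mels)


def iffb_like_features(x, sr=16000, n_bands=26, frame_len=400, hop=160):
    """Frame-level instantaneous-frequency features, one value per band."""
    edges = mel_band_edges(n_bands, sr)
    n_frames = 1 + (len(x) - frame_len) // hop
    feats = np.zeros((n_frames, n_bands))
    for b in range(n_bands):
        # Band-pass filter the signal into one mel-spaced sub-band.
        sos = butter(4, [edges[b], edges[b + 1]], btype="band",
                     fs=sr, output="sos")
        band = sosfilt(sos, x)
        # Instantaneous frequency = time derivative of the analytic phase.
        phase = np.unwrap(np.angle(hilbert(band)))
        inst_freq = np.diff(phase) * sr / (2.0 * np.pi)
        # Average the instantaneous frequency over each analysis frame.
        for t in range(n_frames):
            feats[t, b] = inst_freq[t * hop: t * hop + frame_len].mean()
    return feats


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dummy = rng.standard_normal(16000)        # 1 s of noise as a stand-in
    print(iffb_like_features(dummy).shape)    # (n_frames, n_bands)
```

The resulting feature matrix plays the same role as a magnitude filter-bank or MFCC matrix and could be fed to an LSTM, GRU, or Li-GRU acoustic model, either alone or concatenated with magnitude-based features as in the system combinations reported above.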