Learning Separable Time-Frequency Filterbanks for Audio Classification

2021 
State-of-the-art audio classification systems often apply deep neural networks to hand-crafted features (e.g., spectrogram-based representations) instead of learning features directly from raw audio. Moreover, these audio networks have millions of parameters that need to be learned, which creates a great demand for computational resources and training data. In this paper, we aim to learn audio representations directly from raw audio and, at the same time, mitigate the training burden by employing a lightweight architecture. In particular, we propose to learn separable filters, each parametrized by only a few variables, namely center frequency and bandwidth, which facilitates training and offers interpretability of the learned representations. The generality of the proposed method is demonstrated by applying it to two applications: 1) speaker identification and 2) acoustic event recognition. Experimental results indicate its effectiveness on these applications, especially when only a small amount of training data is available.
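
Since the abstract names only the per-filter parameters (center frequency and bandwidth) and not the exact filter family, the following is a minimal PyTorch sketch of the general idea, assuming a Gabor-style (Gaussian-windowed cosine) 1-D filterbank applied to raw waveforms. The class name `ParametricFilterbank` and all defaults (40 filters, 401-tap kernels, 16 kHz) are hypothetical, and the paper's actual separable time-frequency design may differ; the point illustrated is that each filter contributes only two trainable scalars, versus `kernel_size` free weights for an ordinary convolutional filter.

```python
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class ParametricFilterbank(nn.Module):
    """Bank of 1-D time-domain filters, each defined by two trainable
    scalars: a center frequency and a bandwidth. A Gabor (Gaussian-windowed
    cosine) shape is assumed here; the paper's filter family may differ."""

    def __init__(self, n_filters=40, kernel_size=401, sample_rate=16000):
        super().__init__()
        self.kernel_size = kernel_size
        # Learnable per-filter parameters: center frequencies (Hz) spread
        # linearly over the spectrum, plus a shared initial bandwidth (Hz).
        self.center_freq = nn.Parameter(
            torch.linspace(30.0, sample_rate / 2 - 100.0, n_filters))
        self.bandwidth = nn.Parameter(torch.full((n_filters,), 100.0))
        # Fixed (non-trainable) time axis in seconds, centered at zero.
        t = (torch.arange(kernel_size) - kernel_size // 2) / sample_rate
        self.register_buffer("t", t)

    def forward(self, x):
        # x: (batch, 1, samples) raw waveform.
        # Gaussian envelope whose width is set by the learned bandwidth...
        env = torch.exp(
            -0.5 * (2 * math.pi * self.bandwidth[:, None] * self.t[None, :]) ** 2)
        # ...modulated by a cosine carrier at the learned center frequency.
        carrier = torch.cos(
            2 * math.pi * self.center_freq[:, None] * self.t[None, :])
        kernels = (env * carrier).unsqueeze(1)  # (n_filters, 1, kernel_size)
        # Same-length convolution: output is (batch, n_filters, samples).
        return F.conv1d(x, kernels, padding=self.kernel_size // 2)


fb = ParametricFilterbank()
wave = torch.randn(2, 1, 16000)  # two one-second dummy waveforms at 16 kHz
features = fb(wave)              # -> shape (2, 40, 16000)
print(features.shape)
```

Because the filter kernels are synthesized from the two scalars on every forward pass, gradients flow back to the center frequencies and bandwidths directly, which is what keeps the parameter count low and makes the learned representation easy to inspect.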