Efficient Text Classification with Echo State Networks

2021 
We consider echo state networks (ESNs) for text classification. More specifically, we investigate the learning capabilities of ESNs with pre-trained word embeddings as input features, trained on the IMDb and TREC sentiment and question classification datasets, respectively. First, we introduce a customized training paradigm for processing multiple input time series (the input texts) associated with categorical targets (their corresponding classes). For sentiment tasks, we use an additional frozen attention mechanism based on an external lexicon, which therefore incurs only negligible computational cost. Within this paradigm, ESNs can be trained in tens of seconds on a GPU. We show that ESNs significantly outperform Ridge regression baselines provided with the same embedded features. ESNs also compete with classical Bi-LSTM networks while training up to 23 times faster. These results show that ESNs are robust, efficient and fast candidates for text classification tasks. Overall, this study falls within the context of light and fast-to-train models for NLP.
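The training paradigm described above can be illustrated with a minimal sketch: a frozen random reservoir transforms each sequence of word embeddings into a fixed-size state vector (here by mean pooling over time), and only a closed-form ridge-regression readout is trained. All dimensions, scaling constants, and the pooling choice below are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the paper's actual hyperparameters are not given here.
embed_dim, reservoir_dim, n_classes = 50, 200, 2

# Frozen random weights: input projection and recurrent matrix, rescaled so the
# spectral radius is below 1 (a common sufficient condition for the echo state property).
W_in = rng.uniform(-0.1, 0.1, (reservoir_dim, embed_dim))
W = rng.uniform(-0.5, 0.5, (reservoir_dim, reservoir_dim))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def reservoir_states(embeddings):
    """Run one text (a sequence of word-embedding vectors) through the reservoir
    and return the mean state as a fixed-size feature vector."""
    x = np.zeros(reservoir_dim)
    states = []
    for u in embeddings:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x)
    return np.mean(states, axis=0)

def train_readout(texts, labels, ridge=1e-2):
    """Fit the only trained component: a ridge-regression readout mapping pooled
    reservoir states to one-hot class targets (closed-form solution)."""
    X = np.stack([reservoir_states(t) for t in texts])
    Y = np.eye(n_classes)[labels]
    return np.linalg.solve(X.T @ X + ridge * np.eye(reservoir_dim), X.T @ Y)

def predict(W_out, text):
    """Classify a text as the argmax of the linear readout output."""
    return int(np.argmax(reservoir_states(text) @ W_out))
```

Because the reservoir is never updated, training reduces to one forward pass per text plus a single linear solve, which is what makes the tens-of-seconds training times plausible.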