Deep Memory Update

2021 
Recurrent neural networks are key tools for sequential data processing. Existing architectures support only a limited class of operations that these networks can apply to their memory state. In this paper, we address this issue and introduce a recurrent neural module called Deep Memory Update (DMU). This module is an alternative to the well-established LSTM and GRU but, unlike them, it processes its lagged memory state with a universal function approximator. In addition, the module normalizes the lagged memory to avoid exploding or vanishing gradients in backpropagation through time. The subnetwork that transforms the memory state of DMU can be arbitrary. Experimental results presented here confirm that these properties allow the network to compete with, and often outperform, state-of-the-art architectures such as LSTM and GRU.
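The abstract gives only a high-level description, but its two key ingredients (normalizing the lagged memory, then transforming it with an arbitrary subnetwork) can be illustrated with a minimal PyTorch-style sketch. The class name DMUCell, the two-layer MLP, the use of LayerNorm as the normalizer, and the GRU-like update gate are all assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn


class DMUCell(nn.Module):
    """Hypothetical sketch of a Deep Memory Update (DMU) cell.

    Assumptions not stated in the abstract: the memory subnetwork is a
    two-layer MLP, normalization is LayerNorm, and the new memory is a
    gated mix of the old memory and the subnetwork's candidate output.
    """

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        # Normalize the lagged memory so repeated updates stay well scaled.
        self.norm = nn.LayerNorm(hidden_size)
        # Arbitrary "universal approximator" subnetwork; depth and width
        # are free design choices per the abstract.
        self.subnet = nn.Sequential(
            nn.Linear(input_size + hidden_size, hidden_size),
            nn.Tanh(),
            nn.Linear(hidden_size, 2 * hidden_size),  # candidate memory + gate
        )

    def forward(self, x: torch.Tensor, h_prev: torch.Tensor) -> torch.Tensor:
        h_norm = self.norm(h_prev)  # normalize the lagged memory state
        candidate, gate = self.subnet(
            torch.cat([x, h_norm], dim=-1)
        ).chunk(2, dim=-1)
        z = torch.sigmoid(gate)  # assumed GRU-like update gate
        return (1 - z) * h_prev + z * torch.tanh(candidate)


# Usage: unroll the cell over a short random sequence.
cell = DMUCell(input_size=16, hidden_size=32)
h = torch.zeros(1, 32)
for x_t in torch.randn(10, 1, 16):
    h = cell(x_t, h)
```

The gated mix is one plausible way to realize a "memory update"; the paper's actual combination rule may differ, and the subnetwork could equally be any other differentiable function approximator.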