Predictive learning extracts latent space representations from sensory observations

2019 
Neural networks have achieved many recent successes in solving sequential processing and planning tasks. Their success is often ascribed to the emergence of the task's low-dimensional latent structure in the network activity - i.e., in the learned neural representations. Similarly, biological neural circuits and in particular the hippocampus may produce representations that organize semantically related episodes. Here, we investigate the hypothesis that representations with low-dimensional latent structure, reflecting such semantic organization, result from learning to predict observations about the world. Specifically, we ask whether and when network mechanisms for sensory prediction coincide with those for extracting the underlying latent variables. Using a recurrent neural network model trained to predict a sequence of observations in a simulated spatial navigation task, we show that network dynamics exhibit low-dimensional but nonlinearly transformed representations of sensory inputs that capture the latent structure of the sensory environment. We quantify these results using nonlinear measures of intrinsic dimensionality which highlight the importance of the predictive aspect of neural representations, and provide mathematical arguments for when and why these representations emerge. We focus throughout on how our results can aid the analysis and interpretation of experimental data.
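The abstract mentions "nonlinear measures of intrinsic dimensionality" without naming a specific estimator. As a minimal illustration of the idea, the sketch below uses the Two-NN estimator (one common nonlinear intrinsic-dimensionality measure; its use here is an assumption, not necessarily the paper's method) on points lying on a ring embedded nonlinearly in three dimensions, analogous to a one-dimensional latent position variable underlying higher-dimensional observations:

```python
import numpy as np

def two_nn_dimension(X):
    """Two-NN intrinsic-dimensionality estimate: for each point, take the
    ratio mu = r2/r1 of its second- to first-nearest-neighbor distances;
    the maximum-likelihood estimate of dimension is N / sum(log(mu))."""
    # Pairwise Euclidean distances, with the diagonal masked out.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    D.sort(axis=1)
    r1, r2 = D[:, 0], D[:, 1]
    mu = r2 / r1
    return len(X) / np.log(mu).sum()

# Hypothetical data: a 1-D latent variable (angle on a ring, as in a
# circular navigation track) mapped nonlinearly into 3-D observations.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 500)
X = np.stack([np.cos(theta), np.sin(theta), np.sin(2.0 * theta)], axis=1)

d_hat = two_nn_dimension(X)  # expected to be close to 1, the latent dimension
```

Because the estimator works from local neighbor-distance ratios rather than global linear projections, it recovers the one-dimensional latent structure even though the embedding is curved, which is the sense in which such measures are "nonlinear" compared with, e.g., counting principal components.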