A spiking network model for semantic representation and replay-based association acquisition

2021 
The ability to form and store several types of associations between representations of natural images is an area of ongoing research in artificial deep neural networks, which may be informed by biologically inspired computational models. Replay of sensory stimuli through cortical-hippocampal connections is hypothesized to train associations between events, acting as a powerful form of associative learning. While models of associative memory and sensory processing have been studied extensively, spiking models that combine sensory processing, reasoning over associations, and learning of representations have not been previously demonstrated. Such networks would be suitable for reasoning and learning from visual data on neuromorphic hardware. In this work, we demonstrate a novel visual reasoning network capable of representing semantic relationships and learning new associations through replay-based learning with spiking models on natural images. We demonstrate this by associating natural images from Tiny ImageNet with a knowledge graph derived from WordNet, and we show that relations in the knowledge graph can be accurately traversed over multiple sequential queries. We also demonstrate learning of a novel association after replayed presentations of natural images. This represents a novel capability for machine learning and reasoning with spiking neural networks that may be amenable to neuromorphic hardware.
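As an illustration of the kind of structure the abstract refers to (and not the paper's spiking implementation), the sketch below derives a small hypernym-relation graph from WordNet via NLTK and traverses it over several sequential queries. The seed words, function names, and traversal policy are hypothetical and chosen only for illustration.

```python
# A minimal sketch, assuming NLTK and its WordNet corpus are installed
# (e.g. nltk.download('wordnet')). Builds a hypernym graph from WordNet
# and follows its relations for a fixed number of sequential queries.
from collections import defaultdict
from nltk.corpus import wordnet as wn

def build_hypernym_graph(seed_words):
    """Map each noun synset name to the names of its direct hypernyms,
    expanded transitively up to the WordNet root."""
    graph = defaultdict(set)
    frontier = [s for w in seed_words for s in wn.synsets(w, pos=wn.NOUN)]
    seen = set()
    while frontier:
        synset = frontier.pop()
        if synset.name() in seen:
            continue
        seen.add(synset.name())
        for hyper in synset.hypernyms():
            graph[synset.name()].add(hyper.name())
            frontier.append(hyper)
    return graph

def traverse(graph, start, steps):
    """Follow hypernym edges upward for a fixed number of queries."""
    current, path = start, [start]
    for _ in range(steps):
        parents = graph.get(current)
        if not parents:
            break
        current = sorted(parents)[0]  # deterministic choice among parents
        path.append(current)
    return path

if __name__ == "__main__":
    # Seed words are arbitrary; Tiny ImageNet class labels could be used instead.
    g = build_hypernym_graph(["dog", "cat", "bird"])
    print(traverse(g, "dog.n.01", steps=3))
```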