Predictive coding

Predictive coding is a theory of cognition in which the brain constantly generates and updates a mental model of sensory input. The model is broadcast through the network of sensory-processing brain regions. In each region, the propagated model is compared with the incoming sensory input; if the two do not match, a prediction error is sent back up the network and the model is revised. Barrett and Simmons (2015) describe the proposed cortical circuitry:

"...prediction neurons... in deep layers of agranular cortex drive active inference by sending sensory predictions via projections... to supragranular layers of dysgranular and granular sensory cortices. Prediction-error neurons... in the supragranular layers of granular cortex compute the difference between the predicted and received sensory signal, and send prediction-error signals via projections... back to the deep layers of agranular cortical regions. Precision cells... tune the gain on predictions and prediction error dynamically, thereby giving these signals reduced (or, in some cases, greater) weight depending on the relative confidence in the descending predictions or the reliability of incoming sensory signals." (Barrett & Simmons, 2015)

Theoretical ancestors of predictive coding date back at least to 1860 and Helmholtz's concept of unconscious inference: the idea that the human brain fills in visual information to make sense of a scene. For example, if one object is relatively smaller than another in the visual field, the brain uses that size difference as a likely cue of depth, so that the perceiver ultimately (and involuntarily) experiences depth.

The understanding of perception as an interaction between sensory stimuli (bottom-up) and conceptual knowledge (top-down) was developed further by the psychologist Jerome Bruner, who, starting in the 1940s, studied the ways in which needs, motivations, and expectations influence perception, research that came to be known as 'New Look' psychology. In a seminal 1981 paper, McClelland and Rumelhart examined the interaction between processing features (lines and contours), which form letters, which in turn form words. They found that although features alone can suggest the presence of a word, people identified letters faster when the letters were situated in the context of a word than when they appeared in a non-word lacking semantic context. Their interactive activation model thus describes perception as the meeting of top-down (conceptual) and bottom-up (sensory) elements.

In the late 1990s, the idea of top-down and bottom-up processing was translated into a computational model of vision by Rao and Ballard (1999). Their paper demonstrated that a generative model of a scene (top-down processing) could receive feedback via error signals registering how much the visual input deviated from the prediction, and could use those signals to update its predictions. The model replicated well-established receptive-field effects, as well as less well understood extra-classical receptive-field effects such as end-stopping.
Today, the fields of computer science and cognitive science incorporate these same concepts in the multilayer generative models that underlie machine learning and neural networks (Hinton, 2010). Most of the research literature in the field concerns sensory perception, particularly vision, which is the most easily conceptualized case; however, the predictive coding framework can also be applied to other neural systems. Taking the sensory system as an example, the brain solves the seemingly intractable problem of modelling the distal causes of sensory input through a version of Bayesian inference. It does this by modelling predictions of lower-level sensory inputs via backward connections from relatively higher levels in a cortical hierarchy (Clark, 2013). Constrained by the statistical regularities of the outside world (and by certain evolutionarily prepared predictions), the brain encodes top-down generative models at various temporal and spatial scales in order to predict, and effectively suppress, sensory inputs rising from lower levels. A comparison between predictions (priors) and sensory input (likelihood) yields a difference measure (e.g. prediction error, free energy, or surprise) which, if sufficiently large relative to expected statistical noise, causes the generative model to update so that it better predicts sensory input in the future.
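The comparison between prior prediction and sensory likelihood can be illustrated with a hedged sketch of a single Gaussian belief update written in prediction-error form. This example is an assumption for exposition, not a model taken from the cited literature; note how precision plays the gain-tuning role attributed to "precision cells" in the Barrett and Simmons quotation above.

```python
def precision_weighted_update(mu_prior, pi_prior, y, pi_sensory):
    """One Gaussian (Bayesian) belief update in prediction-error form:
    the posterior mean shifts the prior toward the data by an amount
    scaled by the relative precision (inverse variance) of the input."""
    error = y - mu_prior                         # prediction error
    gain = pi_sensory / (pi_prior + pi_sensory)  # precision weighting of the error
    mu_post = mu_prior + gain * error            # revised prediction
    pi_post = pi_prior + pi_sensory              # precisions add for Gaussians
    return mu_post, pi_post

# A reliable input (high sensory precision) moves the belief a lot ...
print(precision_weighted_update(mu_prior=0.0, pi_prior=1.0, y=2.0, pi_sensory=9.0))
# ... while a noisy input (low sensory precision) barely moves it.
print(precision_weighted_update(mu_prior=0.0, pi_prior=1.0, y=2.0, pi_sensory=0.1))
```

The same prediction error moves the belief much further when the sensory signal is precise, which is the sense in which precision "tunes the gain" on prediction errors.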

[ "Coding (social sciences)", "Adaptive predictive coding", "perceptual inference" ]
Parent Topic
Child Topic
    No Parent Topic