Optimal Coding in Biological and Artificial Neural Networks.

2016 
Feature representations in both biological neural networks in the primate ventral stream and artificial convolutional neural networks trained on object recognition increase in complexity and receptive field size with layer depth. Strikingly, empirical evidence indicates that this analogy extends to the specific representations learned in each layer. This suggests that biological and artificial neural networks share a fundamental organising principle. We shed light on this principle in the framework of optimal coding. Specifically, we first investigate which properties of a code render it robust to transmission over noisy channels and formally prove that, for equientropic channels, an upper bound on the expected minimum decoding error is attained for codes with maximum marginal entropy. We then show that the pairwise correlation of units in a deep layer of a neural network trained on an object recognition task increases when the distribution of input images is perturbed, i.e., that the network exhibits properties of an optimally coding system. By analogy, this suggests that the layer-wise similarity of feature representations in biological and artificial neural networks results from optimal coding that enables robust transmission of object information over noisy channels. Because we find that in equientropic channels the upper bound on the expected minimum decoding error is independent of the class-conditional entropy, our work further provides a plausible explanation for why optimal codes can be learned in unsupervised settings.
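The link the abstract draws between maximum marginal entropy and a decoding-error bound can be made concrete with a standard information-theoretic sketch. The following is a hedged reconstruction via Fano's inequality, not the paper's actual proof; the paper's definitions of "equientropic" and of the bound may differ in detail.

```latex
% Hedged sketch of one standard route to the claim, not the paper's proof.
% For a code X transmitted over a channel with output Y, Fano's inequality
% lower-bounds the minimum decoding error P_e:
\begin{align}
  P_e &\ge \frac{H(X \mid Y) - 1}{\log |\mathcal{X}|} && \text{(Fano)} \\
  I(X;Y) &= H(Y) - H(Y \mid X) \\
  H(X \mid Y) &= H(X) - I(X;Y) = H(X) - H(Y) + H(Y \mid X)
\end{align}
% If the channel is equientropic in the sense that the noise entropy
% H(Y | X) is fixed, then maximizing the marginal entropy H(Y) maximizes
% I(X;Y), minimizes H(X | Y), and thereby tightens the error bound.
```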
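The empirical claim, that pairwise correlations of deep-layer units change when the input distribution is perturbed, corresponds to a simple measurement. Below is a minimal sketch of such a measurement, assuming a torchvision-pretrained ResNet-18, its average-pooling layer as the "deep layer", and additive Gaussian pixel noise as the perturbation; none of these choices are taken from the paper.

```python
# Minimal sketch: mean absolute pairwise correlation of deep-layer units
# under a clean vs. a perturbed input distribution. The network, the layer,
# and the Gaussian perturbation are illustrative assumptions, not the
# paper's exact protocol.
import torch
import torchvision.models as models
import numpy as np

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

activations = []
def hook(module, inp, out):
    # Flatten spatial dims: one row per image, one column per unit.
    activations.append(out.flatten(start_dim=1).detach().numpy())

handle = model.avgpool.register_forward_hook(hook)  # a deep layer

def mean_abs_correlation(images):
    """Mean absolute off-diagonal correlation of unit activations."""
    activations.clear()
    with torch.no_grad():
        model(images)
    acts = np.concatenate(activations, axis=0)   # (n_images, n_units)
    corr = np.corrcoef(acts, rowvar=False)       # (n_units, n_units)
    off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
    return np.nanmean(np.abs(off_diag))          # NaN-safe: dead units

images = torch.rand(64, 3, 224, 224)                  # stand-in batch
perturbed = images + 0.3 * torch.randn_like(images)   # perturbed inputs

print("clean:    ", mean_abs_correlation(images))
print("perturbed:", mean_abs_correlation(perturbed))
handle.remove()
```

Under the abstract's claim, the perturbed batch should yield the larger value; with real image data in place of the random stand-in batch, the same two calls apply unchanged.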