Unpacking Information Bottlenecks: Unifying Information-Theoretic Objectives in Deep Learning

2020 
The information bottleneck (IB) principle offers both a mechanism to explain how deep neural networks train and generalize and a regularized objective with which to train models. However, multiple competing objectives have been proposed based on this principle. Moreover, the information-theoretic quantities in the objective are difficult to compute for large deep neural networks, which limits its use as a training objective. In this work, we review these quantities, compare and unify previously proposed objectives, and relate them to surrogate objectives more amenable to optimization. We find that these surrogate objectives allow us to apply the information bottleneck to modern neural network architectures. We demonstrate our insights on Permutation-MNIST, MNIST and CIFAR10.
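For context, the classical IB objective (Tishby, Pereira, and Bialek), which we assume underlies the objectives the abstract refers to, seeks a representation Z of the input X that is maximally compressive while remaining predictive of the target Y:

\min_{p(z \mid x)} \; I(X; Z) - \beta \, I(Z; Y)

Here I(\cdot;\cdot) denotes mutual information and \beta > 0 controls the trade-off between compression and prediction; the specific variants and surrogate objectives compared in the paper may differ in detail.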