Maximal relevance and optimal learning machines

2021 
We show that the mutual information between the representation of a learning machine and the hidden features that it extracts from data is bounded from below by the relevance, i.e. the entropy of the model's energy distribution. Models with maximal relevance -- which we call Optimal Learning Machines (OLM) -- are therefore expected to extract maximally informative representations. We explore this principle in a range of models. For fully connected Ising models, we show that (i) OLM are characterised by inhomogeneous distributions of couplings, and that (ii) their learning performance is affected by sub-extensive features that are elusive to a thermodynamic treatment. On specific learning tasks, we find that likelihood maximisation is achieved by models with maximal relevance. Training of Restricted Boltzmann Machines on the MNIST benchmark shows that learning is associated with a broadening of the spectrum of energy levels, and that the internal representation of the hidden layer approaches the maximal relevance achievable on a finite dataset. Finally, we discuss a Gaussian learning machine, which clarifies that learning hidden features is conceptually different from parameter estimation.
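As a concrete illustration of the definition above, the following minimal Python sketch computes the relevance of a small fully connected Ising model at unit temperature, taken here as the entropy of the energy distribution obtained by summing Boltzmann weights over each energy level. The system size, coupling scale, and temperature are arbitrary choices for illustration and are not taken from the paper.

```python
import numpy as np
from itertools import product

# Illustrative sketch (not from the paper): the "relevance" is the entropy of
# the model's energy distribution, H[E] = -sum_E p(E) log p(E).
# System size n, coupling scale, and temperature (beta = 1) are arbitrary choices.

rng = np.random.default_rng(0)
n = 8                                    # number of spins, small enough to enumerate
J = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))
J = np.triu(J, 1)                        # keep each pair (i < j) once

def energy(s):
    """Energy of a spin configuration s in {-1,+1}^n: E(s) = -sum_{i<j} J_ij s_i s_j."""
    return -s @ J @ s

# Enumerate all 2^n configurations and build the Boltzmann distribution p(s).
states = np.array(list(product([-1, 1], repeat=n)))
E = np.array([energy(s) for s in states])
p_s = np.exp(-E)
p_s /= p_s.sum()

# Aggregate p(s) over energy levels to obtain p(E), then take its entropy.
levels, inverse = np.unique(np.round(E, 10), return_inverse=True)
p_E = np.zeros(len(levels))
np.add.at(p_E, inverse, p_s)

relevance = -np.sum(p_E * np.log(p_E))
print(f"{len(levels)} energy levels, relevance H[E] = {relevance:.3f} nats")
```

In this picture, comparing the relevance across coupling matrices J (e.g. homogeneous versus inhomogeneous couplings) gives a simple way to see how the shape of the energy spectrum controls H[E], which is the quantity the abstract argues should be maximal in an optimal learning machine.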