High Dimensional Inference with Random Maximum A-Posteriori Perturbations

2016 
In this work we present a new approach to high-dimensional statistical inference that is based on optimization and random perturbations. This framework injects randomness into maximum a-posteriori (MAP) predictors by randomly perturbing their potential functions. When the perturbations are low dimensional, sampling the perturb-max prediction is as efficient as MAP optimization. A classic result from extreme value statistics asserts that perturb-max operations generate unbiased samples from the Gibbs distribution when high-dimensional perturbations are used. Unfortunately, the computational cost of generating so many high-dimensional random variables can be prohibitive. In this work we show that the expected value of perturb-max inference with low-dimensional perturbations can be used sequentially to generate unbiased samples from the Gibbs distribution. We also show that the expected value of the maximal perturbations is a natural bound on the entropy of such perturb-max models. Finally, we describe the measure-concentration properties of perturb-max values and show that the deviation of their sampled average from its expectation decays exponentially in the number of samples.
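As a concrete illustration of the extreme-value result the abstract invokes, the sketch below (not the authors' code; the toy model and names such as `theta` and `perturb_max_sample` are illustrative assumptions) adds an i.i.d. Gumbel perturbation to every configuration's potential and takes the argmax. It then checks empirically that the resulting samples follow the Gibbs distribution and that the expected maximum recovers the log-partition function log Z.

```python
# Minimal sketch of the perturb-max (Gumbel-max) operation described in
# the abstract, assuming a small discrete model; not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

# Potential function over four configurations and its Gibbs distribution
# p(x) = exp(theta(x)) / Z.
theta = np.array([1.0, 0.5, -0.2, 2.0])
gibbs = np.exp(theta) / np.exp(theta).sum()

def perturb_max_sample(theta, rng):
    """One perturb-max sample: argmax of potentials plus i.i.d. Gumbel noise."""
    gamma = rng.gumbel(size=theta.shape)  # one perturbation per configuration
    return int(np.argmax(theta + gamma))

n = 100_000
samples = np.array([perturb_max_sample(theta, rng) for _ in range(n)])
empirical = np.bincount(samples, minlength=theta.size) / n
print("Gibbs distribution:  ", np.round(gibbs, 3))
print("Perturb-max samples: ", np.round(empirical, 3))

# The same classic result gives E[max_x(theta(x) + gamma(x))] = log Z.
maxima = theta[None, :] + rng.gumbel(size=(n, theta.size))
print("E[max]:", round(maxima.max(axis=1).mean(), 3),
      " log Z:", round(np.log(np.exp(theta).sum()), 3))
```

Note that this sketch uses one perturbation per configuration, i.e., the high-dimensional regime whose cost the abstract calls prohibitive at scale; the paper's contribution concerns reusing low-dimensional perturbations sequentially to get the same unbiased samples.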