Sample-Efficient Optimization Using Bayesian Neural Networks

2018 
Multiple problems in robotics, vision, and graphics can be considered as optimization problems, in which the loss surface can be evaluated only at a collection of sample locations and the problem is regularized with an implicit or explicit prior. In some problems, however, samples are expensive to obtain. This motivates consideration of sample-efficient optimization. A successful approach has been to choose new points to evaluate by considering a distribution over plausible surfaces conditioned on all previous points and their evaluations. To do this, the distribution must be updated as each new evaluation is acquired, which has motivated the development of Bayesian methods that update a prior to a posterior over functions. By far the most common prior distribution in use for this application is the Gaussian process. Here, we consider another family of priors, namely Bayesian Neural Networks. We argue that these exhibit strengths that are different from, or complementary to, those of Gaussian processes, and show that they are competitive or superior on a wide range of test optimization problems.
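The procedure the abstract describes, maintaining a posterior over plausible surfaces and choosing the next evaluation from it, can be sketched as a standard Bayesian optimization loop. The sketch below is not the paper's method: it stands in for a full Bayesian neural network with a common lightweight approximation, Bayesian linear regression on random ReLU features (a "neural-linear" surrogate), and the toy objective, feature width, and upper-confidence-bound acquisition are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Toy 1-D objective standing in for an expensive-to-sample loss surface.
    return np.sin(3 * x) + 0.5 * x

# Neural-linear surrogate: fixed random ReLU features plus Bayesian linear
# regression on top, giving a closed-form posterior over functions.
W = rng.normal(size=64)
b = rng.normal(size=64)

def features(x):
    return np.maximum(0.0, x[:, None] * W + b)  # shape (n, 64)

def posterior(X, y, alpha=1.0, noise=0.1):
    # Closed-form Gaussian posterior over the output-layer weights.
    Phi = features(X)
    A = Phi.T @ Phi / noise**2 + alpha * np.eye(Phi.shape[1])
    cov_w = np.linalg.inv(A)
    mean_w = cov_w @ Phi.T @ y / noise**2
    return mean_w, cov_w

def predict(x, mean_w, cov_w, noise=0.1):
    # Predictive mean and standard deviation at query points x.
    Phi = features(x)
    mu = Phi @ mean_w
    var = np.sum((Phi @ cov_w) * Phi, axis=1) + noise**2
    return mu, np.sqrt(var)

# Bayesian optimization loop: refit the posterior after every new
# evaluation and pick the next point by upper confidence bound.
X = rng.uniform(-2, 2, size=3)
y = objective(X)
grid = np.linspace(-2, 2, 401)
for _ in range(10):
    mean_w, cov_w = posterior(X, y)
    mu, sigma = predict(grid, mean_w, cov_w)
    x_next = grid[np.argmax(mu + 2.0 * sigma)]  # explore/exploit trade-off
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

best = X[np.argmax(y)]
```

In practice the fixed random features would be replaced by a trained network (or an ensemble, or stochastic-gradient MCMC over weights) so that the posterior adapts its representation to the data, which is where the flexibility of Bayesian neural network priors over Gaussian processes comes in.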