Sensible Priors for Sparse Bayesian Learning

2007 
Sparse Bayesian learning suffers from impractical, overconfident predictions, where the uncertainty tends to be maximal around the observations. We propose an alternative treatment that breaks the rigidity of the implied prior through decorrelation, and consequently gives reasonable and intuitive error bars. The attractive computational efficiency is retained; learning still leads to sparse solutions. An interesting by-product is the ability to model non-stationarity and input-dependent noise.

1 Sparse Bayesian learning

Finite linear regression models are attractive for computational reasons and because they are easily interpreted. In these models, the regression function is simply a weighted linear sum of M basis functions φ1(x), . . . , φM(x):

    f(x) = Σ_{m=1}^{M} w_m φ_m(x)
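As a concrete illustration of such a finite linear model, the sketch below evaluates a weighted sum of basis functions at test inputs. The Gaussian (RBF) basis, the centers, and the weight values are all illustrative assumptions, not choices prescribed by the paper:

```python
import numpy as np

def rbf_basis(x, centers, width=1.0):
    # Gaussian (RBF) basis functions phi_m(x) centered at `centers`;
    # the RBF form is an illustrative choice of basis.
    return np.exp(-0.5 * ((x[:, None] - centers[None, :]) / width) ** 2)

# Toy 1-D setup: M = 7 basis functions at hypothetical centers,
# with arbitrary example weights.
centers = np.linspace(-3.0, 3.0, 7)
weights = np.array([0.0, 0.5, -1.2, 2.0, -0.3, 0.0, 0.1])

def f(x):
    # Regression function: weighted linear sum of the M basis functions,
    # f(x) = sum_m w_m * phi_m(x).
    return rbf_basis(np.asarray(x, dtype=float), centers) @ weights

predictions = f(np.linspace(-4.0, 4.0, 9))
```

In sparse Bayesian learning, a prior over the weights drives most of them to zero during training, so only a few basis functions remain active at prediction time.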