On improvability of model selection by model averaging

2021 
Abstract In regression, model averaging (MA) provides an alternative to model selection (MS), and asymptotic efficiency theories have been derived for both. Basically, under sensible conditions, MS asymptotically achieves the smallest estimation loss/risk among the candidate models, and MA does so among averaged estimators built from the candidate models with convex weights. Clearly, MA can beat MS in rate of convergence by an arbitrarily large margin when all the candidate models have large biases that can be canceled out by an MA scheme. To our knowledge, however, a foundational issue has not been addressed in the literature: when there is no advantage to be gained from reducing approximation error, does MA offer any significant improvement over MS in regression estimation? In this paper, we answer this question in a nested-model setting that has often been used in frequentist MA research. A remarkable implication is that the much-celebrated asymptotic efficiency of MS (e.g., by AIC) does not necessarily justify the common interpretation that MS achieves the best possible performance. In a nutshell, the oracle model (i.e., the unknowable best model among all the candidates) can be significantly improved upon by MA under certain conditions. A simulation study supports the theoretical findings.
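To make the MS-versus-MA contrast concrete, the following is a minimal simulation sketch, not taken from the paper: the data-generating design, the AIC-based selector, and the smoothed-AIC convex weights are all illustrative assumptions standing in for the paper's actual weighting schemes. It fits a sequence of nested linear models, selects one by AIC (MS), and averages all of them with convex weights (MA), then compares the estimation losses.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 20
beta = 1.0 / (1 + np.arange(p)) ** 2            # slowly decaying true coefficients
X = rng.standard_normal((n, p))
mu = X @ beta                                   # true regression function
y = mu + rng.standard_normal(n)

# Fitted values from each nested model k = 1..p (first k regressors).
fits = np.column_stack([
    X[:, :k] @ np.linalg.lstsq(X[:, :k], y, rcond=None)[0]
    for k in range(1, p + 1)
])

# MS: pick the single nested model minimizing AIC (Gaussian likelihood form).
rss = ((y[:, None] - fits) ** 2).sum(axis=0)
aic = n * np.log(rss / n) + 2 * np.arange(1, p + 1)
ms_fit = fits[:, aic.argmin()]

# MA: convex weights over all candidates; smoothed-AIC weights are used here
# purely for illustration and may differ from the paper's optimal scheme.
w = np.exp(-(aic - aic.min()) / 2)
w /= w.sum()
ma_fit = fits @ w

print("MS loss:", np.mean((ms_fit - mu) ** 2))
print("MA loss:", np.mean((ma_fit - mu) ** 2))
```

Running this repeatedly over replications gives a feel for when averaging over nested candidates beats committing to a single selected model, which is the comparison the paper studies theoretically.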