Identifying outliers in Bayesian hierarchical models: a simulation-based approach
2007
A variety of simulation-based techniques have been proposed for detection of divergent
behaviour at each level of a hierarchical model. We investigate a diagnostic test based on
measuring the conflict between two independent sources of evidence regarding a parameter:
that arising from its predictive prior given the remainder of the data, and that arising
from its likelihood. This test gives rise to a $p$-value that exactly matches or closely
approximates a cross-validatory predictive comparison, and yet is more widely applicable.
Its properties are explored for normal hierarchical models and in an application in which
divergent surgical mortality was suspected. Since full cross-validation is so
computationally demanding, we examine full-data approximations which are shown to have
only moderate conservatism in normal models. A second example concerns criticism of a
complex growth curve model at both observation and parameter levels, and illustrates the
issue of dealing with multiple $p$-values within a Bayesian framework. We conclude with
the proposal of an overall strategy for detecting divergent behaviour in hierarchical
models.
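The conflict diagnostic described above can be illustrated with a minimal Monte Carlo sketch for a normal hierarchical model with known variances: draws for a unit's parameter from its predictive prior given the remaining data are compared with draws informed only by that unit's likelihood, and a two-sided tail probability of their difference serves as the conflict $p$-value. The model setup, prior settings, and function name below are illustrative assumptions, not the paper's code.

```python
import numpy as np

# Hypothetical normal hierarchical model (illustrative, known variances):
#   theta_i ~ N(mu, tau2),  y_i ~ N(theta_i, sigma2),  mu ~ N(mu0, v0).
# Conflict diagnostic for unit i: compare draws from the predictive prior
# for theta_i given y_{-i} with draws based only on the likelihood of y_i.

def conflict_pvalue(y, i, mu0=0.0, v0=1e6, tau2=1.0, sigma2=1.0,
                    n_sim=100_000, rng=None):
    rng = rng or np.random.default_rng()
    y_rest = np.delete(y, i)
    # Posterior for mu given the remaining data (conjugate normal update;
    # marginally y_j ~ N(mu, tau2 + sigma2)).
    prec = 1.0 / v0 + len(y_rest) / (tau2 + sigma2)
    post_var = 1.0 / prec
    post_mean = post_var * (mu0 / v0 + y_rest.sum() / (tau2 + sigma2))
    # Evidence 1: predictive prior for theta_i given y_{-i}.
    theta_pred = rng.normal(post_mean, np.sqrt(post_var + tau2), n_sim)
    # Evidence 2: draws for theta_i from the likelihood of y_i alone.
    theta_lik = rng.normal(y[i], np.sqrt(sigma2), n_sim)
    diff = theta_pred - theta_lik
    # Two-sided conflict p-value: small when the two sources disagree.
    p_one = (diff > 0).mean()
    return 2 * min(p_one, 1 - p_one)
```

For example, with `y = [0.1, -0.2, 0.3, 0.0, 10.0]` the diagnostic returns a very small $p$-value for the final observation and an unremarkable one for the others, flagging the divergent unit.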