SPOCC: Scalable POssibilistic Classifier Combination - toward robust aggregation of classifiers
2020
Abstract We investigate a problem in which each member of a group of learners is trained separately to solve the same classification task. Each learner has access to its own training dataset (possibly overlapping with those of other learners), but every trained classifier can be evaluated on a validation dataset. We propose a new approach to aggregating the learner predictions in the possibility theory framework. For each classifier prediction, we build a possibility distribution assessing how likely it is that the prediction is correct, using frequentist probabilities estimated on the validation set. The possibility distributions are aggregated using an adaptive t-norm that can accommodate dependency among, and poor accuracy of, the classifier predictions. We prove that the proposed approach possesses a number of desirable classifier combination robustness properties. Moreover, the method is agnostic to the base learners, scales well in the number of aggregated classifiers, and is incremental: a new classifier can be appended to the ensemble by building upon previously computed parameters and structures. A Python implementation can be downloaded at this link.
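The abstract outlines a pipeline: estimate, from the validation set, how likely each classifier's prediction is correct; turn those estimates into possibility distributions; and combine them with a t-norm. Below is a minimal illustrative sketch of that idea, not the authors' SPOCC implementation: it uses the standard Dubois-Prade probability-to-possibility transform and a fixed minimum t-norm in place of the paper's adaptive t-norm, and all function names are our own.

```python
import numpy as np

def prob_to_poss(p):
    """Dubois-Prade probability-to-possibility transform:
    pi_i = sum of all probability masses p_j with p_j <= p_i,
    so the most probable class gets possibility 1."""
    p = np.asarray(p, dtype=float)
    return np.array([p[p <= pi].sum() for pi in p])

def conditional_probs(conf_mat, predicted):
    """Frequentist estimate of P(true class | predicted class) from a
    validation confusion matrix (rows = true class, cols = predicted)."""
    col = conf_mat[:, predicted].astype(float)
    return col / col.sum()

def combine_predictions(conf_mats, predictions, tnorm=np.minimum):
    """Fuse one hard prediction per classifier: build a possibility
    distribution over true classes for each, aggregate with a t-norm,
    and return the class with maximal combined possibility."""
    poss = None
    for cm, pred in zip(conf_mats, predictions):
        pi = prob_to_poss(conditional_probs(cm, pred))
        poss = pi if poss is None else tnorm(poss, pi)
    return int(np.argmax(poss))
```

Because aggregation is a running t-norm over per-classifier distributions, appending a new classifier only requires combining its distribution with the stored result, which mirrors the incrementality claimed in the abstract.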