Discriminative Metric Learning for Partial Label Learning.

2021 
One simple strategy for dealing with ambiguity in partial label learning (PLL) is to treat all candidate labels equally as the ground-truth label and then solve the PLL problem with existing multiclass classification algorithms. However, because of the noisy false-positive labels in the candidate set, these approaches are easily misled and generalize poorly at test time. Consequently, methods that identify the ground-truth label directly from the candidate label set have become popular and effective. When the labeling information in PLL is ambiguous, we ought to exploit the data's underlying structure, such as label and feature interdependencies, to perform disambiguation. Furthermore, although metric learning is an excellent method for supervised classification that takes feature and label interdependencies into account, it cannot be applied directly to the weakly supervised PLL problem because of the ambiguity of the labeling information in the candidate label set. In this article, we propose an effective PLL paradigm called discriminative metric learning for partial label learning (DML-PLL), which learns a Mahalanobis distance metric discriminatively while iteratively identifying the ground-truth label for PLL. We also design an efficient algorithm that alternately optimizes the metric parameter and the latent ground-truth label. In addition, we prove the convergence of the designed algorithm via two proposed lemmas, and we analyze the computational complexity of DML-PLL in terms of per-iteration training and testing time. Extensive experiments on both controlled UCI datasets and real-world PLL datasets from diverse domains demonstrate that the proposed DML-PLL consistently outperforms the compared approaches in prediction accuracy.
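The alternating scheme the abstract describes (identify a ground-truth label for each instance under the current metric, then refit the metric to the identified labels) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's actual DML-PLL objective or update rules: the within-class-scatter whitening used as the metric step and the nearest-class-mean rule used as the label step are stand-ins chosen for the sketch, and the function name `dml_pll_sketch` is hypothetical.

```python
import numpy as np

def dml_pll_sketch(X, candidates, n_iter=10, reg=1e-3):
    """Toy alternating optimization for PLL with a Mahalanobis metric.

    X          : (n, d) feature matrix.
    candidates : (n, k) boolean mask; candidates[i, c] is True iff class c
                 is in instance i's candidate label set.
    Returns (y, M): disambiguated labels and the learned metric matrix M,
    where the distance is d(x, z) = (x - z)^T M (x - z).
    """
    n, d = X.shape
    k = candidates.shape[1]
    # Initialize each instance with an arbitrary candidate and the
    # Euclidean metric (M = I).
    y = np.array([np.flatnonzero(c)[0] for c in candidates])
    M = np.eye(d)
    for _ in range(n_iter):
        # Metric step: whiten the within-class scatter of the current label
        # assignment (empty classes get a far-away mean so they are ignored).
        means = np.stack([X[y == c].mean(axis=0) if np.any(y == c)
                          else np.full(d, 1e6) for c in range(k)])
        diffs = X - means[y]
        Sw = diffs.T @ diffs / n + reg * np.eye(d)  # regularized scatter
        M = np.linalg.inv(Sw)
        # Label step: each instance picks the candidate class whose mean is
        # nearest under the current Mahalanobis distance.
        for i in range(n):
            cand = np.flatnonzero(candidates[i])
            gaps = X[i] - means[cand]
            dists = np.einsum('cd,de,ce->c', gaps, M, gaps)
            y[i] = cand[np.argmin(dists)]
    return y, M
```

At test time, a new point would be assigned to the class whose mean is nearest under the final `M`. The sketch converges quickly on well-separated data because each step (metric refit, label reassignment) only uses the other step's current output, mirroring the alternating structure described in the abstract.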