Label Distribution Learning by Mining Local Label Correlations in Self-regulating Clusters Independent of Sample Distance

2021 
Label distribution learning (LDL) is a framework for addressing label ambiguity. To improve LDL performance, some existing algorithms exploit global and local label correlations. In practice, local correlations are more reasonable than global ones, because few correlations apply globally to all samples. Existing algorithms that exploit local correlations assume that the smaller the Euclidean distance between samples, the more likely the samples are to share the same label correlations. Specifically, they all use K-means to divide the samples into several clusters and then mine local correlations within each cluster based on this assumption. However, this assumption does not hold in some cases and may lead to inappropriate clustering results and biased correlations. In this paper, we propose a novel LDL algorithm that mines local label correlations in self-regulating clusters independent of sample distance. In particular, we introduce clustering with learnable parameters into the model so that the clustering is optimized jointly with the objective function rather than depending on the distance between samples. In this way, the proposed algorithm can mine more accurate local label correlations within these more appropriate clusters. Experimental results on 15 real-world datasets demonstrate the effectiveness of the proposed algorithm.
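To illustrate the core idea described in the abstract, the sketch below shows one plausible way that cluster assignments can be treated as learnable parameters optimized jointly with the label-distribution objective, rather than fixed in advance by K-means on sample distances. This is not the authors' implementation; the class, parameter names, dimensions, and the entropy regularizer are illustrative assumptions only.

```python
# Minimal sketch, assuming a linear LDL predictor, soft cluster assignments,
# and one learnable label-correlation matrix per cluster. All names and
# hyperparameters below are hypothetical, not taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class JointlyClusteredLDL(nn.Module):
    def __init__(self, n_samples, n_features, n_labels, n_clusters):
        super().__init__()
        # Linear model mapping features to (unnormalized) label distributions.
        self.W = nn.Linear(n_features, n_labels)
        # Learnable soft cluster-assignment logits, one row per training sample,
        # so the clustering itself is optimized by gradient descent.
        self.cluster_logits = nn.Parameter(torch.zeros(n_samples, n_clusters))
        # One label-correlation matrix per cluster, also learned jointly.
        self.corr = nn.Parameter(torch.stack([torch.eye(n_labels) for _ in range(n_clusters)]))

    def forward(self, X):
        # X is assumed to be the full training matrix, aligned row-wise
        # with self.cluster_logits.
        base = F.softmax(self.W(X), dim=1)              # base label distributions
        assign = F.softmax(self.cluster_logits, dim=1)  # soft cluster memberships
        # Apply each cluster's correlation matrix, weighted by membership.
        refined = torch.einsum('nk,klc,nc->nl', assign, self.corr, base)
        return F.softmax(refined, dim=1), assign

def loss_fn(pred, target, assign, entropy_weight=1e-2):
    # KL divergence between predicted and ground-truth label distributions,
    # plus a small entropy penalty encouraging confident cluster assignments.
    kl = F.kl_div(pred.clamp_min(1e-12).log(), target, reduction='batchmean')
    entropy = -(assign * assign.clamp_min(1e-12).log()).sum(dim=1).mean()
    return kl + entropy_weight * entropy

# Example usage on random data (shapes are arbitrary):
X = torch.randn(100, 20)
Y = F.softmax(torch.randn(100, 5), dim=1)
model = JointlyClusteredLDL(n_samples=100, n_features=20, n_labels=5, n_clusters=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    pred, assign = model(X)
    loss = loss_fn(pred, Y, assign)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the cluster memberships receive gradients from the same loss that fits the label distributions, samples are grouped by how well they share label correlations rather than by Euclidean proximity in feature space, which is the contrast with the K-means-based approaches discussed above.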