Multi-label transfer learning via latent graph alignment

2021 
Multi-label transfer learning, which aims to learn robust classifiers for a target domain by leveraging knowledge from a source domain, has received considerable attention recently. The core of such research is similarity measurement. Nevertheless, existing similarity measurement functions for probability distributions are too simple to fully describe the similarity between distributions. To address this problem, we propose Multi-label Transfer Learning via Latent Graph Alignment (G-MLTL). G-MLTL uses subspace learning to make the feature distribution of the target domain consistent with that of the source domain. At the same time, G-MLTL decomposes the label matrix so that data points sharing the same labels have identical latent semantic representations in the new reconstruction space. G-MLTL also directly uses latent graph alignment to guide the knowledge transfer process. Extensive experiments demonstrate that G-MLTL significantly outperforms existing multi-label transfer learning methods. In particular, when the number of labels exceeds four, the mean Average Precision of G-MLTL is 2.1-10.5% higher than that of the baseline algorithms.
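The abstract only outlines the three ingredients (subspace learning for distribution matching, label-matrix decomposition for shared latent semantics, and a graph that guides the transfer), not their exact formulation. The following is a minimal, hypothetical Python sketch of that kind of pipeline; the specific choices here (PCA as the shared subspace, a linear-kernel MMD check, multiplicative NMF updates, and a kNN-graph Laplacian penalty) are illustrative assumptions, not the paper's actual G-MLTL objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: source/target features and a multi-hot label matrix for the source.
Xs = rng.normal(size=(100, 20))                       # source features
Xt = rng.normal(loc=0.5, size=(80, 20))               # target features (shifted)
Ys = (rng.random(size=(100, 5)) > 0.7).astype(float)  # source multi-label matrix

def mmd(A, B):
    """Squared maximum mean discrepancy with a linear kernel (assumed metric)."""
    return np.sum((A.mean(axis=0) - B.mean(axis=0)) ** 2)

# (1) Shared subspace: PCA on the pooled data as a stand-in for subspace learning
#     that brings source and target feature distributions closer together.
X = np.vstack([Xs, Xt])
_, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
P = Vt[:10].T                      # projection onto a 10-dim shared subspace
Zs, Zt = Xs @ P, Xt @ P
print("MMD before/after projection:", mmd(Xs, Xt), mmd(Zs, Zt))

# (2) Label-matrix decomposition: Ys ~ U @ V, so samples sharing labels end up
#     with similar rows of U (their latent semantic representation).
k = 4
U = rng.random((Ys.shape[0], k))
V = rng.random((k, Ys.shape[1]))
for _ in range(200):               # plain multiplicative NMF updates
    U *= (Ys @ V.T) / (U @ V @ V.T + 1e-9)
    V *= (U.T @ Ys) / (U.T @ U @ V + 1e-9)

# (3) Graph alignment: a kNN graph W over the latent codes; the Laplacian term
#     trace(Z^T L Z) penalizes projections that violate that graph structure.
def knn_graph(Z, k=5):
    D = np.linalg.norm(Z[:, None] - Z[None, :], axis=-1)
    W = np.zeros_like(D)
    for i in range(len(Z)):
        W[i, np.argsort(D[i])[1:k + 1]] = 1.0
    return np.maximum(W, W.T)      # symmetrize the adjacency matrix

W = knn_graph(U)
L = np.diag(W.sum(axis=1)) - W
alignment_penalty = np.trace(Zs.T @ L @ Zs)
print("Graph-alignment penalty on projected source features:", alignment_penalty)
```

In the actual method these three terms would be optimized jointly rather than computed in sequence; the sketch only shows how a latent-code graph can supply a regularizer that couples the subspace projection to the decomposed label structure.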