Laplacian Welsch Regularization for Robust Semi-supervised Dictionary Learning

2019 
Semi-supervised dictionary learning aims to find a suitable dictionary from limited labeled examples and massive unlabeled examples, so that any input can be sparsely reconstructed by the dictionary atoms. However, existing algorithms suffer from large reconstruction errors in the presence of outliers. To enhance the robustness of existing methods, this paper introduces an upper-bounded, smooth, and nonconvex Welsch loss, which constrains the adverse effect of outliers. In addition, we adopt a Laplacian regularizer to encourage similar examples to share similar reconstruction coefficients. By combining the Laplacian regularizer and the Welsch loss in a unified framework, we propose a novel semi-supervised dictionary learning algorithm termed "Laplacian Welsch Regularization" (LWR). To handle the model nonconvexity caused by the Welsch loss, we adopt the Half-Quadratic (HQ) optimization algorithm to solve the model efficiently. Experimental results on various real-world datasets show that LWR is robust to outliers and achieves top-level results compared with existing algorithms.
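
As a minimal illustration of the two ingredients the abstract names, the NumPy sketch below implements the standard Welsch loss, the half-quadratic weights it induces, and a graph Laplacian regularization term. The kernel width sigma, the residual array r, the affinity matrix W, and the coefficient matrix S are illustrative assumptions based on the common formulations of these tools, not notation taken from the paper itself.

```python
import numpy as np

def welsch_loss(r, sigma=1.0):
    """Welsch loss: smooth, nonconvex, and upper-bounded by sigma^2 / 2.
    Large residuals saturate, so an outlier contributes a bounded penalty
    instead of a quadratically growing one."""
    return (sigma**2 / 2.0) * (1.0 - np.exp(-(r**2) / sigma**2))

def hq_weights(r, sigma=1.0):
    """Half-quadratic auxiliary weights for the Welsch loss.
    With these weights held fixed, minimizing sum_i w_i * r_i^2 is a
    weighted least-squares problem, which is how HQ optimization handles
    the nonconvexity: alternate between updating weights and solving the
    resulting quadratic subproblem."""
    return np.exp(-(r**2) / sigma**2)

def laplacian_reg(S, W):
    """Graph Laplacian regularizer tr(S L S^T) for a coefficient matrix S
    (atoms x examples) and an example-affinity matrix W. It equals
    0.5 * sum_{ij} W_ij * ||s_i - s_j||^2, pushing similar examples toward
    similar reconstruction coefficients."""
    L = np.diag(W.sum(axis=1)) - W  # unnormalized graph Laplacian
    return np.trace(S @ L @ S.T)

# Small usage example: the last residual mimics an outlier.
residuals = np.array([0.1, 0.5, 5.0])
print(welsch_loss(residuals))  # outlier's penalty saturates near sigma^2/2
print(hq_weights(residuals))   # outlier receives a weight close to 0
```

The weight function shows why the Welsch loss is robust: points with large residuals are automatically down-weighted in each HQ iteration, so outliers barely influence the dictionary update.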