Unified Sparse Subspace Learning via Self-Contained Regression

2018 
In order to improve the interpretability of principal components, many sparse principal component analysis (PCA) methods have been proposed in the form of self-contained regression. In this paper, we generalize the steps needed to move from PCA-like methods to their self-contained regression-types, and propose a joint sparse pixel-weighted PCA method. More specifically, we generalize a self-contained regression-type framework of graph embedding. Unlike the regression-type of graph embedding, which relies on the regular low-dimensional data, the self-contained regression-type framework does not rely on it; the low-dimensional data learned in the self-contained regression form theoretically approximates the regular low-dimensional data. Under this self-contained regression-type, a sparse regularization term can be added freely, and hence the learned sparse regression coefficients can interpret the low-dimensional data. By using the joint sparse $\ell_{2,1}$-norm regularizer, a sparse self-contained regression-type of pixel-weighted PCA is produced. Experiments on six data sets demonstrate that the proposed method is both feasible and effective.
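The joint sparse $\ell_{2,1}$-norm regularizer mentioned in the abstract sums the $\ell_2$ norms of the rows of a coefficient matrix, which drives entire rows to zero and thus selects features jointly across all components. A minimal NumPy sketch of this norm and its standard proximal (row-wise soft-thresholding) operator is given below; the function names are illustrative and this is not the paper's actual implementation.

```python
import numpy as np

def l21_norm(W):
    # ||W||_{2,1}: sum of the l2 norms of the rows of W.
    # Penalizing this encourages whole rows of W to become zero,
    # i.e. joint (row-wise) sparsity shared across all components.
    return float(np.sum(np.linalg.norm(W, axis=1)))

def prox_l21(W, t):
    # Proximal operator of t * ||W||_{2,1}: shrink each row's l2 norm
    # by t, zeroing rows whose norm falls below t. This is the update
    # typically used inside proximal-gradient solvers for such models.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return W * scale
```

Rows whose $\ell_2$ norm is below the threshold `t` are removed entirely, which is what makes the learned regression coefficients interpretable: the surviving rows indicate the input features (e.g. pixels) that matter for all components at once.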