Regularized Learning of Neural Network with Application to Sparse PCA.

2019 
The paper presents an implementation of a regularized two-layer neural network and its application to finding sparse principal components. The main part of the paper concerns the learning of the sparse regularized neural network and its use as an autoencoder. The learning of the neural network, originally a non-convex optimization problem, is reduced to a convex optimization problem with constraints in an extended domain. This approach is compared with the dictionary-learning procedure. The experimental part compares our implementation with the SparsePCA procedure from the Scikit-learn package on several data sets. Solution quality is assessed in terms of learning time, sparsity, and reconstruction quality. The experiments show that our approach can be competitive when higher sparsity is needed and when the number of attributes is large relative to the number of instances.
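To make the setting concrete, the following is a minimal sketch, not the paper's method: a linear two-layer autoencoder with an L1 penalty on the encoder weights, trained by plain (sub)gradient descent. The paper instead reformulates the non-convex learning problem as a constrained convex one; this toy version, with assumed sizes and hyperparameters (`n`, `d`, `k`, `lam`, `lr`), only illustrates how the sparsity regularizer drives encoder weights toward zero.

```python
import numpy as np

# Toy sketch (illustrative only): linear autoencoder x -> W2 @ (W1 @ x)
# minimizing (1/2n)||X W1^T W2^T - X||^2 + lam * ||W1||_1.
rng = np.random.default_rng(0)
n, d, k = 200, 20, 5                     # instances, attributes, components (assumed)
X = rng.standard_normal((n, d))

W1 = 0.1 * rng.standard_normal((k, d))   # encoder weights (to be sparsified)
W2 = 0.1 * rng.standard_normal((d, k))   # decoder weights
lam, lr = 0.1, 0.01                      # assumed regularization strength and step size

for _ in range(500):
    Z = X @ W1.T                         # codes, shape (n, k)
    R = Z @ W2.T - X                     # reconstruction residual, shape (n, d)
    g2 = R.T @ Z / n                     # gradient of squared loss w.r.t. W2
    g1 = (R @ W2).T @ X / n              # gradient of squared loss w.r.t. W1
    W2 -= lr * g2
    W1 -= lr * (g1 + lam * np.sign(W1))  # L1 subgradient step on the encoder

# Fraction of (near-)zero encoder weights: a simple sparsity measure,
# one of the quality criteria the paper tracks alongside learning time
# and reconstruction error.
sparsity = float(np.mean(np.abs(W1) < 1e-2))
```

Subgradient descent is the simplest choice here; proximal (soft-thresholding) updates would produce exact zeros and are closer in spirit to sparse-coding solvers such as Scikit-learn's SparsePCA.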