Generalized Symmetric Nonnegative Latent Factor Analysis for Large-scale Undirected Weighted Networks

2021 
Big-data-related applications frequently concern how to analyze large-scale undirected weighted networks effectively. Owing to its sparsity and symmetry, such a network can be quantized into a Symmetric, High-Dimensional and Sparse (SHiDS) matrix. To handle these characteristics of an SHiDS matrix with care, a symmetric non-negative latent factor (SNLF) model has been proposed. However, the representation learning ability of an SNLF model is limited by its commonly adopted learning objective, i.e., Euclidean distance. To address this issue, this study proposes a generalized symmetric nonnegative latent factor analysis (GSNL) model. Its main idea is two-fold: a) adopting the $\alpha$-$\beta$-divergence to generalize SNLF's learning objective, thereby achieving accurate representation of an SHiDS matrix; and b) adopting a self-adaptive scheme for all hyperparameters introduced by the resultant model, thereby ensuring strong practicability. Empirical studies on four SHiDS matrices demonstrate that a GSNL model outperforms its peers in terms of accuracy and computational efficiency.
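The abstract does not spell out the generalized objective. As a minimal sketch, assuming the standard $\alpha$-$\beta$-divergence of Cichocki et al. and a symmetric factorization $\hat{Y} = XX^{\mathsf{T}}$ with nonnegative latent factors $X$, evaluated only on the observed entry set $\Lambda$ (the symbols $X$, $\hat{Y}$, and $\Lambda$ are introduced here for illustration), the learning objective would take the form

$$
\min_{X \ge 0} \; \sum_{(i,j) \in \Lambda} d_{\alpha,\beta}\!\left(y_{ij} \,\big\|\, \hat{y}_{ij}\right), \qquad \hat{Y} = X X^{\mathsf{T}},
$$

where, for $\alpha, \beta, \alpha+\beta \neq 0$,

$$
d_{\alpha,\beta}\!\left(y \,\big\|\, \hat{y}\right) = -\frac{1}{\alpha\beta}\left( y^{\alpha}\hat{y}^{\beta} - \frac{\alpha}{\alpha+\beta}\, y^{\alpha+\beta} - \frac{\beta}{\alpha+\beta}\, \hat{y}^{\alpha+\beta} \right).
$$

Setting $\alpha = \beta = 1$ gives $d_{1,1}(y \,\|\, \hat{y}) = \tfrac{1}{2}(y - \hat{y})^{2}$, recovering the Euclidean-distance objective of the original SNLF model, which is consistent with the generalization claim above.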