Unifying attentive sparse autoencoder with neural collaborative filtering for recommendation

2022 
The autoencoder network has proven to be one of the most powerful techniques for recommender systems. Current ways of utilizing autoencoders in recommender systems fall into two categories: modeling user-item interactions with the autoencoder alone, and integrating the autoencoder with other models. Most existing autoencoder-based methods assume that all features of the model's input contribute equally to the final prediction, where these contributions can be regarded as an attention weight vector; however, this assumption is not reliable, especially when users' interaction frequencies with different items are taken into account. Moreover, when combining the autoencoder with traditional methods, the usual strategy is to predict user preferences with a linear kernel, i.e., the inner product of user and item vectors, which limits the model's expressive power and hurts recommendation performance under data sparsity and cold-start conditions. To tackle these two problems, we propose a novel hybrid deep learning model for top-N recommendation, called the attentive stacked sparse autoencoder (A-SAERec), which captures a user's attention weights over items and combines them with neural matrix factorization to improve the performance of the recommender model. Extensive experiments on four real-world datasets show that our A-SAERec algorithm achieves significant improvements over state-of-the-art algorithms.
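To make the described architecture concrete, the following is a minimal PyTorch sketch of the two ideas in the abstract: an attention gate that re-weights the user's item interaction vector before the stacked sparse autoencoder, and an MLP-based neural matrix factorization branch replacing the plain inner product. The layer sizes, the softmax attention formulation, the fusion strategy, and all names (AttentiveSparseAutoencoder, ASAERec) are illustrative assumptions, not the authors' exact implementation; the sparsity penalty would be added in the training loss and is not shown.

```python
import torch
import torch.nn as nn


class AttentiveSparseAutoencoder(nn.Module):
    """Stacked sparse autoencoder with an item-level attention gate on its input."""

    def __init__(self, num_items, hidden_dims=(256, 64)):
        super().__init__()
        # Attention network: scores each item slot of the user's interaction vector,
        # instead of treating all items as equally important.
        self.attention = nn.Sequential(nn.Linear(num_items, num_items), nn.Softmax(dim=-1))
        dims = (num_items,) + tuple(hidden_dims)
        self.encoder = nn.Sequential(
            *[layer for i in range(len(dims) - 1)
              for layer in (nn.Linear(dims[i], dims[i + 1]), nn.ReLU())]
        )
        self.decoder = nn.Sequential(
            *[layer for i in range(len(dims) - 1, 0, -1)
              for layer in (nn.Linear(dims[i], dims[i - 1]), nn.ReLU())]
        )

    def forward(self, user_ratings):
        # Re-weight the input by the learned attention weights.
        attended = user_ratings * self.attention(user_ratings)
        z = self.encoder(attended)   # compact user representation
        recon = self.decoder(z)      # reconstruction (sparsity term goes in the loss)
        return z, recon


class ASAERec(nn.Module):
    """Fuses the autoencoder's user code with an item embedding via an MLP,
    replacing the linear inner-product kernel."""

    def __init__(self, num_items, emb_dim=64):
        super().__init__()
        self.sae = AttentiveSparseAutoencoder(num_items, hidden_dims=(256, emb_dim))
        self.item_emb = nn.Embedding(num_items, emb_dim)
        self.mlp = nn.Sequential(nn.Linear(2 * emb_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, user_ratings, item_ids):
        user_vec, recon = self.sae(user_ratings)
        item_vec = self.item_emb(item_ids)
        # Nonlinear user-item interaction instead of a plain inner product.
        score = self.mlp(torch.cat([user_vec, item_vec], dim=-1)).squeeze(-1)
        return torch.sigmoid(score), recon
```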