An effective recommendation model based on deep representation learning

2021 
Abstract Recommender systems have recently attracted considerable attention in the information service community. Most current recommendation models use deep neural networks to learn user preferences for items and make the final recommendations. However, these models do not effectively capture the deep semantic features of users and items, and do not fully exploit the auxiliary information of the items; this may lead to unsatisfactory item recommendations for users. To solve this problem, this paper proposes a novel recommendation model called RM-DRL (Recommendation Model based on Deep Representation Learning). It consists of two modules: Information Preprocessing and Feature Representation. The former generates the primitive feature vectors of the users and items used by the latter. The latter comprises two phases: Representation Learning for Item Features (RL-IF) and Representation Learning for User Features (RL-UF). RL-IF takes the primitive feature vectors of an item as input and uses a multi-layer Convolutional Neural Network (CNN), trained with multi-task learning, to produce an accurate semantic feature vector for the item. In RL-UF, the user's primitive feature vectors and the semantic feature vectors of the user's preference history, together with positive and negative items, are taken as input, and a novel Attention-Integrated Gated Recurrent Unit (AIGRU) neural network is proposed to produce an accurate semantic feature vector for the user. Once the Feature Representation module converges, the semantic feature vectors of the users and items can be used to compute a user's preference for an item via the vector dot product. Extensive experiments on five real-world datasets show that RM-DRL remarkably outperforms state-of-the-art baselines on the recommendation problem.
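The abstract's core scoring idea — run an attention-gated recurrent unit over the user's item history conditioned on a candidate item, then score the pair with a dot product — can be sketched as follows. This is a minimal numpy illustration, not the paper's implementation: the weight shapes, the choice of scaling the update gate by the attention score, and the dot-product attention function are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding dimension (illustrative)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class AIGRUCell:
    """Hypothetical Attention-Integrated GRU cell.

    The attention score of each history item against the candidate
    item scales the update gate, so history items irrelevant to the
    candidate barely change the hidden state."""

    def __init__(self, d, rng):
        # Illustrative random weights; a trained model would learn these.
        self.Wz = rng.normal(0, 0.1, (d, 2 * d))  # update gate
        self.Wr = rng.normal(0, 0.1, (d, 2 * d))  # reset gate
        self.Wh = rng.normal(0, 0.1, (d, 2 * d))  # candidate state

    def step(self, h, x, a):
        hx = np.concatenate([h, x])
        z = a * sigmoid(self.Wz @ hx)  # attention-scaled update gate
        r = sigmoid(self.Wr @ hx)
        h_tilde = np.tanh(self.Wh @ np.concatenate([r * h, x]))
        return (1 - z) * h + z * h_tilde

def attention(candidate, item):
    # Assumed dot-product attention between candidate and history item.
    return sigmoid(candidate @ item)

def user_vector(cell, history, candidate):
    # Fold the item history into a user semantic feature vector.
    h = np.zeros(len(candidate))
    for x in history:
        h = cell.step(h, x, attention(candidate, x))
    return h

# Toy run: 5 history-item vectors and one candidate-item vector.
history = rng.normal(size=(5, d))
candidate = rng.normal(size=d)
cell = AIGRUCell(d, rng)
u = user_vector(cell, history, candidate)
score = u @ candidate  # preference via vector dot product, as in the abstract
print(u.shape, float(score))
```

In this sketch the candidate item participates twice, matching the abstract's description: once through attention while building the user vector, and once in the final dot-product score.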