From edge data to recommendation: A double attention-based deformable convolutional network

2021 
Recommender systems (RSs) have become a crucial component of most web-scale applications, yet data sparsity remains one of their most serious problems. Recently, data sparsity has been substantially alleviated by exploiting the informative data generated by edge-device applications together with deep learning techniques that offer powerful data-processing capabilities. When rating data are extremely sparse, the rich semantic information in reviews and the strong feature-extraction ability of convolutional neural networks (CNNs) contribute greatly to improving recommendation performance. However, owing to the complexity of natural-language semantics, the words of a phrase in a review are often separated by other words. For such semantic information with variable intervals, the fixed geometric structure of a CNN may therefore lead to an insufficient understanding of user intent. Moreover, reviews differ in usefulness and the words within a review differ in importance, and capturing both is vital for accurate modeling. In this paper, we propose a Double Attention-based Deformable Convolutional Network (DADCN) for recommendation. In the proposed DADCN, two parallel deformable convolutional networks, equipped with word-level and review-level attention mechanisms, flexibly extract user and item features from reviews. The parallel networks jointly learn user preferences and item attributes, which deepens the model's understanding of users' attitudes. The word-level and review-level attention mechanisms emphasize critical words and informative reviews by assigning them relatively high attention weights. Extensive experiments on four real-world datasets demonstrate that the proposed DADCN outperforms four baseline methods.
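The deformable sampling idea behind DADCN can be illustrated with a minimal sketch: a 1-D convolution over word embeddings whose kernel taps are shifted by fractional offsets, with linear interpolation between neighboring words. This is an illustrative numpy toy under assumed shapes, not the authors' implementation; in the paper the offsets are learned and the network additionally applies word-level and review-level attention.

```python
import numpy as np

def deformable_conv1d(x, kernel, offsets):
    """1-D deformable convolution over a word-embedding sequence.

    x:       (T, d) sequence of word embeddings
    kernel:  (K, d) convolution filter
    offsets: (T, K) fractional shifts of each sampling position
             (learned by the network in the actual model)
    Returns a (T,) feature map; out-of-range samples are zero-padded.
    """
    T, d = x.shape
    K = kernel.shape[0]
    out = np.zeros(T)
    for t in range(T):
        for k in range(K):
            # regular grid position plus a fractional offset
            pos = t + k - K // 2 + offsets[t, k]
            lo = int(np.floor(pos))
            frac = pos - lo
            lo_v = x[lo] if 0 <= lo < T else np.zeros(d)
            hi_v = x[lo + 1] if 0 <= lo + 1 < T else np.zeros(d)
            # linear interpolation at the fractional sampling point,
            # so the filter can "reach" between separated words
            sample = (1 - frac) * lo_v + frac * hi_v
            out[t] += kernel[k] @ sample
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 4))        # six words, 4-dim embeddings (toy sizes)
kernel = rng.normal(size=(3, 4))   # kernel width 3
regular = deformable_conv1d(x, kernel, np.zeros((6, 3)))         # zero offsets: ordinary conv
shifted = deformable_conv1d(x, kernel, 0.5 * np.ones((6, 3)))    # samples between words
```

With all offsets zero the operation reduces to an ordinary zero-padded convolution; non-zero offsets let each filter tap sample at positions between words, which is how a deformable CNN can cover phrases whose parts are separated by other words.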