Hierarchical User Intent Graph Network for Multimedia Recommendation

2021 
Understanding user preference over item content is the key to high-quality multimedia recommendation. Typically, pre-existing item features are derived from pre-trained models (e.g., visual features of micro-videos extracted from neural networks) and then fed into a recommendation framework (e.g., collaborative filtering) to capture user preference. However, we argue that such a paradigm is insufficient to produce satisfactory user representations, which hardly profile personal interests well. The key reason is that existing works largely leave user intents untouched and thus fail to encode this informative signal into user representations. In this work, we aim to learn multi-level user intents from the co-interaction patterns of items, so as to obtain high-quality representations of users and items and further enhance recommendation performance. Towards this end, we develop a novel framework, named Hierarchical User Intent Graph Network, which organizes user intents in a hierarchical graph structure, from fine-grained to coarse-grained intents. In particular, we obtain the multi-level user intents by recursively performing two operations: 1) intra-level aggregation, which distills the signal pertinent to user intents from co-interacted item graphs; and 2) inter-level aggregation, which constructs supernodes at higher levels to model coarser-grained user intents by gathering node representations from the lower levels. We then refine the user and item representations as distributions over the discovered intents, instead of relying on simple pre-existing features. To demonstrate the effectiveness of our model, we conducted extensive experiments on three public datasets. Our model achieves significant improvements over state-of-the-art methods, including MMGCN and DisenGCN. Furthermore, by visualizing the item representations, we illustrate the semantics of the discovered user intents.
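The abstract describes the two recursive operations only at a high level. The following PyTorch snippet is a minimal illustrative sketch of how intra-level aggregation (message passing over a co-interacted item graph) and inter-level aggregation (pooling nodes into coarser-grained supernodes) could be composed across levels. The class names, the GCN-style propagation, the soft-assignment pooling, and the level sizes are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class IntraLevelAggregation(nn.Module):
    """Distill intent-relevant signals within one level of the
    co-interacted item graph (assumed GCN-style propagation)."""

    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, node_emb, adj):
        # adj: (N, N) normalized adjacency of the co-interacted item graph
        neighbor_msg = torch.matmul(adj, node_emb)  # gather neighbor signals
        return F.relu(self.linear(node_emb + neighbor_msg))


class InterLevelAggregation(nn.Module):
    """Pool lower-level nodes into higher-level supernodes via a learned
    soft assignment (an assumption; the paper may use a different pooling)."""

    def __init__(self, dim, num_supernodes):
        super().__init__()
        self.assign = nn.Linear(dim, num_supernodes)

    def forward(self, node_emb, adj):
        s = torch.softmax(self.assign(node_emb), dim=-1)  # (N, K) soft assignment
        super_emb = s.t() @ node_emb                      # (K, d) supernode features
        super_adj = s.t() @ adj @ s                       # (K, K) coarsened graph
        return super_emb, super_adj


class HierarchicalIntentSketch(nn.Module):
    """Recursively alternate intra- and inter-level aggregation to build
    fine-to-coarse intent representations (a minimal sketch)."""

    def __init__(self, dim, level_sizes):
        super().__init__()
        self.intra = nn.ModuleList(IntraLevelAggregation(dim) for _ in level_sizes)
        self.inter = nn.ModuleList(InterLevelAggregation(dim, k) for k in level_sizes)

    def forward(self, item_emb, adj):
        levels = []
        x, a = item_emb, adj
        for intra, inter in zip(self.intra, self.inter):
            x = intra(x, a)      # refine nodes within the current level
            x, a = inter(x, a)   # pool into coarser-grained supernodes
            levels.append(x)
        return levels            # multi-level intent representations


# Hypothetical usage: 100 items with 64-d features, two intent levels.
items = torch.randn(100, 64)
adj = torch.rand(100, 100)
intent_levels = HierarchicalIntentSketch(64, level_sizes=[32, 8])(items, adj)
```

In this sketch, user and item representations would then be expressed as distributions over the supernodes at each level, mirroring the paper's idea of profiling users by discovered intents rather than raw pre-existing features.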