Review helpfulness evaluation and recommendation based on an attention model of customer expectation

2021 
With the fast growth of e-commerce, more people choose to purchase products online and browse reviews before making decisions. Given the typically large number of reviews and the wide range in their quality, it is essential to identify helpful ones. In this paper, we aim to build a model that predicts review helpfulness automatically. Our work is inspired by the observation that a customer’s expectation of a review can be greatly affected by the review’s sentiment and by the degree to which the customer is aware of pertinent product information. Consequently, a customer may pay more attention to the specific content of a review that contributes most to its helpfulness from their perspective. To model such customer expectations and capture important information from review text, we propose a novel neural network that leverages review sentiment and product information. Specifically, we encode the sentiment of a review through an attention module to extract sentiment-driven information from the review text. We also introduce a product attention layer that fuses information from both the target product and related products, in order to capture product-related information from the review text. On the task of identifying whether a review is helpful, our experimental results show AUC improvements of 5.4% and 1.5% over the previous state-of-the-art model on the Amazon and Yelp data sets, respectively. We further validate the effectiveness of each attention layer of our model in two application scenarios. The results demonstrate that both attention layers contribute to model performance and that combining them has a synergistic effect. We also evaluate our model as a recommender system using three commonly used metrics: NDCG@10, Precision@10, and Recall@10. Our model outperforms PRH-Net, a state-of-the-art baseline, on all three metrics.
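The abstract describes two attention layers that weight review words by a sentiment signal and by fused product information. The following is a minimal NumPy sketch of that general idea, not the paper's actual architecture: all variable names, dimensions, and the dot-product scoring function are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, word_states):
    # Score each encoded word against the query vector (dot product is an
    # assumption; the paper's scoring function may differ), then return the
    # attention-weighted sum of word states as a context vector.
    scores = word_states @ query        # shape (T,)
    weights = softmax(scores)           # attention distribution over words
    return weights @ word_states        # context vector, shape (d,)

rng = np.random.default_rng(0)
T, d = 6, 8                             # toy sizes: 6 words, 8-dim states
H = rng.normal(size=(T, d))             # encoded review words (hypothetical)
s = rng.normal(size=d)                  # review sentiment embedding
p_target = rng.normal(size=d)           # target-product embedding
p_related = rng.normal(size=d)          # related-products embedding
p = 0.5 * (p_target + p_related)        # simple fusion (averaging is assumed)

sentiment_ctx = attend(s, H)            # sentiment-driven review content
product_ctx = attend(p, H)              # product-related review content
review_repr = np.concatenate([sentiment_ctx, product_ctx])
print(review_repr.shape)                # combined representation for a classifier
```

In a full model, `review_repr` would feed a classification head that outputs a helpfulness score; here it simply shows how the two attention queries extract complementary views of the same review text.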