Multi-Modal Fake News Detection on Social Media with Dual Attention Fusion Networks

2021 
Most existing work on fake news detection for social media focuses mainly on text. However, more and more social media platforms, such as Twitter and Facebook, allow users to create multi-modal content that includes text, images, and video. It is therefore clear that investigating text alone is insufficient for robust detection. In this paper, we study fake news on social media composed of multi-modal content (text and images) and propose a Dual Attention Fusion Networks (DAFN) model for fake news detection. We explore three modalities: the text modality, the image modality, and the image-attributes modality. First, our model extracts features from the text modality and the image modality separately. We then pass combinations of the image-attributes modality and the text modality through BERT to extract text features. Finally, we reconstruct the features of the three modalities and fuse them into a single feature vector for prediction. Our method is evaluated on real-world datasets collected from social media platforms. Experiments show that it achieves promising results, outperforming all baseline models.
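To make the described pipeline concrete, the following is a minimal sketch of a text/image fusion model with cross-modal attention, in the spirit of the abstract. It is not the authors' released DAFN code: the BERT checkpoint, the ResNet-50 image backbone, the projection sizes, and the single-head attention layers are all assumptions made for illustration; the text input is assumed to already contain the image-attribute words appended to the post text before tokenization.

```python
import torch
import torch.nn as nn
from transformers import BertModel
from torchvision.models import resnet50


class DualAttentionFusionSketch(nn.Module):
    """Illustrative sketch only: text + image fusion with dual cross-modal attention.
    Layer choices (ResNet-50, 256-d projections, 1 attention head) are assumptions,
    not the paper's reported architecture."""

    def __init__(self, hidden_dim: int = 256):
        super().__init__()
        # Text branch: BERT encodes the post text combined with image-attribute words.
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # Image branch: a CNN backbone (assumed here) provides visual features.
        backbone = resnet50(weights="IMAGENET1K_V1")
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # drop classifier head
        # Project both modalities into a shared space before fusion.
        self.text_proj = nn.Linear(768, hidden_dim)
        self.img_proj = nn.Linear(2048, hidden_dim)
        # Dual attention: each modality attends to the other.
        self.text_to_img = nn.MultiheadAttention(hidden_dim, num_heads=1, batch_first=True)
        self.img_to_text = nn.MultiheadAttention(hidden_dim, num_heads=1, batch_first=True)
        # Fused feature vector -> real/fake prediction.
        self.classifier = nn.Linear(2 * hidden_dim, 2)

    def forward(self, input_ids, attention_mask, image):
        # (batch, seq_len, hidden): token-level text features.
        text_seq = self.text_proj(
            self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        )
        # (batch, 1, hidden): a single pooled visual feature per image.
        img_feat = self.img_proj(self.cnn(image).flatten(1)).unsqueeze(1)
        # Cross-modal attention in both directions, then pool and concatenate.
        t2i, _ = self.text_to_img(text_seq, img_feat, img_feat)
        i2t, _ = self.img_to_text(img_feat, text_seq, text_seq)
        fused = torch.cat([t2i.mean(dim=1), i2t.squeeze(1)], dim=-1)
        return self.classifier(fused)
```

A caller would tokenize the post text (with image-attribute words appended) using the matching BERT tokenizer and pass a normalized 224x224 image tensor; the output logits would feed a standard cross-entropy loss over the real/fake labels.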