Modality and Event Adversarial Networks for Multi-Modal Fake News Detection

2022 
With the popularity of news on social media, fake news has become an important issue for the public and governments. Existing fake news detection methods focus on exploring and exploiting information from multiple modalities, e.g., text and image. However, effectively learning discriminant features that are both modality-invariant and event-invariant remains a challenge. In this paper, we propose a novel approach named Modality and Event Adversarial Networks (MEAN) for fake news detection. It contains two parts: a multi-modal generator and a dual discriminator. The multi-modal generator extracts latent discriminant feature representations of the text and image modalities, and a decoder is adopted for each modality to reduce information loss in the generation process. The dual discriminator includes a modality discriminator and an event discriminator; each learns to classify the modality or event of the features, and network training is guided by an adversarial scheme. Experiments on two widely used datasets show that MEAN outperforms state-of-the-art multi-modal fake news detection methods.
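The generator/decoder/discriminator pipeline described above can be illustrated with a minimal shape-level sketch. This is not the paper's implementation: all dimensions, weight matrices, and function names below are hypothetical, the "networks" are single random linear maps, and no training loop is shown. It only shows how the per-modality encoders feed a shared latent space, how the decoders give a reconstruction loss, and how a modality discriminator produces the loss that the adversarial scheme would push against.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Toy generator branch: project one modality into the shared latent space."""
    return np.tanh(x @ W)

def decode(z, W):
    """Toy decoder: map the latent feature back toward the modality input,
    giving a reconstruction loss that limits information loss."""
    return z @ W

def discriminate(z, w):
    """Toy modality discriminator: logistic score for 'this feature is an
    image feature' (1) vs 'text feature' (0)."""
    return 1.0 / (1.0 + np.exp(-(z @ w)))

# Hypothetical dimensions: 32-dim text features, 48-dim image features,
# 16-dim shared latent space, batch of 4 posts.
d_text, d_img, d_lat, batch = 32, 48, 16, 4
W_t = rng.normal(size=(d_text, d_lat))   # text encoder weights
W_i = rng.normal(size=(d_img, d_lat))    # image encoder weights
D_t = rng.normal(size=(d_lat, d_text))   # text decoder weights
D_i = rng.normal(size=(d_lat, d_img))    # image decoder weights
w_m = rng.normal(size=d_lat)             # modality discriminator weights

text = rng.normal(size=(batch, d_text))
image = rng.normal(size=(batch, d_img))

z_t, z_i = encode(text, W_t), encode(image, W_i)
fused = np.concatenate([z_t, z_i], axis=1)   # multi-modal feature for the detector

# Reconstruction loss from the per-modality decoders.
rec_loss = (np.mean((decode(z_t, D_t) - text) ** 2)
            + np.mean((decode(z_i, D_i) - image) ** 2))

# Modality-discriminator loss (binary cross-entropy: text -> 0, image -> 1).
# Adversarial training would update the encoders to *maximize* this loss so the
# latent features become modality-invariant; the event discriminator is analogous.
p_t, p_i = discriminate(z_t, w_m), discriminate(z_i, w_m)
adv_loss = -np.mean(np.log(1 - p_t + 1e-9)) - np.mean(np.log(p_i + 1e-9))

print(fused.shape)  # (4, 32): batch of fused 2*16-dim features
```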