Focal and Composed Vision-semantic Modeling for Visual Question Answering

2021 
Visual Question Answering (VQA) is a vital yet challenging task in the field of multimedia comprehension. To correctly answer questions about an image, a VQA model must sufficiently understand the visual scene, and in particular the vision-semantic reasoning between the two modalities. Traditional relation-based methods encode the pairwise relations of objects to boost VQA model performance. However, this simple strategy is insufficient to exploit the abundant concepts expressed by the composition of diverse image objects, leading to sub-optimal performance. In this paper, we propose a focal and composed vision-semantic modeling method, a trainable end-to-end model, for better vision-semantic redundancy removal and compositionality modeling. Concretely, we first introduce the LENA cell, a plug-and-play reasoning module, which removes redundant semantics via a focal mechanism and then models vision-semantic compositionality for better visual reasoning. We then incorporate the cell into a full LENA network, which progressively refines multimodal composed representations and can be leveraged to infer high-order vision-semantics in a multi-step learning manner. Extensive experiments on two benchmark datasets, i.e., VQA v2 and VQA-CP v2, verify the superiority of our model as compared with several state-of-the-art baselines.
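The abstract describes a two-step cell (focal redundancy removal, then compositionality modeling) stacked into a multi-step network. The sketch below is only an illustrative reading of that description, not the paper's actual architecture: the question-conditioned sigmoid gate, the pairwise composition layer, the GRU-based refinement, and the class names `LENACell` and `LENANetwork` are all assumptions introduced here; the answer-vocabulary size of 3129 follows the common VQA v2 setup.

```python
import torch
import torch.nn as nn


class LENACell(nn.Module):
    """Hypothetical sketch of one reasoning cell: a focal step that down-weights
    redundant object features conditioned on the question, followed by a
    pairwise composition step over the retained objects."""

    def __init__(self, dim: int):
        super().__init__()
        self.focal_gate = nn.Linear(2 * dim, 1)   # question-conditioned relevance score
        self.compose = nn.Linear(2 * dim, dim)    # fuses pairs of object features
        self.update = nn.GRUCell(dim, dim)        # refines the multimodal state

    def forward(self, objects: torch.Tensor, question: torch.Tensor) -> torch.Tensor:
        # objects: (N, dim) region features; question: (dim,) question/state embedding
        q = question.unsqueeze(0).expand_as(objects)
        focal = torch.sigmoid(self.focal_gate(torch.cat([objects, q], dim=-1)))  # (N, 1)
        focused = focal * objects                 # redundant regions are suppressed

        # Compose every pair of focused objects and pool, as a stand-in for
        # the paper's vision-semantic compositionality modeling.
        n = focused.size(0)
        left = focused.unsqueeze(1).expand(n, n, -1)
        right = focused.unsqueeze(0).expand(n, n, -1)
        pairs = torch.relu(self.compose(torch.cat([left, right], dim=-1)))  # (N, N, dim)
        composed = pairs.mean(dim=(0, 1))         # (dim,)

        # One refinement step of the multimodal representation.
        return self.update(composed.unsqueeze(0), question.unsqueeze(0)).squeeze(0)


class LENANetwork(nn.Module):
    """Stacks several cells to progressively refine the composed representation."""

    def __init__(self, dim: int, steps: int = 3, num_answers: int = 3129):
        super().__init__()
        self.cells = nn.ModuleList([LENACell(dim) for _ in range(steps)])
        self.classifier = nn.Linear(dim, num_answers)

    def forward(self, objects: torch.Tensor, question: torch.Tensor) -> torch.Tensor:
        state = question
        for cell in self.cells:
            state = cell(objects, state)          # multi-step vision-semantic reasoning
        return self.classifier(state)             # answer logits


# Usage with random features: 36 region features and a 512-d question embedding.
model = LENANetwork(dim=512)
logits = model(torch.randn(36, 512), torch.randn(512))
print(logits.shape)  # torch.Size([3129])
```

The stacked loop mirrors the abstract's claim that the full network infers high-order vision-semantics in multiple refinement steps; the internals of each step would follow the paper's actual focal and composition formulations.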