Bilinear Attention Networks

Authors:
Jin-Hwa Kim, SK T-Brain
Jaehyun Jun, Seoul National University
Byoung-Tak Zhang, Seoul National University & Surromind Robotics

Introduction:

Attention networks in multimodal learning provide an efficient way to utilize given visual information selectively. In this paper, the authors propose bilinear attention networks (BAN), which find bilinear attention distributions to utilize given vision-language information seamlessly.

Abstract:

Attention networks in multimodal learning provide an efficient way to utilize given visual information selectively. However, the computational cost of learning attention distributions for every pair of multimodal input channels is prohibitively expensive. To solve this problem, co-attention builds two separate attention distributions, one for each modality, neglecting the interaction between multimodal inputs. In this paper, we propose bilinear attention networks (BAN) that find bilinear attention distributions to utilize given vision-language information seamlessly. BAN considers bilinear interactions between two groups of input channels, while low-rank bilinear pooling extracts the joint representation for each pair of channels. Furthermore, we propose a variant of multimodal residual networks to exploit the eight attention maps of BAN efficiently. We quantitatively and qualitatively evaluate our model on the visual question answering (VQA 2.0) and Flickr30k Entities datasets, showing that BAN significantly outperforms previous methods and achieves new state-of-the-art results on both datasets.
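To make the mechanism concrete, the sketch below implements a single bilinear-attention glimpse in PyTorch, following the abstract's description: a low-rank bilinear form produces an attention map over every (word, region) pair, and low-rank bilinear pooling weighted by that map yields the joint feature. The class name, dimension arguments, and ReLU nonlinearity are illustrative assumptions, not the authors' released code; the full model stacks eight such glimpses through its multimodal-residual variant.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilinearAttention(nn.Module):
    """Minimal sketch of one BAN-style bilinear attention glimpse.

    X: textual features (batch, n, d_x), e.g. question word embeddings
    Y: visual features  (batch, m, d_y), e.g. detected region features
    """
    def __init__(self, d_x, d_y, d_hid, d_out):
        super().__init__()
        # low-rank projections used to score every (word, region) pair
        self.U = nn.Linear(d_x, d_hid)
        self.V = nn.Linear(d_y, d_hid)
        self.p = nn.Linear(d_hid, 1)   # rank-1 pooling over the hidden dim
        # low-rank projections for the joint representation
        self.U2 = nn.Linear(d_x, d_out)
        self.V2 = nn.Linear(d_y, d_out)

    def forward(self, X, Y):
        # bilinear attention logits for every pair: (b, n, m)
        x = torch.relu(self.U(X))                      # (b, n, h)
        y = torch.relu(self.V(Y))                      # (b, m, h)
        # builds a (b, n, m, h) tensor; fine for a sketch, memory-heavy at scale
        logits = self.p(x.unsqueeze(2) * y.unsqueeze(1)).squeeze(-1)
        # normalize jointly over all n*m pairs, not per modality as in co-attention
        A = F.softmax(logits.view(logits.size(0), -1), dim=1).view_as(logits)
        # low-rank bilinear pooling, weighted by the attention map:
        # f_k = sum_{i,j} A[i,j] * x2[i,k] * y2[j,k]
        x2 = torch.relu(self.U2(X))                    # (b, n, d_out)
        y2 = torch.relu(self.V2(Y))                    # (b, m, d_out)
        f = torch.einsum('bnm,bnk,bmk->bk', A, x2, y2)
        return f, A

# Usage with illustrative sizes (14 question tokens, 36 regions):
x = torch.randn(2, 14, 512)
y = torch.randn(2, 36, 2048)
f, A = BilinearAttention(512, 2048, 256, 1024)(x, y)
print(f.shape, A.shape)   # torch.Size([2, 1024]) torch.Size([2, 14, 36])
```

Note the contrast with co-attention: here a single softmax over all n*m entries lets the map capture interactions between specific word-region pairs, rather than two separate per-modality distributions. In the full model, each glimpse's joint feature is added back to the text representation through a residual connection before the next glimpse is computed.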
