Dense Attention-Guided Network for Boundary-Aware Salient Object Detection
2021
Recently, salient object detection methods have achieved significant performance gains with the development of deep supervised learning. However, most existing methods simply combine low-level and high-level features, without considering that the features of each level should contribute to those of other levels during learning. To address this issue, this paper presents a Dense Attention-guided Network (DANet), which builds dense attention-guided information flows to integrate multi-level features. Specifically, we propose a Residual Attention Module (RAM) to highlight important features and suppress unimportant ones and background noise. In the network, the attention-guided features are transferred to other levels through dense connections, and a Feature Aggregation Module (FAM) adaptively fuses the multi-level feature maps. For accurate boundary estimation, we further design a novel boundary loss function to preserve the edges of salient regions. Experiments show that the proposed DANet achieves state-of-the-art performance on six widely used salient object detection benchmarks under different evaluation metrics. Moreover, our method runs at 26 fps on a single GPU and requires no pre-processing or post-processing.
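The abstract describes a Residual Attention Module that gates features with an attention map while a residual shortcut preserves the original signal. The paper's exact design is not given here, so the following is only a minimal NumPy sketch of that general pattern; the pooled-then-gated attention branch and the weight vector `w` are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def residual_attention(features, w):
    """Illustrative residual-attention sketch (not the paper's exact RAM).

    features: (C, H, W) feature map.
    w: (C,) per-channel weights standing in for a learned attention
       branch (hypothetical placeholder for trained parameters).
    """
    # Hypothetical attention branch: global average pool, then a
    # per-channel sigmoid gate that emphasizes useful channels.
    pooled = features.mean(axis=(1, 2))            # (C,)
    gate = sigmoid(w * pooled)[:, None, None]      # (C, 1, 1), broadcast over H, W
    # Gated features plus a residual shortcut, so suppressing a channel
    # attenuates rather than erases it.
    return features * gate + features
```

With all-zero weights the gate is uniformly 0.5, so every activation is scaled by 1.5; trained weights would instead push the gate toward 0 for background channels and toward 1 for salient ones.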