CPSAM: Channel and Position Squeeze Attention Module

2021 
In deep neural networks, how to model long-range dependencies in time or space has long been an open problem. By aggregating a query-specific global context at each query location, Non-Local (NL) networks proposed a pioneering method for capturing long-range dependencies. However, the NL network faces several problems: 1) for different query positions in an image, the long-range dependencies modeled by the NL network are quite similar, so building pixel-level pairwise relations is a waste of computation; 2) the NL network focuses only on capturing spatial long-range dependencies and neglects channel-wise attention. In response to these problems, we propose the Channel and Position Squeeze Attention Module (CPSAM). Specifically, given an intermediate feature map, our module infers attention maps along the channel and spatial dimensions in parallel. The Channel Squeeze Attention Module selectively aggregates features from different positions through a single query-independent attention map. Meanwhile, the Position Squeeze Attention Module uses both average and max pooling to compress the spatial dimension and integrate the correlations among all channel maps. Finally, the outputs of the two attention modules are combined through a convolutional layer to further enhance the feature representation. Compared to the NL network, we achieve higher accuracy with fewer parameters on CIFAR-100 and ImageNet-1k. The code will be publicly available soon.
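To make the two parallel branches and their fusion concrete, the following is a minimal PyTorch sketch of one plausible reading of the abstract: a query-independent spatial aggregation branch, a channel-attention branch driven by average and max pooling, and a 1x1 convolution that fuses the two outputs. All class names, the reduction ratio r, and other internal details are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelSqueezeAttention(nn.Module):
    """Query-independent spatial aggregation: a single softmax attention map over all
    positions pools the feature map into one global context vector shared by every
    query location, avoiding pixel-level pairwise relations."""

    def __init__(self, channels: int, r: int = 4):
        super().__init__()
        self.attn = nn.Conv2d(channels, 1, kernel_size=1)   # spatial attention logits
        self.transform = nn.Sequential(                      # transform the pooled context
            nn.Conv2d(channels, channels // r, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // r, channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        weights = F.softmax(self.attn(x).view(b, 1, h * w), dim=-1)        # (B, 1, HW)
        context = torch.bmm(x.view(b, c, h * w), weights.transpose(1, 2))  # (B, C, 1)
        context = self.transform(context.view(b, c, 1, 1))                 # (B, C, 1, 1)
        return x + context                                                 # broadcast to all positions


class PositionSqueezeAttention(nn.Module):
    """Channel attention: average and max pooling compress the spatial dimension,
    and a shared bottleneck MLP integrates correlations between channel maps."""

    def __init__(self, channels: int, r: int = 4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // r, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // r, channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1))  # (B, C, 1, 1)
        mx = self.mlp(F.adaptive_max_pool2d(x, 1))   # (B, C, 1, 1)
        scale = torch.sigmoid(avg + mx)              # per-channel weights
        return x * scale


class CPSAM(nn.Module):
    """Runs the two branches in parallel and fuses their outputs with a 1x1 conv."""

    def __init__(self, channels: int, r: int = 4):
        super().__init__()
        self.csam = ChannelSqueezeAttention(channels, r)
        self.psam = PositionSqueezeAttention(channels, r)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([self.csam(x), self.psam(x)], dim=1))


if __name__ == "__main__":
    x = torch.randn(2, 64, 32, 32)  # an intermediate feature map
    print(CPSAM(64)(x).shape)       # torch.Size([2, 64, 32, 32])
```

Under these assumptions the block preserves the input shape, so it could be dropped after any intermediate stage of a backbone in the same way an NL block is typically inserted.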