A cross-modal crowd counting method combining CNN and cross-modal transformer

2023 
Cross-modal crowd counting aims to exploit information from different modalities to generate crowd density maps and thereby estimate the number of pedestrians more accurately in unconstrained scenes. Because images of different modalities differ substantially, effectively fusing information across modalities remains a challenging problem. To address this problem, we propose a cross-modal crowd counting method based on a CNN and a novel cross-modal transformer, which effectively fuses information between modalities and boosts counting accuracy in unconstrained scenes. Concretely, we first design two CNN branches to capture modality-specific image features. We then design a novel cross-modal transformer to extract cross-modal global features from the modality-specific features. Furthermore, we propose a cross-layer connection structure that links the front-end and back-end information of the network by adding features from different layers. At the end of the network, we develop a cross-modal attention module that strengthens the cross-modal feature representation by extracting the complementary information between modal features. Experimental results show that the proposed method combining a CNN and a novel cross-modal transformer achieves state-of-the-art performance: it not only effectively improves the accuracy and robustness of cross-modal crowd counting but also generalizes well to multimodal crowd counting.
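The abstract does not provide implementation details; the sketch below is only a minimal PyTorch illustration of the overall pipeline it describes (two modality-specific CNN branches, a transformer-based fusion stage, a cross-layer skip connection, and a cross-modal attention gate before the density head). All module and variable names (CrossModalCrowdCounter, rgb_branch, tir_branch, the gating layers, etc.) are assumptions for illustration, and a standard nn.TransformerEncoder stands in for the paper's cross-modal transformer rather than reproducing its exact design.

```python
import torch
import torch.nn as nn


class CrossModalCrowdCounter(nn.Module):
    """Illustrative dual-branch counter: two CNN branches extract
    modality-specific features, a transformer encoder fuses the
    concatenated spatial tokens of both modalities, a simple
    cross-modal attention gate reweights each modality with the
    other's global context, and a 1x1 head regresses the density map."""

    def __init__(self, channels=64, heads=4, layers=2):
        super().__init__()

        # Modality-specific CNN branches (e.g. RGB and thermal).
        def branch():
            return nn.Sequential(
                nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            )
        self.rgb_branch = branch()
        self.tir_branch = branch()

        # Cross-modal fusion over flattened tokens of both modalities.
        enc_layer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=heads, batch_first=True)
        self.fusion = nn.TransformerEncoder(enc_layer, num_layers=layers)

        # Cross-modal attention: each modality gated by the other's pooled context.
        self.gate_rgb = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())
        self.gate_tir = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())

        # Density regression head.
        self.head = nn.Conv2d(2 * channels, 1, kernel_size=1)

    def forward(self, rgb, tir):
        c_rgb = self.rgb_branch(rgb)                      # (B, C, H, W) modality-specific
        c_tir = self.tir_branch(tir)
        b, c, h, w = c_rgb.shape

        tokens = torch.cat([c_rgb, c_tir], dim=2)         # stack modalities along height
        tokens = tokens.flatten(2).transpose(1, 2)        # (B, 2HW, C)
        tokens = self.fusion(tokens)                      # cross-modal global features
        t_rgb, t_tir = tokens[:, : h * w], tokens[:, h * w:]

        # Cross-modal attention: gate each modality with the other's pooled context.
        g_rgb = self.gate_rgb(t_tir.mean(dim=1)).unsqueeze(1)
        g_tir = self.gate_tir(t_rgb.mean(dim=1)).unsqueeze(1)
        a_rgb = (t_rgb * g_rgb).transpose(1, 2).reshape(b, c, h, w)
        a_tir = (t_tir * g_tir).transpose(1, 2).reshape(b, c, h, w)

        # Cross-layer connection: add front-end CNN features to back-end features.
        a_rgb = a_rgb + c_rgb
        a_tir = a_tir + c_tir

        density = torch.relu(self.head(torch.cat([a_rgb, a_tir], dim=1)))
        return density, density.sum(dim=(1, 2, 3))        # density map and per-image count


if __name__ == "__main__":
    model = CrossModalCrowdCounter()
    rgb = torch.randn(1, 3, 64, 64)
    tir = torch.randn(1, 3, 64, 64)
    dmap, count = model(rgb, tir)
    print(dmap.shape, count.shape)   # torch.Size([1, 1, 64, 64]) torch.Size([1])
```

In a real system the count would be trained against ground-truth density maps (e.g. with an MSE loss on the map), and the predicted count is simply the sum of the map, which is the standard density-map formulation of crowd counting.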