Deep medical cross-modal attention hashing

2021 
Medical cross-modal retrieval aims to retrieve semantically similar medical instances across different modalities, such as retrieving X-ray images using radiology reports or retrieving radiology reports using X-ray images. The main challenges for medical cross-modal retrieval are the semantic gap and the small visual differences between different categories of medical images. To address these issues, we present a novel end-to-end deep hashing method, called Deep Medical Cross-Modal Attention Hashing (DMCAH), which extracts global features with global average pooling and local features with recurrent attention. Specifically, we recursively move from coarse- to fine-grained regions of images to locate discriminative regions more accurately, and recursively extract discriminative semantic information from texts, from the sentence level down to the word level. We then select discriminative features by aggregating the finer features via adaptive attention. Finally, to reduce the semantic gap, we map image and report features into a common space and obtain discriminative hash codes. Comprehensive experimental results on the large-scale medical dataset MIMIC-CXR and the natural scene dataset MS-COCO show that DMCAH achieves better performance than existing cross-modal hashing methods.
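The sketch below illustrates the general idea described in the abstract: pool global image features, aggregate local (region or word) features with attention, and project both modalities into a shared binary hash space. It is a minimal illustration in PyTorch, not the authors' implementation; module and parameter names such as `AttentionPool` and `hash_bits` are assumptions, and a single adaptive attention step stands in for the recurrent coarse-to-fine attention of DMCAH.

```python
# Minimal cross-modal hashing sketch (hypothetical; not the DMCAH implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionPool(nn.Module):
    """Aggregate local features (image regions or words) with adaptive attention."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, local_feats):                            # (batch, n_local, dim)
        weights = F.softmax(self.score(local_feats), dim=1)    # attention over locals
        return (weights * local_feats).sum(dim=1)              # (batch, dim)


class CrossModalHasher(nn.Module):
    """Map image and text features into a common Hamming space."""
    def __init__(self, img_dim, txt_dim, hash_bits=64):
        super().__init__()
        self.img_attn = AttentionPool(img_dim)
        self.txt_attn = AttentionPool(txt_dim)
        self.img_proj = nn.Linear(2 * img_dim, hash_bits)      # global + attended local
        self.txt_proj = nn.Linear(txt_dim, hash_bits)

    def forward(self, img_regions, txt_words):
        # Global image feature via average pooling over regions,
        # local feature via attention (stand-in for recurrent attention).
        img_global = img_regions.mean(dim=1)
        img_local = self.img_attn(img_regions)
        img_code = torch.tanh(self.img_proj(torch.cat([img_global, img_local], dim=-1)))
        txt_code = torch.tanh(self.txt_proj(self.txt_attn(txt_words)))
        return img_code, txt_code                              # relaxed codes in [-1, 1]


if __name__ == "__main__":
    model = CrossModalHasher(img_dim=512, txt_dim=256, hash_bits=32)
    imgs = torch.randn(4, 49, 512)                             # e.g. 7x7 CNN region features
    txts = torch.randn(4, 20, 256)                             # e.g. word-level text features
    h_img, h_txt = model(imgs, txts)
    binary_img, binary_txt = h_img.sign(), h_txt.sign()        # discrete hash codes for retrieval
    print(binary_img.shape, binary_txt.shape)
```

At retrieval time, the relaxed codes are binarized with `sign()` and instances across modalities are ranked by Hamming distance in the common space.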