A Statistical Approach to Mining Semantic Similarity for Deep Unsupervised Hashing

2021 
Most deep unsupervised hashing methods first construct pairwise semantic similarity information and then learn to map images into compact hash codes while preserving that similarity structure, which means the quality of the hash codes depends heavily on the constructed semantic similarity structure. However, since the image features for each kind of semantics are usually scattered in a high-dimensional space with an unknown distribution, previous methods based on pairwise cosine distances can introduce a large number of false positives and false negatives for points near distribution boundaries in the local semantic structure. To address this limitation, we propose a general distribution-based metric to measure the pairwise distance between images. Specifically, each image is characterized by its random augmentations, which can be viewed as samples from the corresponding latent semantic distribution. We then estimate the distance between two images by computing the divergence between the sample distributions of their semantics. Applying this new metric to deep unsupervised hashing, we propose Distribution-based similArity sTructure rEconstruction (DATE). DATE generates more accurate semantic similarity information by using the non-parametric ball divergence. Moreover, DATE explores both semantic-preserving learning and contrastive learning to obtain high-quality hash codes. Extensive experiments on several widely used datasets validate the superiority of DATE.
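The abstract's core idea is to compare two images through the divergence between the feature distributions of their random augmentations, using the non-parametric ball divergence as the two-sample statistic. The sketch below implements the empirical ball divergence (Pan et al.'s two-sample statistic) between two sets of augmentation features; the function name and the use of Euclidean balls are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def ball_divergence(X, Y):
    """Empirical ball divergence between two samples.

    X: (n, d) array, e.g. features of n augmentations of image A (hypothetical input).
    Y: (m, d) array, features of m augmentations of image B.
    Returns a non-negative scalar; 0 when the two samples coincide.
    """
    def half(S, T):
        # For each ball B(s_i, ||s_i - s_j||) centered at a point of S,
        # compare the fraction of S-points and T-points it contains.
        n = len(S)
        d_SS = np.linalg.norm(S[:, None] - S[None, :], axis=-1)  # (n, n)
        d_ST = np.linalg.norm(S[:, None] - T[None, :], axis=-1)  # (n, m)
        total = 0.0
        for i in range(n):
            for j in range(n):
                r = d_SS[i, j]
                in_S = np.mean(d_SS[i] <= r)  # fraction of S inside the ball
                in_T = np.mean(d_ST[i] <= r)  # fraction of T inside the ball
                total += (in_S - in_T) ** 2
        return total / n ** 2

    # Symmetrize: balls centered at X-points plus balls centered at Y-points.
    return half(X, Y) + half(Y, X)
```

Unlike a single pairwise cosine distance between two feature vectors, this statistic compares whole sample clouds, so boundary points of overlapping semantic distributions are less likely to be mislabeled as similar or dissimilar pairs.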