Discrete matrix factorization hashing for cross-modal retrieval

2021 
Cross-modal hashing has recently attracted considerable attention in large-scale retrieval tasks due to its low storage cost and high retrieval efficiency. However, existing hashing methods still have issues that need to be solved. For example, most existing cross-modal hashing methods convert the original data into a common Hamming space to learn unified hash codes, which ignores the specific properties of multi-modal data. In addition, most of them relax the discrete constraint when learning hash codes, which may lead to quantization loss and suboptimal performance. To address these problems, this paper proposes a novel cross-modal retrieval method named discrete matrix factorization hashing (DMFH). DMFH is a two-stage approach. In the first stage, given training data, DMFH exploits matrix factorization to learn a modality-specific semantic representation for each modality, then generates the corresponding hash codes by linear projection. Meanwhile, to ensure that the hash codes preserve the semantic similarity between different modalities, DMFH optimizes the hash codes using an affinity matrix constructed from the label information. To handle the discrete constraint in hash-code learning during this stage, DMFH proposes a discrete optimization algorithm. In the second stage, given the hash codes learned in the first stage, DMFH utilizes kernel logistic regression to learn nonlinear features from unseen instances, then generates the corresponding hash codes for each modality. Extensive experimental results on three public benchmark datasets show that the proposed DMFH outperforms several state-of-the-art cross-modal hashing methods in terms of accuracy and efficiency.
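The two-stage pipeline described above can be illustrated with a toy sketch. This is not the authors' implementation: the data sizes, the alternating-least-squares factorization, the spectral-style binarized target codes (standing in for DMFH's discrete optimization), and the plain gradient-descent kernel logistic regression are all illustrative assumptions; only the overall structure (per-modality factorization, label-affinity supervision, sign-based codes, kernel logistic regression for out-of-sample extension) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (all hypothetical): n samples, image/text feature dims,
# latent dim r for the factorization, hash code length k.
n, d_img, d_txt, r, k = 60, 8, 6, 4, 16

# Two-modality training features and one-hot labels (stand-ins for real data).
X = {"img": rng.normal(size=(n, d_img)), "txt": rng.normal(size=(n, d_txt))}
L = np.eye(3)[rng.integers(0, 3, size=n)]
S = 2.0 * (L @ L.T) - 1.0  # label affinity: +1 same class, -1 otherwise

def factorize(Xm, r, iters=50):
    """Modality-specific factorization X ~= U V by alternating least squares."""
    U = rng.normal(size=(Xm.shape[0], r))
    for _ in range(iters):
        V = np.linalg.lstsq(U, Xm, rcond=None)[0]        # fix U, solve for V
        U = np.linalg.lstsq(V.T, Xm.T, rcond=None)[0].T  # fix V, solve for U
    return U, V

# Shared target codes from the affinity matrix: binarized top-k eigenvectors of S
# (a spectral-style stand-in for DMFH's discrete optimization step).
_, Q = np.linalg.eigh(S)
B_target = np.where(Q[:, -k:] >= 0, 1.0, -1.0)

# Stage 1: latent representation per modality, then codes by linear projection.
B = {}
for m, Xm in X.items():
    U, _ = factorize(Xm, r)
    P = np.linalg.lstsq(U, B_target, rcond=None)[0]  # projection into code space
    B[m] = np.where(U @ P >= 0, 1.0, -1.0)

# Stage 2: out-of-sample extension via kernel logistic regression,
# one binary classifier per hash bit (RBF kernel, plain gradient descent).
def rbf(A, C, gamma):
    d2 = ((A[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_bit(K, y, lr=1.0, iters=300, lam=1e-3):
    """Fit dual weights a so that sign(K @ a) predicts one bit y in {-1,+1}."""
    a = np.zeros(K.shape[0])
    t = (y > 0).astype(float)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(K @ a)))              # P(bit = +1)
        a -= lr * (K.T @ (p - t) / len(t) + lam * a)    # regularized gradient step
    return a

gamma = 1.0 / d_img
K_train = rbf(X["img"], X["img"], gamma)
A = np.column_stack([fit_bit(K_train, B["img"][:, j]) for j in range(k)])

# Hash an unseen image-modality query.
x_new = rng.normal(size=(1, d_img))
code_new = np.where(rbf(x_new, X["img"], gamma) @ A >= 0, 1.0, -1.0)
```

Under this sketch, retrieval would compare `code_new` against the stored codes `B["txt"]` by Hamming distance; the analogous stage-2 classifiers for the text modality are trained the same way from `B["txt"]`.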