Online deep hashing for both uni-modal and cross-modal retrieval

2022 
Batch-based hash learning for uni-modal and cross-modal retrieval has made great progress in recent decades. However, methods built on this paradigm cannot adapt to scenarios in which new data arrive continuously as streams; moreover, they are inefficient in training time and memory cost for large-scale search because they must accumulate the entire database together with the newly arriving data before training. Although a few online hash retrieval methods have been proposed in recent years to address these issues, they rely on shallow models, and none of them supports both uni-modal and cross-modal retrieval in a single framework. To this end, we propose a novel method, Online Deep Hashing for both Uni-modal and Cross-modal retrieval (ODHUC). For online deep hashing, ODHUC first trains image and text
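As a rough illustration of the online (streaming) hashing setting that the abstract contrasts with batch learning, the following is a minimal, hypothetical Python/PyTorch sketch, not ODHUC itself: the network, loss, dimensions, and hyperparameters are illustrative assumptions only. It shows the key property the abstract emphasizes, namely that the model is updated one incoming chunk at a time and only the current chunk is needed for each update, rather than retraining on the accumulated database.

    # Hypothetical sketch of online deep hashing on a data stream (not ODHUC).
    import torch
    import torch.nn as nn

    CODE_LEN = 32   # number of hash bits (assumed)
    FEAT_DIM = 512  # input feature dimension (assumed)

    # Small hash network mapping features to a relaxed code in (-1, 1).
    hash_net = nn.Sequential(
        nn.Linear(FEAT_DIM, 256), nn.ReLU(),
        nn.Linear(256, CODE_LEN), nn.Tanh(),
    )
    optimizer = torch.optim.SGD(hash_net.parameters(), lr=1e-2)

    def online_update(features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        """Update the hash network on one incoming chunk and return its binary codes."""
        relaxed = hash_net(features)                     # (n, CODE_LEN), values in (-1, 1)
        # Pairwise semantic similarity: +1 if two samples share a label, else -1.
        sim = (labels @ labels.t() > 0).float() * 2 - 1  # (n, n)
        # Similarity-preserving loss: code inner products should match the similarity.
        inner = relaxed @ relaxed.t()
        loss = ((inner / CODE_LEN - sim) ** 2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return torch.sign(hash_net(features).detach())   # binary codes for this chunk

    # Simulated stream: chunks arrive one at a time; each update touches only the
    # current chunk, and its codes are appended to the growing database.
    database_codes = []
    for _ in range(5):
        feats = torch.randn(64, FEAT_DIM)                # new chunk of (e.g. image) features
        lbls = torch.nn.functional.one_hot(
            torch.randint(0, 10, (64,)), num_classes=10).float()
        database_codes.append(online_update(feats, lbls))
    database_codes = torch.cat(database_codes)

The design point the sketch makes concrete is that memory and training cost per step depend only on the chunk size, not on the accumulated database, which is the efficiency argument the abstract raises against batch-based methods.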