Unsupervised Deep Hashing with Similarity-Adaptive and Discrete Optimization
IEEE Transactions on Pattern Analysis and Machine Intelligence (IF 23.6). Pub Date: 2018-01-05. DOI: 10.1109/tpami.2018.2789887
Fumin Shen, Yan Xu, Li Liu, Yang Yang, Zi Huang, Heng Tao Shen

Recent vision and learning studies show that learning compact hash codes can facilitate massive data processing with significantly reduced storage and computation. In particular, learning deep hash functions has greatly improved retrieval performance, typically with semantic supervision. In contrast, current unsupervised deep hashing algorithms can hardly achieve satisfactory performance, due either to relaxed optimization or to the absence of a similarity-sensitive objective. In this work, we propose a simple yet effective unsupervised hashing framework, named Similarity-Adaptive Deep Hashing (SADH), which alternates among three training modules: deep hash model training, similarity graph updating, and binary code optimization. The key difference from the widely used two-step hashing method is that the output representations of the learned deep model help update the similarity graph matrix, which is then used to improve the subsequent code optimization. In addition, to produce high-quality binary codes, we devise an effective discrete optimization algorithm which can directly handle the binary constraints with a general hashing loss. Extensive experiments validate the efficacy of SADH, which consistently outperforms state-of-the-art methods by large margins.
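The alternating structure described in the abstract can be illustrated with a toy sketch. Everything below is an assumption for illustration, not the paper's actual method: the similarity graph is a plain cosine-similarity matrix, the discrete code optimization is a simple sign fixed-point update on a graph-hashing objective tr(B^T S B) that keeps B binary throughout (rather than relaxing it), and the deep hash model training step is replaced by a trivial feature refresh.

```python
import math
import random

def build_similarity_graph(feats):
    """Cosine-similarity graph over the current representations
    (illustrative stand-in for SADH's similarity graph update)."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a)) or 1.0
        nb = math.sqrt(sum(x * x for x in b)) or 1.0
        return dot / (na * nb)
    return [[cos(a, b) for b in feats] for a in feats]

def optimize_codes(S, n_bits, n_iters=20, seed=0):
    """Toy discrete optimization: B stays in {-1, +1} at every step and is
    updated by a sign fixed-point iteration on tr(B^T S B), a common
    graph-hashing objective; no continuous relaxation is used."""
    rng = random.Random(seed)
    n = len(S)
    B = [[rng.choice((-1, 1)) for _ in range(n_bits)] for _ in range(n)]
    for _ in range(n_iters):
        B = [[1 if sum(S[i][j] * B[j][k] for j in range(n)) >= 0 else -1
              for k in range(n_bits)]
             for i in range(n)]
    return B

# Alternate among the modules, as the abstract describes. The deep hash
# model training step is elided: here the "model" just blends features
# toward the codes, whereas the paper trains a deep network on B.
rng = random.Random(1)
feats = [[rng.gauss(0, 1) for _ in range(4)] for _ in range(6)]
for _ in range(3):
    S = build_similarity_graph(feats)   # graph reflects current features
    B = optimize_codes(S, n_bits=4)     # discrete code optimization
    feats = [[0.5 * f + 0.5 * b for f, b in zip(fr, br)]
             for fr, br in zip(feats, B)]  # refreshed representations
print(len(B), len(B[0]))
```

The point of the sketch is the feedback loop: the codes are optimized against a graph built from the *updated* representations, rather than against a graph fixed once up front as in standard two-step hashing.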

Updated: 2018-11-05