Unsupervised Multi-modal Hashing for Cross-modal retrieval
arXiv - CS - Multimedia Pub Date : 2019-03-26 , DOI: arxiv-1904.00726
Jun Yu, Xiao-Jun Wu

With the advantages of low storage cost and high query efficiency, hashing has received much attention in the Big Data domain. In this paper, we propose a novel unsupervised hashing method that addresses the open problem of directly preserving manifold structure in the learned hash codes. To this end, both the semantic correlation in the textual space and the local geometric structure in the visual space are explored simultaneously in our framework. In addition, an ℓ2,1-norm constraint is imposed on the projection matrices to learn a discriminative hash function for each modality. Extensive experiments on three publicly available datasets show that our method achieves superior performance over state-of-the-art methods.
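The two ingredients named in the abstract can be illustrated with a minimal sketch: the ℓ2,1-norm (the sum of the Euclidean norms of a matrix's rows, which encourages row sparsity and hence feature selection in the projection matrix), and binarizing linear projections into hash codes with the sign function. This is only a generic illustration of these standard operations, not the paper's actual optimization; the function names and toy dimensions below are assumptions.

```python
import numpy as np

def l21_norm(W):
    # ℓ2,1-norm: sum over rows of the row-wise Euclidean (ℓ2) norm.
    # Penalizing this drives whole rows of W toward zero,
    # yielding a discriminative, feature-selective projection.
    return float(np.sum(np.sqrt(np.sum(W ** 2, axis=1))))

def hash_codes(X, W):
    # Project samples X (n x d) with W (d x c), then binarize
    # each projection with sign() to obtain c-bit codes in {-1, 0, +1}.
    return np.sign(X @ W)

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5))   # hypothetical: 4 samples, 5-dim features
W = rng.standard_normal((5, 8))   # hypothetical: projection to 8 hash bits

print(l21_norm(W))                # regularization value of this projection
print(hash_codes(X, W).shape)     # (4, 8): one 8-bit code per sample
```

In practice such a penalty is added to the reconstruction/manifold objective with a trade-off weight, so minimizing the total loss jointly fits the data and prunes uninformative feature rows.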

Updated: 2020-09-29