Beyond ITQ: Efficient binary multi-view subspace learning for instance retrieval
Journal of Visual Communication and Image Representation (IF 2.6), Pub Date: 2021-07-17, DOI: 10.1016/j.jvcir.2021.103234
Zhijian Wu, Jun Li, Jianhua Xu, Wankou Yang

Existing hashing methods mainly handle either feature-based nearest-neighbor search or category-level image retrieval, whereas few efforts are devoted to the instance retrieval problem. In this paper, we propose a binary multi-view fusion framework that directly recovers a latent Hamming subspace from multi-view features for instance retrieval. More specifically, multi-view subspace reconstruction and binary quantization are integrated in a unified framework so as to minimize the discrepancy between the original high-dimensional multi-view Euclidean space and the resulting compact Hamming subspace. Besides, our method is essentially an unsupervised learning scheme that involves no labeled data, and thus can be used when supervised information is unavailable or insufficient. Experiments on public benchmark and large-scale datasets reveal that our method achieves retrieval performance competitive with the state of the art and scales well to large-scale scenarios.
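For context, below is a minimal illustrative sketch (Python/NumPy) of the ITQ-style binary quantization baseline that the title refers to, applied to naively concatenated multi-view features. The function name, parameters, and synthetic data are assumptions chosen for illustration; this is not the authors' method, which instead learns the latent Hamming subspace jointly with the multi-view reconstruction rather than quantizing a simple concatenation.

# Illustrative ITQ-style baseline: concatenate views, PCA-project, then iteratively
# rotate to minimize the quantization loss ||B - V R||_F^2 (Gong & Lazebnik, ITQ).
import numpy as np

def itq_baseline(views, n_bits=64, n_iters=50, seed=0):
    """views: list of (n_samples, d_v) feature matrices from different views."""
    rng = np.random.default_rng(seed)
    X = np.hstack(views)                      # naive multi-view fusion by concatenation
    X = X - X.mean(axis=0, keepdims=True)     # zero-center before PCA

    # PCA projection to the code length (top n_bits principal directions)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    W = Vt[:n_bits].T                         # (d_total, n_bits)
    V = X @ W                                 # real-valued low-dimensional embedding

    # Random orthogonal initialization of the rotation R
    R, _ = np.linalg.qr(rng.standard_normal((n_bits, n_bits)))

    # Alternating minimization of the quantization loss
    for _ in range(n_iters):
        B = np.sign(V @ R)                    # fix R, update binary codes
        U, _, St = np.linalg.svd(B.T @ V)     # fix B, update R by orthogonal Procrustes
        R = (U @ St).T

    codes = (np.sign(V @ R) > 0).astype(np.uint8)
    return codes, W, R

if __name__ == "__main__":
    # Two synthetic "views" of 1000 samples, e.g. features from two different descriptors
    views = [np.random.randn(1000, 512), np.random.randn(1000, 256)]
    codes, W, R = itq_baseline(views, n_bits=32)
    print(codes.shape)  # (1000, 32)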

Updated: 2021-07-22