Robust Unsupervised Cross-modal Hashing for Multimedia Retrieval
ACM Transactions on Information Systems (IF 5.4) | Pub Date: 2020-06-05 | DOI: 10.1145/3389547
Miaomiao Cheng, Liping Jing, Michael K. Ng

With the rapid development of social websites, there are growing opportunities to obtain different media types (such as text, image, and video) describing the same topic from large-scale heterogeneous data sources. To efficiently identify inter-media correlations for multimedia retrieval, unsupervised cross-modal hashing (UCMH) has attracted increasing interest due to its significant reduction in computation and storage. However, most UCMH methods assume that the data from different modalities are well paired. As a result, existing UCMH methods may not achieve satisfactory performance when only partially paired data are available. In this article, we propose a new type of UCMH method called robust unsupervised cross-modal hashing (RUCMH). The major contribution lies in jointly learning modality-specific hash functions, exploring the correlations among modalities with partial or even no pairwise correspondence, and preserving the information of the original features as much as possible. The learning process can be modeled as a joint minimization problem, and the corresponding optimization algorithm is presented. A series of experiments is conducted on four real-world datasets (Wiki, MIRFlickr, NUS-WIDE, and MS-COCO). The results demonstrate that RUCMH significantly outperforms state-of-the-art unsupervised cross-modal hashing methods, especially in the partially paired case, which validates its effectiveness.
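To make the cross-modal hashing setting concrete, the sketch below shows the generic pipeline the abstract refers to: each modality gets its own hash function that projects features into a shared code space, codes are binarized to ±1, and retrieval across modalities reduces to Hamming-distance ranking. This is a minimal illustrative example on synthetic paired data, not RUCMH's actual objective or optimization algorithm; the least-squares projection, the toy feature dimensions, and all function names are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: paired image/text features generated from a shared latent code,
# standing in for "different media types describing the same topic".
n_paired, d_img, d_txt, n_bits = 100, 32, 16, 8
latent = rng.standard_normal((n_paired, n_bits))        # shared latent codes
X_img = latent @ rng.standard_normal((n_bits, d_img))   # image-modality features
X_txt = latent @ rng.standard_normal((n_bits, d_txt))   # text-modality features

def learn_hash_projection(X, Z):
    """Modality-specific projection W minimizing ||X W - Z||_F (least squares).

    A stand-in for a learned hash function: real UCMH methods optimize W
    jointly with the codes under quantization and correlation constraints.
    """
    W, *_ = np.linalg.lstsq(X, Z, rcond=None)
    return W

W_img = learn_hash_projection(X_img, latent)
W_txt = learn_hash_projection(X_txt, latent)

def hash_codes(X, W):
    """Binarize projected features into +/-1 hash codes."""
    return np.sign(X @ W)

B_img = hash_codes(X_img, W_img)
B_txt = hash_codes(X_txt, W_txt)

def hamming_retrieve(query_code, db_codes):
    """Rank database items by Hamming distance to the query code.

    For +/-1 codes of length n_bits, the inner product q . b equals
    n_bits - 2 * hamming(q, b), so distance = (n_bits - q . b) / 2.
    """
    dist = (n_bits - query_code @ db_codes.T) / 2
    return np.argsort(dist)

# Cross-modal query: use an image's binary code to rank all text codes.
ranking = hamming_retrieve(B_img[0], B_txt)
```

Because the two projections map paired items to the same latent code here, the image query's true text counterpart sits at Hamming distance 0 and ranks first; the partially paired setting RUCMH targets is precisely the case where such clean pairings are unavailable for part of the data, so the alignment must be recovered during learning.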

Updated: 2020-06-05