Deep Reinforcement Hashing with Redundancy Elimination for Effective Image Retrieval
Pattern Recognition (IF 7.5) Pub Date: 2020-04-01, DOI: 10.1016/j.patcog.2019.107116
Juexu Yang, Yuejie Zhang, Rui Feng, Tao Zhang, Weiguo Fan

Abstract Hashing is one of the most promising techniques for approximate nearest neighbor search due to its time efficiency and low memory cost. Recently, with the help of deep learning, deep supervised hashing can perform representation learning and compact hash code learning jointly in an end-to-end style, and obtains better retrieval accuracy than non-deep methods. However, most deep hashing methods are trained with a pair-wise or triplet loss in a mini-batch style, which makes them inefficient at data sampling and unable to preserve global similarity information. Besides, many existing methods generate hash codes with redundant or even harmful bits, which wastes space and may lower retrieval accuracy. In this paper, we propose a novel deep reinforcement hashing model with redundancy elimination, called Deep Reinforcement De-Redundancy Hashing (DRDH), which can fully exploit large-scale similarity information and eliminate redundant hash bits with deep reinforcement learning. DRDH conducts hash code inference in a block-wise style and uses a Deep Q Network (DQN) to eliminate redundant bits. Very promising results have been achieved on four public datasets, i.e., CIFAR-10, NUS-WIDE, MS-COCO, and Open-Images-V4, which demonstrate that our method generates highly compact hash codes and yields better retrieval performance than state-of-the-art methods.
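For context on the pair-wise training objective that the abstract contrasts DRDH with, below is a minimal sketch, assuming PyTorch, of a generic pair-wise similarity-preserving hashing loss computed over a mini-batch. It is illustrative background only, not the paper's DRDH model; the function name, loss weight, and toy batch are hypothetical.

```python
# Minimal sketch of a generic pair-wise hashing loss (background illustration,
# not the paper's DRDH). Assumes PyTorch; `codes` are relaxed (tanh) outputs of
# a hashing head, `sim` is the binary pairwise similarity matrix of the batch.
import torch

def pairwise_hashing_loss(codes: torch.Tensor, sim: torch.Tensor) -> torch.Tensor:
    """codes: (B, K) relaxed hash codes in [-1, 1]; sim: (B, B), 1 for similar pairs, 0 otherwise."""
    k = codes.size(1)
    inner = codes @ codes.t() / k            # scaled inner product, in [-1, 1]
    target = 2.0 * sim - 1.0                 # map {0, 1} similarities to {-1, +1}
    similarity_term = (inner - target).pow(2).mean()       # preserve pairwise similarity
    quantization_term = (codes.abs() - 1.0).pow(2).mean()  # push relaxed codes toward {-1, +1}
    return similarity_term + 0.1 * quantization_term

# Toy usage on a 4-image mini-batch with 16-bit codes and labels 0/0/1/1.
raw = torch.randn(4, 16, requires_grad=True)
codes = torch.tanh(raw)
labels = torch.tensor([0, 0, 1, 1])
sim = (labels[:, None] == labels[None, :]).float()
loss = pairwise_hashing_loss(codes, sim)
loss.backward()
```

Because the similarity matrix is built only from the labels inside the current batch, such objectives see just a small slice of the global similarity structure at each step, which is the data-sampling inefficiency the abstract refers to.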

Updated: 2020-04-01