Triplet Deep Hashing with Joint Supervised Loss Based on Deep Neural Networks.
Computational Intelligence and Neuroscience (IF 3.120) | Pub Date: 2019-10-09 | DOI: 10.1155/2019/8490364
Mingyong Li, Ziye An, Qinmin Wei, Kaiyue Xiang, Yan Ma

In recent years, with the explosion of multimedia data from search engines, social media, and e-commerce platforms, there is an urgent need for fast retrieval methods for massive data. Hashing is widely used in large-scale, high-dimensional data search because of its low storage cost and fast query speed. Thanks to the great success of deep learning in many fields, deep learning has been introduced into hashing-based retrieval, where a deep neural network learns image features and hash codes simultaneously and achieves better performance than traditional hashing methods. However, existing deep hashing methods have limitations; for example, most consider only one kind of supervised loss, which leads to insufficient use of the supervised information. To address this issue, we propose a triplet deep hashing method with a joint supervised loss based on convolutional neural networks (JLTDH). JLTDH combines a triplet likelihood loss with a linear classification loss and adopts triplet supervised labels, which carry richer supervision than pointwise and pairwise labels. At the same time, to overcome the cubic growth in the number of triplets and make triplet training more effective, we adopt a novel triplet selection method. The whole process is divided into two stages. In the first stage, the triplets produced by the selection method are fed into three CNNs with shared weights for image feature learning, and the last layer of the network outputs a preliminary hash code. In the second stage, relying on the hash codes from the first stage and the joint loss function, the network is further optimized so that the generated hash codes achieve higher query precision. We perform extensive experiments on three public benchmark datasets: CIFAR-10, NUS-WIDE, and MS-COCO. Experimental results demonstrate that the proposed method outperforms the compared methods and is also superior to all previous deep hashing methods based on triplet labels.
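
The joint objective described in the abstract can be illustrated with a short, self-contained sketch. The snippet below is a minimal PyTorch approximation of the two ideas: three weight-sharing CNN branches that map a (query, positive, negative) triplet to relaxed hash codes, and a loss that adds a triplet likelihood term to a linear classification term. The class name JLTDHSketch, the hyperparameters alpha and margin, and the softplus form of the likelihood term are illustrative assumptions, not the paper's published formulation.

```python
# Hedged sketch of triplet deep hashing with a joint supervised loss.
# Assumptions: AlexNet backbone, tanh relaxation of hash codes, and a
# softplus-based negative log-likelihood for the triplet ranking term.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class JLTDHSketch(nn.Module):
    def __init__(self, hash_bits: int = 48, num_classes: int = 10):
        super().__init__()
        # A single backbone is reused for all three branches,
        # which is equivalent to "three CNNs with shared weights".
        backbone = models.alexnet(weights=None)
        backbone.classifier[-1] = nn.Linear(4096, hash_bits)
        self.backbone = backbone
        # Linear classifier applied to the relaxed hash code.
        self.classifier = nn.Linear(hash_bits, num_classes)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # tanh keeps the relaxed code in (-1, 1); sign() gives binary codes at query time.
        return torch.tanh(self.backbone(x))

    def forward(self, query, positive, negative):
        return self.encode(query), self.encode(positive), self.encode(negative)


def joint_loss(bq, bp, bn, labels, classifier, alpha=1.0, margin=0.5):
    """Triplet likelihood term plus linear classification term (illustrative form)."""
    # Inner products act as similarity scores between relaxed hash codes.
    theta_qp = (bq * bp).sum(dim=1) / 2.0
    theta_qn = (bq * bn).sum(dim=1) / 2.0
    # Negative log-likelihood that the positive ranks above the negative.
    triplet_nll = F.softplus(theta_qn - theta_qp + margin).mean()
    # Classification loss on the query branch's relaxed code.
    cls_loss = F.cross_entropy(classifier(bq), labels)
    return triplet_nll + alpha * cls_loss
```

At retrieval time, binary codes would be obtained by taking the sign of the relaxed outputs, e.g. `model.encode(images).sign()`; the second-stage refinement and the triplet selection strategy described in the abstract are not reproduced in this sketch.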
