Supervised Learning of Semantics-Preserving Hash via Deep Convolutional Neural Networks
IEEE Transactions on Pattern Analysis and Machine Intelligence (IF 20.8), Pub Date: 2017-02-09, DOI: 10.1109/tpami.2017.2666812
Huei-Fang Yang , Kevin Lin , Chu-Song Chen

This paper presents a simple yet effective supervised deep hashing approach that constructs binary hash codes from labeled data for large-scale image search. We assume that the semantic labels are governed by several latent attributes, each of which is either on or off, and that classification relies on these attributes. Based on this assumption, our approach, dubbed supervised semantics-preserving deep hashing (SSDH), constructs the hash functions as a latent layer in a deep network, and the binary codes are learned by minimizing an objective function defined over classification error and other desirable properties of the hash codes. With this design, SSDH has the nice property that classification and retrieval are unified in a single learning model. Moreover, SSDH performs joint learning of image representations, hash codes, and classification in a point-wise manner, and is thus scalable to large-scale datasets. SSDH is simple and can be realized by a slight enhancement of an existing deep architecture for classification; yet it is effective and outperforms other hashing approaches on several benchmarks and large datasets. Compared with state-of-the-art approaches, SSDH achieves higher retrieval accuracy without sacrificing classification performance.
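To make the architecture described in the abstract concrete, the following is a minimal PyTorch sketch of the general idea, not the authors' implementation: a latent sigmoid layer of K units is inserted before the classifier of a standard classification network, and the training objective combines classification error with terms that push the latent activations toward binary values. The backbone, layer sizes, loss terms, and weights below are illustrative assumptions rather than details taken from the paper.

import torch
import torch.nn as nn

class SSDHSketch(nn.Module):
    def __init__(self, feature_dim=4096, code_bits=48, num_classes=100):
        super().__init__()
        # Stand-in for a CNN feature extractor; the paper builds on an existing deep classification network.
        self.backbone = nn.Linear(feature_dim, feature_dim)
        self.latent = nn.Linear(feature_dim, code_bits)       # latent hash layer
        self.classifier = nn.Linear(code_bits, num_classes)   # classification on top of the latent codes

    def forward(self, x):
        # Sigmoid keeps latent activations in (0, 1) so they can later be thresholded into bits.
        h = torch.sigmoid(self.latent(torch.relu(self.backbone(x))))
        return self.classifier(h), h

def ssdh_loss(logits, h, labels, alpha=1.0, beta=1.0):
    # Classification error plus hypothetical regularizers encouraging desirable code properties:
    # activations close to 0 or 1, and each bit firing about half the time on average.
    cls = nn.functional.cross_entropy(logits, labels)
    binarize = -((h - 0.5) ** 2).mean()            # minimized when activations are near 0 or 1
    balance = ((h.mean(dim=0) - 0.5) ** 2).mean()  # minimized when each bit is balanced over the batch
    return cls + alpha * binarize + beta * balance

# Point-wise training: each image contributes an independent term, so plain mini-batch SGD
# scales to large datasets; no pairwise or triplet supervision is required.
model = SSDHSketch()
x = torch.randn(8, 4096)
y = torch.randint(0, 100, (8,))
logits, h = model(x)
loss = ssdh_loss(logits, h, y)
loss.backward()
codes = (h > 0.5).int()  # binary hash codes used for retrieval at test time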
