SBHA: Sensitive Binary Hashing Autoencoder for Image Retrieval
IEEE Transactions on Cybernetics (IF 9.4), Pub Date: 2023-05-11, DOI: 10.1109/tcyb.2023.3269756
Ting Wang, Su Lu, Jianjun Zhang, Xuyu Liu, Xing Tian, Wing W. Y. Ng, Wei-neng Chen

Binary hashing is an effective approach for content-based image retrieval, and learning binary codes with neural networks has attracted increasing attention in recent years. However, training hashing neural networks is difficult because of the binary constraint on hash codes. In addition, neural networks are easily affected by small perturbations of the input data. Therefore, a sensitive binary hashing autoencoder (SBHA) is proposed to handle these challenges by introducing stochastic sensitivity for image retrieval. SBHA extracts meaningful features from the original inputs and maps them onto a binary space to obtain binary hash codes directly. Unlike ordinary autoencoders, SBHA is trained by simultaneously minimizing the reconstruction error, the stochastic sensitivity error, and the binary constraint error. By minimizing the stochastic sensitivity error, SBHA reduces the sensitivity of its outputs to unseen samples that differ from training samples only by small perturbations, which helps it learn more robust features. Moreover, SBHA is trained with a binary constraint and outputs binary codes directly. To tackle the difficulty of optimizing under the binary constraint, we train SBHA with alternating optimization. Experimental results on three benchmark datasets show that SBHA is competitive and significantly outperforms state-of-the-art binary hashing methods.
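The abstract does not give the exact formulation, but the training objective it describes (reconstruction error, stochastic sensitivity error, and a binary-constraint term, optimized alternately) can be sketched roughly as follows. Everything below (PyTorch, the network sizes, loss weights, Gaussian input perturbations, and sign-based code updates) is an illustrative assumption for readability, not the authors' implementation.

# A minimal sketch of the composite loss and alternating scheme described in the abstract.
# All hyperparameters and architectural choices here are assumptions.
import torch
import torch.nn as nn

class SBHASketch(nn.Module):
    """Toy autoencoder: the encoder maps inputs to a K-bit relaxed code, the decoder reconstructs."""
    def __init__(self, in_dim=512, code_bits=48):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, code_bits), nn.Tanh(),   # relaxed codes in (-1, 1)
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_bits, 256), nn.ReLU(),
            nn.Linear(256, in_dim),
        )

    def forward(self, x):
        h = self.encoder(x)          # continuous (relaxed) hash code
        x_rec = self.decoder(h)      # reconstruction of the input
        return h, x_rec

def sbha_loss(model, x, binary_target, noise_std=0.05, w_sens=1.0, w_bin=1.0):
    """Reconstruction + stochastic sensitivity + binary-constraint terms
    (loss weights and noise scale are illustrative, not from the paper)."""
    h, x_rec = model(x)
    # 1) reconstruction error
    loss_rec = nn.functional.mse_loss(x_rec, x)
    # 2) stochastic sensitivity: codes should change little under small input perturbations
    x_perturbed = x + noise_std * torch.randn_like(x)
    h_perturbed, _ = model(x_perturbed)
    loss_sens = nn.functional.mse_loss(h_perturbed, h)
    # 3) binary constraint: pull relaxed codes toward the current binary codes
    loss_bin = nn.functional.mse_loss(h, binary_target)
    return loss_rec + w_sens * loss_sens + w_bin * loss_bin

# Alternating optimization (one possible reading of the abstract):
# step A fixes the network and updates the binary codes as the sign of the relaxed codes;
# step B fixes the binary codes and updates the network weights by gradient descent.
if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(128, 512)                    # stand-in for image features
    model = SBHASketch()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(5):
        with torch.no_grad():                    # step A: update binary codes
            B = torch.sign(model.encoder(x))
        for _ in range(10):                      # step B: update network weights
            opt.zero_grad()
            loss = sbha_loss(model, x, B)
            loss.backward()
            opt.step()
        print(f"epoch {epoch}: loss = {loss.item():.4f}")

In this reading, the binary codes used for retrieval are simply the signs of the learned relaxed codes, so no separate quantization step is needed at query time; the actual update rules in the paper may differ.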

Updated: 2023-05-11