Co-occurrence of deep convolutional features for image search
Image and Vision Computing (IF 4.7), Pub Date: 2020-03-25, DOI: 10.1016/j.imavis.2020.103909
J.I. Forcen, Miguel Pagola, Edurne Barrenechea, Humberto Bustince

Image search can be tackled using deep features from pre-trained Convolutional Neural Networks (CNN). The feature map from the last convolutional layer of a CNN encodes descriptive information from which a discriminative global descriptor can be obtained. We propose a new representation of co-occurrences of deep convolutional features to extract additional relevant information from this last convolutional layer. Combining this co-occurrence map with the feature map, we achieve an improved image representation. We present two different methods to obtain the co-occurrence representation: the first based on direct aggregation of activations, and the second based on a trainable co-occurrence representation. The image descriptors derived from our methodology improve retrieval performance on well-known image retrieval datasets, as we show in our experiments.
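The abstract does not spell out how the co-occurrence map is built, so the following is only a minimal sketch of the general idea: channels of the last convolutional feature map that are simultaneously active in a small spatial neighborhood are treated as co-occurring, their activation products are aggregated into a channel-by-channel matrix, and that matrix is concatenated with a standard pooled descriptor. The neighborhood radius, the product-based aggregation, and the sum-pooling combination are illustrative assumptions, not the authors' exact formulation.

```python
# Sketch of channel co-occurrence pooling over a last-layer CNN feature map
# of shape (C, H, W). Assumptions: activation products over a small spatial
# neighborhood, upper-triangular vectorisation, L2 normalisation.
import numpy as np

def cooccurrence_descriptor(feature_map: np.ndarray, radius: int = 1) -> np.ndarray:
    """Direct-aggregation variant: accumulate products of each channel's
    activations with the spatially shifted activations of every channel."""
    C, H, W = feature_map.shape
    cooc = np.zeros((C, C), dtype=np.float64)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(feature_map, dy, axis=1), dx, axis=2)
            # (C, H*W) @ (H*W, C) -> (C, C) accumulated co-occurrences
            cooc += feature_map.reshape(C, -1) @ shifted.reshape(C, -1).T
    vec = cooc[np.triu_indices(C)]          # keep each channel pair once
    return vec / (np.linalg.norm(vec) + 1e-12)

def global_descriptor(feature_map: np.ndarray) -> np.ndarray:
    """Combine a sum-pooled descriptor with the co-occurrence vector."""
    pooled = feature_map.sum(axis=(1, 2))
    pooled = pooled / (np.linalg.norm(pooled) + 1e-12)
    return np.concatenate([pooled, cooccurrence_descriptor(feature_map)])

# Example with a random ReLU-like feature map standing in for real CNN output.
fmap = np.maximum(np.random.randn(64, 7, 7), 0.0)
desc = global_descriptor(fmap)
print(desc.shape)   # (64 + 64*65//2,) = (2144,)
```

The paper's second, trainable variant would presumably replace the fixed activation products above with learned weights over channel pairs; those details are not given in the abstract.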

Updated: 2020-03-25