SR-GNN: Spatial Relation-Aware Graph Neural Network for Fine-Grained Image Categorization
IEEE Transactions on Image Processing (IF 10.8), Pub Date: 2022-09-14, DOI: 10.1109/tip.2022.3205215
Asish Bera, Zachary Wharton, Yonghuai Liu, Nik Bessis, Ardhendu Behera

Over the past few years, significant progress has been made in image recognition based on deep convolutional neural networks (CNNs), largely owing to such networks' strong ability to mine discriminative object pose and part information from texture and shape. These cues are often insufficient for fine-grained visual classification (FGVC), which exhibits high intra-class and low inter-class variance due to occlusion, deformation, illumination, etc. An expressive feature representation describing global structural information is therefore key to characterizing an object/scene. To this end, we propose a method that effectively captures subtle changes by aggregating context-aware features from the most relevant image regions, weighted by their importance for discriminating fine-grained categories, without requiring bounding-box and/or part annotations. Inspired by recent advances in self-attention and graph neural network (GNN) approaches, our method combines a simple yet effective relation-aware feature transformation with a context-aware attention mechanism that refines the transformed features and boosts their discriminability in an end-to-end learning process. Our model is evaluated on eight benchmark datasets covering fine-grained objects and human-object interactions, and it outperforms state-of-the-art approaches in recognition accuracy by a significant margin.
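
The abstract describes two components: a relation-aware transformation that propagates information between image regions, and a context-aware attention that weights the refined region features before classification. Below is a minimal PyTorch sketch of that general idea under stated assumptions; it is not the authors' released implementation, and all module names (RelationAwareGNN, ContextAttention, SRGNNHead) and dimensions are hypothetical illustrations.

```python
# Minimal sketch of the two ideas described in the abstract, NOT the
# authors' code: region features from a CNN backbone are treated as graph
# nodes, a relation-aware transformation propagates information between
# regions, and a context-aware attention pools the refined features.
import torch
import torch.nn as nn

class RelationAwareGNN(nn.Module):
    """One round of message passing over learned pairwise region relations."""
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, x):                       # x: (B, R, D) region features
        q, k, v = self.query(x), self.key(x), self.value(x)
        # Pairwise relation scores between all regions (a soft adjacency).
        rel = torch.softmax(q @ k.transpose(1, 2) / x.size(-1) ** 0.5, dim=-1)
        # Relation-aware transformation: aggregate neighbours, residual refine.
        return x + rel @ v

class ContextAttention(nn.Module):
    """Scores each region against a global context vector, then pools."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, x):                       # x: (B, R, D)
        ctx = x.mean(dim=1, keepdim=True).expand_as(x)   # global context
        a = torch.softmax(self.score(torch.cat([x, ctx], dim=-1)), dim=1)
        return (a * x).sum(dim=1)               # (B, D) attended descriptor

class SRGNNHead(nn.Module):
    """Classification head combining both modules, end-to-end trainable."""
    def __init__(self, dim, num_classes):
        super().__init__()
        self.gnn = RelationAwareGNN(dim)
        self.attn = ContextAttention(dim)
        self.fc = nn.Linear(dim, num_classes)

    def forward(self, regions):                 # regions: (B, R, D)
        return self.fc(self.attn(self.gnn(regions)))

# Usage: 8 regions of 512-d backbone features, 200 classes (e.g., CUB-200).
logits = SRGNNHead(512, 200)(torch.randn(4, 8, 512))
print(logits.shape)  # torch.Size([4, 200])
```

In this sketch the relation scores act as a learned graph adjacency, so no bounding-box or part annotations are needed; the regions could simply be cells of the backbone's final feature map.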

Updated: 2024-08-26