DAGCN: Dynamic and Adaptive Graph Convolutional Network for Salient Object Detection
IEEE Transactions on Neural Networks and Learning Systems (IF 10.2). Pub Date: 2022-11-14. DOI: 10.1109/tnnls.2022.3219245
Ce Li, Fenghua Liu, Zhiqiang Tian, Shaoyi Du, Yang Wu

Deep-learning-based salient object detection (SOD) has achieved significant success in recent years. SOD hinges on modeling the contextual information of a scene, and the key is how to model contextual relationships effectively. However, it is difficult both to build an effective context structure and to model it. In this article, we propose a novel SOD method called dynamic and adaptive graph convolutional network (DAGCN), which is composed of two parts: an adaptive neighborhood-wise graph convolutional network (AnwGCN) and a spatially restricted K-nearest neighbors (SRKNN) module. AnwGCN is a novel adaptive neighborhood-wise graph convolution used to model and analyze the saliency context. SRKNN constructs the topological relationships of the saliency context by measuring non-Euclidean distances within a limited spatial range. The proposed method thus casts contextual relationships as a topological graph by measuring feature distances in non-Euclidean space and performs comparative modeling of the contextual information through AnwGCN. Because the model learns its metric from the features, it can adapt to the latent distribution of the data and describe feature relationships more accurately. With convolutional kernels adapted to each neighborhood, the model also gains structure-learning ability, so the graph convolution process can adapt to different graph data. Experimental results demonstrate that our solution achieves satisfactory performance on six widely used datasets and can also effectively detect camouflaged objects. Our code will be available at: https://github.com/CSIM-LUT/DAGCN.git.
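The abstract describes the two components only in prose: SRKNN builds the context graph by connecting each node to its k nearest neighbors in feature (non-Euclidean) space, restricted to a limited spatial range, and AnwGCN applies a graph convolution whose effective kernel adapts to each neighborhood. The sketch below is a minimal, hypothetical PyTorch rendering of those two ideas; the names (srknn_graph, AnwConv), the edge-weighting MLP, and all hyperparameters are illustrative assumptions, not the authors' implementation (which is in the linked repository).

```python
# Minimal sketch of the two ideas named in the abstract, under stated assumptions.
# Not the paper's DAGCN; see https://github.com/CSIM-LUT/DAGCN.git for the real code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def srknn_graph(feats, coords, k=8, radius=2.0):
    """Spatially restricted KNN: link each node to its k nearest neighbors
    in feature (non-Euclidean) space, considering only nodes whose spatial
    distance is within `radius` (the 'limited range')."""
    n = feats.size(0)
    feat_dist = torch.cdist(feats, feats)             # (n, n) feature-space distances
    spat_dist = torch.cdist(coords, coords)           # (n, n) spatial distances
    feat_dist = feat_dist.masked_fill(spat_dist > radius, float('inf'))
    feat_dist.fill_diagonal_(float('inf'))            # exclude self from top-k
    idx = feat_dist.topk(k, largest=False).indices    # (n, k); assumes k in-range candidates
    adj = torch.zeros(n, n, device=feats.device)
    adj.scatter_(1, idx, 1.0)
    return adj

class AnwConv(nn.Module):
    """A generic neighborhood-adaptive graph convolution: a small MLP predicts
    per-edge weights from node-pair features, so the aggregation kernel adapts
    to each neighborhood (a stand-in for the paper's AnwGCN)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.edge_mlp = nn.Linear(2 * in_dim, 1)
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        n = x.size(0)
        pair = torch.cat([x.unsqueeze(1).expand(n, n, -1),
                          x.unsqueeze(0).expand(n, n, -1)], dim=-1)
        w = self.edge_mlp(pair).squeeze(-1)           # learned edge weights (n, n)
        w = w.masked_fill(adj == 0, float('-inf'))
        w = torch.softmax(w, dim=-1)                  # normalize over each neighborhood
        w = torch.nan_to_num(w)                       # guard nodes with no neighbors
        return F.relu(self.proj(w @ x))

# Example: 16 region nodes with 32-D features and 2-D spatial centers
feats = torch.randn(16, 32)
coords = torch.rand(16, 2) * 4
adj = srknn_graph(feats, coords, k=4, radius=2.0)
out = AnwConv(32, 64)(feats, adj)                     # (16, 64) context-refined features
```

Restricting the candidate set by spatial distance before the feature-space top-k is what keeps the graph local, while the learned edge weights let the same layer behave differently on different graph structures, matching the adaptivity claims in the abstract.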

Updated: 2024-08-26