Attentive feature integration network for detecting salient objects in images
Neurocomputing (IF 5.5), Pub Date: 2020-10-01, DOI: 10.1016/j.neucom.2020.05.083
Qing Zhang, Wenzhao Cui, Yanjiao Shi, Xueqin Zhang, Yunxiang Liu

Abstract Benefiting from the development of convolutional neural networks, salient object detection has made a qualitative leap in performance. In recent years, most deep-learning-based methods utilize multi-level features and obtain the inferred saliency map in a coarse-to-fine manner. However, how to learn and represent powerful features remains a challenge. In this paper, we propose a novel FCN-like approach named attentive feature integration network (AFINet) for pixel-wise salient object detection, which produces saliency maps with explicit boundaries and uniformly highlighted regions. Specifically, it adopts a feature enhancement module (FEM) to extract rich, enhanced features from the backbone network. A feature discrimination module (FDM) is designed to utilize the predicted saliency map generated by a deeper layer to help the shallower layer learn useful and discriminative attentive features. Moreover, we introduce saliency information from the deeper layer to the shallower one in the saliency prediction module (SPM), which helps shallow side outputs accurately locate salient regions. In addition, we design a saliency fusion module (SFM) to integrate the different side outputs and exploit multi-level features. Finally, a fully connected CRF scheme can be incorporated to obtain saliency results with higher accuracy. Both qualitative and quantitative comparisons and evaluations conducted on five public benchmark datasets demonstrate that our proposed approach compares favorably against 17 state-of-the-art approaches.
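As a rough illustration of the coarse-to-fine attention idea described above (not the authors' released implementation), the following PyTorch sketch shows how a deeper stage's predicted saliency map could gate the features of a shallower stage, in the spirit of the FDM/SPM described in the abstract. The module name AttentiveGate, the channel sizes, and the exact layer layout are assumptions made for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentiveGate(nn.Module):
    """Hypothetical sketch: the coarse saliency prediction from a deeper stage
    is upsampled and used as spatial attention over shallower features, so the
    shallow stage focuses on salient regions while refining their boundaries."""

    def __init__(self, shallow_channels: int, out_channels: int = 64):
        super().__init__()
        self.reduce = nn.Conv2d(shallow_channels, out_channels, kernel_size=3, padding=1)
        self.refine = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
        self.predict = nn.Conv2d(out_channels, 1, kernel_size=1)  # side-output saliency logits

    def forward(self, shallow_feat: torch.Tensor, deeper_saliency_logits: torch.Tensor):
        # Upsample the coarse saliency prediction to the shallow feature resolution
        # and squash it into a [0, 1] spatial attention map.
        attn = torch.sigmoid(
            F.interpolate(deeper_saliency_logits, size=shallow_feat.shape[2:],
                          mode="bilinear", align_corners=False)
        )
        feat = F.relu(self.reduce(shallow_feat))
        # Gate the shallow features with the coarse saliency map; keep a residual
        # path so non-salient context is not discarded entirely.
        feat = F.relu(self.refine(feat * attn + feat))
        return self.predict(feat)  # refined side-output for this stage


if __name__ == "__main__":
    # Toy shapes: a shallow stage (128 channels, 88x88) refined with a coarse
    # 11x11 saliency prediction from a deeper stage.
    shallow = torch.randn(1, 128, 88, 88)
    coarse = torch.randn(1, 1, 11, 11)
    print(AttentiveGate(128)(shallow, coarse).shape)  # torch.Size([1, 1, 88, 88])
```

In this reading, each side output produced this way would then be combined by an SFM-like fusion step, with the fully connected CRF applied only as an optional post-processing refinement.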
