AFLNet: Adversarial focal loss network for RGB-D salient object detection
Signal Processing: Image Communication (IF 3.5), Pub Date: 2021-03-13, DOI: 10.1016/j.image.2021.116224
Xiaoli Zhao, Zheng Chen, Jenq-Neng Hwang, Xiwu Shang

Because salient objects usually occupy only a small fraction of a scene, class imbalance is a common problem in salient object detection (SOD). To address this issue and produce consistent salient objects, we propose an adversarial focal loss network built on an improved generative adversarial network for RGB-D SOD (called AFLNet). Color and depth branches constitute the generator that produces the saliency map, while an adversarial branch with high-order potentials, rather than a pixel-wise loss function, refines the generator's output to capture the contextual information of objects. We derive an adversarial focal loss function to overcome the foreground–background class imbalance. To fully fuse the high-level features of the color and depth cues, an inception module is adopted in the deep layers. We conduct extensive experiments with the proposed model and its variants and compare them against state-of-the-art methods. Quantitative and qualitative results show that the proposed approach improves the accuracy of salient object detection and yields consistent objects.
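The abstract does not give the exact form of the adversarial focal loss, but the class-imbalance idea it builds on is the standard focal loss of Lin et al., which down-weights easy (mostly background) pixels. The following is a minimal, hypothetical PyTorch sketch of a per-pixel binary focal loss applied to a saliency map, shown only to illustrate that mechanism; it is not the paper's loss.

```python
import torch
import torch.nn.functional as F

def binary_focal_loss(pred_logits, target, alpha=0.25, gamma=2.0):
    """Standard per-pixel binary focal loss (illustrative only).

    NOTE: this is not AFLNet's adversarial focal loss, whose exact form is
    not given in the abstract; it only shows how focal loss down-weights
    easy background pixels to counter foreground-background imbalance.
    """
    prob = torch.sigmoid(pred_logits)
    # p_t: probability assigned to the true class at each pixel
    p_t = prob * target + (1.0 - prob) * (1.0 - target)
    alpha_t = alpha * target + (1.0 - alpha) * (1.0 - target)
    ce = F.binary_cross_entropy_with_logits(pred_logits, target, reduction="none")
    return (alpha_t * (1.0 - p_t) ** gamma * ce).mean()

# Toy usage: a predicted saliency logit map and a sparse binary ground truth
pred = torch.randn(2, 1, 64, 64)                # generator output (logits)
gt = (torch.rand(2, 1, 64, 64) > 0.9).float()   # few salient pixels
print(binary_focal_loss(pred, gt))
```

With `gamma > 0`, well-classified background pixels contribute little to the loss, so the many background pixels no longer dominate training over the few salient ones.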




Updated: 2021-03-18