Improved Saliency Detection in RGB-D Images Using Two-Phase Depth Estimation and Selective Deep Fusion
IEEE Transactions on Image Processing (IF 10.6) Pub Date: 2020-01-30, DOI: 10.1109/tip.2020.2968250
Chenglizhao Chen , Jipeng Wei , Chong Peng , Weizhong Zhang , Hong Qin

In RGB-D saliency detection, depth information plays a critical role in distinguishing salient objects or foregrounds from cluttered backgrounds. As the complementary component to color information, depth quality directly dictates the subsequent saliency detection performance. However, due to artifacts and the limitations of depth acquisition devices, the quality of the obtained depth varies tremendously across scenarios. Consequently, conventional selective fusion-based RGB-D saliency detection methods may degrade when salient objects exhibit both low color contrast and low depth quality. To solve this problem, we make an initial attempt to estimate additional high-quality depth information, denoted Depth+. Serving as a complement to the original depth, Depth+ is fed into our newly designed selective fusion network to boost detection performance. To this end, we first retrieve a small group of images similar to the given input, and then build inter-image, nonlocal correspondences accordingly. Using these inter-image correspondences, the overall depth can be coarsely estimated with our newly designed depth-transferring strategy. Next, we build fine-grained, object-level correspondences, coupled with a saliency prior, to further improve the quality of the previously estimated depth. Compared to the original depth, the newly estimated Depth+ is potentially more informative for improving detection. Finally, we feed both the original depth and the newly estimated Depth+ into our selective deep fusion network, whose key novelty is achieving an optimal complementary balance between the two to make better decisions toward improving saliency boundaries.
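The coarse estimation step above transfers depth from retrieved similar images via nonlocal correspondences. The paper's actual strategy is more involved; the following is only a minimal nearest-neighbour sketch of the general depth-transfer idea, with hypothetical names (`transfer_depth`, per-patch feature arrays) not taken from the paper: each patch of the input image borrows the depth of its best-matching patch among the retrieved exemplars.

```python
import numpy as np

def transfer_depth(query_feats, exemplar_feats, exemplar_depths):
    """Hypothetical sketch of patch-level depth transfer.

    query_feats    : (Nq, D) per-patch features of the input image
    exemplar_feats : (Ne, D) per-patch features of the retrieved similar images
    exemplar_depths: (Ne,)   depth value attached to each exemplar patch
    Returns a (Nq,) coarse depth, copied from the best-matching exemplar patch.
    """
    # Pairwise squared distances between query and exemplar features: (Nq, Ne).
    d2 = ((query_feats[:, None, :] - exemplar_feats[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)          # index of best exemplar per query patch
    return exemplar_depths[nearest]      # transfer the matched depth values

# Toy usage: two query patches, two exemplar patches.
q = np.array([[0.0, 0.0], [1.0, 1.0]])
ex_f = np.array([[1.0, 1.0], [0.0, 0.0]])
ex_d = np.array([5.0, 2.0])
coarse = transfer_depth(q, ex_f, ex_d)   # → array([2., 5.])
```

In the paper this coarse transfer is only the first phase; a second, object-level phase guided by a saliency prior refines the result into Depth+.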

Updated: 2020-04-22