Adaptive Context-Aware Multi-Modal Network for Depth Completion
IEEE Transactions on Image Processing (IF 10.6), Pub Date: 2021-05-25, DOI: 10.1109/tip.2021.3079821
Shanshan Zhao, Mingming Gong, Huan Fu, Dacheng Tao

Depth completion aims to recover a dense depth map from sparse depth data and the corresponding single RGB image. The observed pixels provide significant guidance for recovering the depth of the unobserved pixels. However, due to the sparsity of the depth data, the standard convolution operation used by most existing methods is not effective at modeling the observed contexts with depth values. To address this issue, we propose to adopt graph propagation to capture the observed spatial contexts. Specifically, we first construct multiple graphs at different scales from the observed pixels. Since the graph structure varies from sample to sample, we then apply an attention mechanism to the propagation, which encourages the network to model the contextual information adaptively. Furthermore, considering the multi-modality of the input data, we apply graph propagation to the two modalities separately to extract multi-modal representations. Finally, we introduce a symmetric gated fusion strategy to exploit the extracted multi-modal features effectively. The proposed strategy preserves the original information of one modality while absorbing complementary information from the other by learning adaptive gating weights. Our model, named Adaptive Context-Aware Multi-Modal Network (ACMNet), achieves state-of-the-art performance on two benchmarks, i.e., KITTI and NYU-v2, while having fewer parameters than the latest models. Our code is available at https://github.com/sshan-zhao/ACMNet.
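The symmetric gated fusion idea described above can be illustrated with a minimal PyTorch sketch: each modality branch keeps its own features and adds in gated complementary features from the other branch. The module name, layer choices, and tensor shapes below are illustrative assumptions rather than ACMNet's actual implementation; see https://github.com/sshan-zhao/ACMNet for the authors' code.

```python
import torch
import torch.nn as nn

class SymmetricGatedFusion(nn.Module):
    """Hypothetical symmetric gated fusion between depth and RGB feature maps."""

    def __init__(self, channels: int):
        super().__init__()
        # One gating branch per direction; sigmoid weights are conditioned on both modalities.
        self.gate_depth = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        self.gate_rgb = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_depth: torch.Tensor, feat_rgb: torch.Tensor):
        joint = torch.cat([feat_depth, feat_rgb], dim=1)
        # Each branch preserves its own features and absorbs gated
        # complementary information from the other modality.
        fused_depth = feat_depth + self.gate_depth(joint) * feat_rgb
        fused_rgb = feat_rgb + self.gate_rgb(joint) * feat_depth
        return fused_depth, fused_rgb

if __name__ == "__main__":
    # Usage example with random feature maps (batch 2, 64 channels, 64x64 resolution).
    fusion = SymmetricGatedFusion(channels=64)
    d, rgb = torch.randn(2, 64, 64, 64), torch.randn(2, 64, 64, 64)
    out_d, out_rgb = fusion(d, rgb)
    print(out_d.shape, out_rgb.shape)
```

Because the gating weights are learned per pixel and per channel, each branch can decide where the other modality is informative (e.g. RGB edges near depth discontinuities) and where it should rely on its own features.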

Updated: 2021-06-01