Coupled Real-Synthetic Domain Adaptation for Real-World Deep Depth Enhancement.
IEEE Transactions on Image Processing (IF 10.8) | Pub Date: 2020-04-23 | DOI: 10.1109/tip.2020.2988574
Xiao Gu, Yao Guo, Fani Deligianni, Guang-Zhong Yang

Advances in depth sensing technologies have allowed simultaneous acquisition of both color and depth data under different environments. However, most depth sensors have lower resolution than the associated color channels, and such a mismatch can affect applications that require accurate depth recovery. Existing depth enhancement methods rely on simplistic noise models and generalize poorly under real-world conditions. In this paper, a coupled real-synthetic domain adaptation method is proposed, which enables domain transfer between high-quality depth simulators and real depth camera data for super-resolution depth recovery. The method first applies realistic degradation to synthetic depth images, and then enhances the degraded depth data to high quality with a color-guided sub-network. The key advantage of this work is that it generalizes well to real-world datasets without further training or fine-tuning. Detailed quantitative and qualitative results are presented, demonstrating that the proposed method outperforms previous methods that were fine-tuned on the specific datasets.
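The two-stage pipeline described above — realistic degradation of clean synthetic depth, followed by color-guided enhancement — can be illustrated with a minimal classical sketch. This is not the paper's learned networks: the degradation step below (downsampling, noise, holes) and the enhancement step (a joint bilateral upsampler, a standard color-guided technique) are simple hand-crafted stand-ins chosen only to make the data flow concrete; all function names and parameters are illustrative assumptions.

```python
import numpy as np

def degrade_depth(depth, scale=4, noise_std=0.01, hole_frac=0.05, rng=None):
    """Simulate sensor-like degradation of a clean synthetic depth map:
    downsampling, additive noise, and missing-value holes (zeros)."""
    rng = np.random.default_rng(0) if rng is None else rng
    low = depth[::scale, ::scale].copy()            # naive downsampling
    low += rng.normal(0.0, noise_std, low.shape)    # sensor noise
    low[rng.random(low.shape) < hole_frac] = 0.0    # zero = missing reading
    return low

def joint_bilateral_upsample(low_depth, color, scale=4, sigma_s=2.0, sigma_r=0.1):
    """Color-guided upsampling: each high-res pixel averages nearby low-res
    depth samples, weighted by spatial distance and color similarity, so
    depth discontinuities snap to color edges."""
    H, W = color.shape[:2]
    h, w = low_depth.shape
    out = np.zeros((H, W))
    rad = 2  # low-res neighborhood radius
    for y in range(H):
        for x in range(W):
            ly, lx = y // scale, x // scale
            num = den = 0.0
            for dy in range(-rad, rad + 1):
                for dx in range(-rad, rad + 1):
                    ny, nx = ly + dy, lx + dx
                    if not (0 <= ny < h and 0 <= nx < w):
                        continue
                    d = low_depth[ny, nx]
                    if d == 0.0:                    # skip holes
                        continue
                    ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                    dc = color[y, x] - color[min(ny * scale, H - 1),
                                             min(nx * scale, W - 1)]
                    wr = np.exp(-np.sum(dc * dc) / (2 * sigma_r ** 2))
                    num += ws * wr * d
                    den += ws * wr
            out[y, x] = num / den if den > 0 else 0.0
    return out
```

A quick usage: degrade a synthetic step-edge depth map whose discontinuity coincides with a color edge, then recover a full-resolution depth map whose edge follows the color guidance rather than the blocky low-resolution grid.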
