Fine Tuning of Deep Contexts Toward Improved Perceptual Quality of In-Paintings
IEEE Transactions on Cybernetics ( IF 9.4 ) Pub Date : 2021-09-08 , DOI: 10.1109/tcyb.2021.3105398
Qinglong Chang 1 , Kwok-Wai Hung 2 , Jianmin Jiang 1

Over recent years, a number of deep learning approaches have been successfully introduced to tackle the problem of image in-painting and achieve better perceptual effects. However, obvious hole-edge artifacts still exist in these deep learning-based approaches, which need to be rectified before they become useful for practical applications. In this article, we propose an iteration-driven in-painting approach that combines a deep context model with the backpropagation mechanism to fine-tune the learning-based in-painting process and, hence, achieves further improvement over the existing state of the art. Our iterative approach fine-tunes the image generated by a pretrained deep context model via backpropagation using a weighted context loss. Extensive experiments on publicly available test sets, including the CelebA, Paris Streets, and PASCAL VOC 2012 datasets, show that our proposed method achieves better visual perceptual quality in terms of hole-edge artifacts compared with state-of-the-art in-painting methods using various context models.
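The core mechanism described in the abstract — refining a pretrained context model's output by backpropagating a weighted context loss, with extra weight near the hole edge — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the linear "generator", the image size, the hole location, and the edge weights below are all toy assumptions chosen only to show the iterative fine-tuning loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pretrained deep context model: a fixed linear
# generator G(z) = W @ z mapping an 8-D latent code to a 64-pixel image.
n_pix, n_latent = 64, 8
W = rng.normal(size=(n_pix, n_latent))

y = rng.normal(size=n_pix)                # observed image (flattened)
mask = np.ones(n_pix)
mask[24:40] = 0.0                         # 0 inside the hole, 1 in the context
# Edge-emphasizing weights (assumption): known pixels adjacent to the
# hole boundary contribute more to the context loss.
weights = np.ones(n_pix)
weights[20:24] = weights[40:44] = 4.0

def context_loss(z):
    """Weighted context loss: mismatch on known pixels only."""
    r = (W @ z - y) * mask * weights
    return float(r @ r)

z = rng.normal(size=n_latent)             # initial guess from the "model"
lr = 1e-3
losses = [context_loss(z)]
for _ in range(300):
    # Backpropagate the weighted context loss to the latent code:
    # d/dz ||(Wz - y) * m * w||^2 = 2 W^T ((Wz - y) * m^2 * w^2)
    grad = 2.0 * W.T @ ((W @ z - y) * mask**2 * weights**2)
    z -= lr * grad
    losses.append(context_loss(z))

inpainted = W @ z  # hole pixels are filled by the fine-tuned generation
```

After the loop, the context loss has decreased, meaning the generated image agrees with the known pixels (especially near the hole edge, where the weights are largest) while the generator fills in the hole; the paper's method applies this idea with a deep context model in place of the toy linear map.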

Updated: 2021-09-08