Deep Image Prior
International Journal of Computer Vision (IF 19.5) | Pub Date: 2020-03-04 | DOI: 10.1007/s11263-020-01303-4
Dmitry Ulyanov, Andrea Vedaldi, Victor Lempitsky

Deep convolutional networks have become a popular tool for image generation and restoration. Generally, their excellent performance is imputed to their ability to learn realistic image priors from a large number of example images. In this paper, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning. To demonstrate this, we show that a randomly initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, super-resolution, and inpainting. Furthermore, the same prior can be used to invert deep neural representations to diagnose them, and to restore images based on flash/no-flash input pairs. Apart from its diverse applications, our approach highlights the inductive bias captured by standard generator network architectures. It also bridges the gap between two very popular families of image restoration methods: learning-based methods using deep convolutional networks and learning-free methods based on handcrafted image priors such as self-similarity. (Code and supplementary material are available at https://dmitryulyanov.github.io/deep_image_prior.)
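The core recipe the abstract describes is simple: fix a random input, randomly initialize a small convolutional generator, and fit its parameters to the *corrupted* observation by gradient descent; the network's structure fits the signal before the noise. The following is a minimal, purely illustrative 1-D sketch of that loop in numpy (a tiny two-channel conv net on a toy noisy signal, with finite-difference gradients to keep the code short) — it is not the paper's implementation, which uses large U-Net-style architectures on 2-D images:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "image": a smooth signal corrupted by additive noise.
N = 128
t = np.linspace(0, 4 * np.pi, N)
x_clean = np.sin(t)
x_noisy = x_clean + 0.3 * rng.standard_normal(N)

# Fixed random network input, as in the deep-image-prior setup.
z = rng.standard_normal(N)

# A tiny conv generator: 2 hidden channels, ReLU, linear read-out.
# (Far smaller than the networks used in the paper; purely illustrative.)
K, W = 2, 5  # hidden channels, filter width

def unpack(p):
    f = p[: K * W].reshape(K, W)        # first-layer filters
    b = p[K * W : K * W + K]            # first-layer biases
    v = p[K * W + K : K * W + 2 * K]    # read-out weights
    c = p[-1]                           # read-out bias
    return f, b, v, c

def forward(p):
    f, b, v, c = unpack(p)
    h = np.stack([np.maximum(0.0, np.convolve(z, f[k], mode="same") + b[k])
                  for k in range(K)])
    return h.T @ v + c                  # shape (N,)

def loss(p):
    # Fit target is the *noisy* observation; the network never sees x_clean.
    return np.mean((forward(p) - x_noisy) ** 2)

# Plain gradient descent; gradients estimated by central finite differences
# (feasible here because the sketch has only K*W + 2K + 1 parameters).
p = 0.1 * rng.standard_normal(K * W + 2 * K + 1)
eps, lr = 1e-4, 0.02
loss_start = loss(p)
for _ in range(300):
    g = np.zeros_like(p)
    for i in range(p.size):
        d = np.zeros_like(p)
        d[i] = eps
        g[i] = (loss(p + d) - loss(p - d)) / (2 * eps)
    p -= lr * g
loss_end = loss(p)
print(f"fit to noisy target: {loss_start:.3f} -> {loss_end:.3f}")
```

In the paper's full setup the same loop runs with a deep 2-D generator and automatic differentiation, and restoration quality comes from early stopping: the reconstruction error against the clean image drops before the network begins to memorize the noise.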

Updated: 2020-03-04