Contiguous Loss for Motion-Based, Non-Aligned Image Deblurring
Symmetry (IF 2.2) Pub Date: 2021-04-09, DOI: 10.3390/sym13040630
Wenjia Niu, Kewen Xia, Yongke Pan

In general dynamic scenes, blurring results from the motion of multiple objects, camera shake, or scene depth variations. As the inverse process, deblurring extracts a sharp video sequence from the information contained in a single blurry image, which is itself an ill-posed computer vision problem. To reconstruct these sharp frames, traditional methods build several convolutional neural networks (CNNs) to generate the different frames, resulting in expensive computation. To overcome this problem, an innovative framework is proposed that can generate several sharp frames from a single CNN model. The motion-blurred image is fed into the framework, its spatio-temporal information is encoded via several convolutional and pooling layers, and the model outputs several sharp frames. Moreover, a blurry image has no one-to-one correspondence with any single sharp video sequence, since different video sequences can produce similar blurry images, so neither the traditional pixel2pixel loss nor the perceptual loss is suitable for non-aligned data. To alleviate this problem and model the blurring process, a novel contiguous blurry loss function is proposed that measures the loss on non-aligned data. Experimental results show that the proposed model combined with the contiguous blurry loss generates sharp video sequences efficiently and outperforms state-of-the-art methods.
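The abstract does not spell out the architecture or the loss, so the following PyTorch sketch is only illustrative: the network `MultiFrameDeblurNet`, the function `contiguous_blurry_loss`, the choice of seven output frames, and the modelling of the blur as the temporal mean of the predicted sharp frames are all assumptions for illustration, not the authors' published formulation. The sketch captures the two ideas the abstract describes: a single CNN that encodes one blurry image with convolutional and pooling layers and decodes several sharp frames, and an alignment-free loss that scores the predicted sequence by re-blurring it instead of comparing it pixel-wise against one particular ground-truth sequence.

```python
# Minimal sketch under the assumptions stated above (PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiFrameDeblurNet(nn.Module):
    """One CNN mapping a single blurry image to N sharp frames.

    Hypothetical architecture: a small conv/pool encoder followed by an
    upsampling decoder whose final layer emits N*3 channels, reshaped into
    N RGB frames.
    """
    def __init__(self, n_frames: int = 7):
        super().__init__()
        self.n_frames = n_frames
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),  # spatio-temporal encoding via conv + pooling
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, n_frames * 3, 3, padding=1),
        )

    def forward(self, blurry: torch.Tensor) -> torch.Tensor:
        b, _, h, w = blurry.shape
        out = self.decoder(self.encoder(blurry))
        return out.view(b, self.n_frames, 3, h, w)  # (B, N, 3, H, W)

def contiguous_blurry_loss(pred_frames: torch.Tensor,
                           blurry: torch.Tensor,
                           smooth_weight: float = 0.1) -> torch.Tensor:
    """Alignment-free loss: re-blur the prediction and compare with the input.

    Term 1 models the blurring process (blur ~ temporal mean of the sharp
    frames), so no one-to-one pairing with a ground-truth sequence is needed.
    Term 2 encourages temporal contiguity between neighbouring frames.
    Both terms are illustrative assumptions, not the published formulation.
    """
    reblurred = pred_frames.mean(dim=1)  # (B, 3, H, W)
    reblur_term = F.l1_loss(reblurred, blurry)
    contiguity_term = (pred_frames[:, 1:] - pred_frames[:, :-1]).abs().mean()
    return reblur_term + smooth_weight * contiguity_term

# Usage: one blurry image in, seven candidate sharp frames out.
model = MultiFrameDeblurNet(n_frames=7)
blurry = torch.rand(1, 3, 64, 64)
frames = model(blurry)
loss = contiguous_blurry_loss(frames, blurry)
loss.backward()
```

Because the re-blurring term compares against the input blurry image itself, any sharp sequence consistent with the observed blur scores well, which is one plausible way of handling the non-aligned data the abstract mentions.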

Updated: 2021-04-09