Spatio-Temporal Learning for Video Deblurring based on Two-Stream Generative Adversarial Network
Neural Processing Letters (IF 3.1) Pub Date: 2021-04-24, DOI: 10.1007/s11063-021-10520-y
Liyao Song , Quan Wang , Haiwei Li , Jiancun Fan , Bingliang Hu

Video deblurring has achieved excellent results using deep learning approaches. Capturing the dynamic spatio-temporal information in videos is crucial for deblurring. In this paper, we propose a two-stream DeblurGAN that combines a 3D stream with a 2D stream for deblurring. The 3D convolution provides spatial and temporal invariance to restore the foreground of frames, while the 2D convolution is sufficient for spatial features, given a relatively consistent background. Our model thus exploits the processing power of the 3D stream on the foreground, which usually contains more dynamic motion blur, and the simplicity of the 2D stream on the mostly consistent background, combining the advantages of both 3D and 2D convolution. We then use the two-stream model as the generator and adopt adversarial learning. We evaluate our model on the VideoDeblurring and GOPRO datasets and compare it with the other methods listed. Our method outperforms them in Peak Signal-to-Noise Ratio (PSNR), and in particular performs well on foregrounds with obvious motion blur.
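The two-stream generator described above can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' implementation: the layer widths, depths, and the concatenation-based fusion of the two streams are assumptions, chosen only to show how a 3D-convolution stream over a frame stack and a 2D-convolution stream over the center frame can be combined into one generator.

```python
# Hypothetical sketch of a two-stream generator (assumes PyTorch).
# Layer sizes and the fusion scheme are illustrative, not from the paper.
import torch
import torch.nn as nn


class TwoStreamGenerator(nn.Module):
    def __init__(self, channels=3, feat=16):
        super().__init__()
        # 3D stream: spatio-temporal features for the dynamic foreground
        self.stream3d = nn.Sequential(
            nn.Conv3d(channels, feat, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(feat, feat, kernel_size=3, padding=1),
        )
        # 2D stream: spatial features for the mostly consistent background
        self.stream2d = nn.Sequential(
            nn.Conv2d(channels, feat, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, kernel_size=3, padding=1),
        )
        # Fuse both streams and predict the restored center frame
        self.fuse = nn.Conv2d(2 * feat, channels, kernel_size=3, padding=1)

    def forward(self, clip):
        # clip: (B, C, T, H, W) stack of consecutive blurry frames
        t_mid = clip.shape[2] // 2
        f3d = self.stream3d(clip)[:, :, t_mid]   # temporal features at center frame
        f2d = self.stream2d(clip[:, :, t_mid])   # spatial features of center frame
        return self.fuse(torch.cat([f3d, f2d], dim=1))


gen = TwoStreamGenerator()
out = gen(torch.randn(1, 3, 5, 64, 64))  # batch of one 5-frame RGB clip
print(out.shape)  # torch.Size([1, 3, 64, 64])
```

In the adversarial setting, this module plays the generator role: a discriminator would score the restored frame against the sharp ground truth, as in the DeblurGAN framework the paper builds on.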

Updated: 2021-04-24