Interpreting CNN for Low Complexity Learned Sub-pixel Motion Compensation in Video Coding
arXiv - CS - Multimedia Pub Date : 2020-06-11 , DOI: arxiv-2006.06392 Luka Murn, Saverio Blasi, Alan F. Smeaton, Noel E. O'Connor, Marta Mrak
Deep learning has shown great potential in image and video compression tasks.
However, it brings bit savings at the cost of significant increases in coding
complexity, which limits its potential for implementation within practical
applications. In this paper, a novel neural network-based tool is presented
which improves the interpolation of reference samples needed for fractional
precision motion compensation. Contrary to previous efforts, the proposed
approach focuses on complexity reduction achieved by interpreting the
interpolation filters learned by the networks. When the approach is implemented
in the Versatile Video Coding (VVC) test model, up to 4.5% BD-rate saving for
individual sequences is achieved compared with the baseline VVC, while the
complexity of the learned interpolation is significantly reduced compared to
applying the full neural network.
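The complexity reduction rests on a general property of linear networks: a CNN with no nonlinear activations collapses, layer by layer, into a single equivalent convolution filter, which can then be applied at a cost comparable to the hand-designed interpolation filters already in VVC. A minimal sketch of that idea, using hypothetical two-layer 1-D kernels (`k1`, `k2` are illustrative stand-ins, not the paper's trained weights):

```python
import numpy as np

# Hypothetical learned taps for a two-layer linear "network"
# (no activations between layers) -- illustrative values only.
k1 = np.array([-0.1, 0.6, 0.6, -0.1])   # layer-1 kernel
k2 = np.array([0.25, 0.5, 0.25])        # layer-2 kernel

# "Interpreting" the network: because convolution is associative,
# the two layers compose into one equivalent filter offline.
equivalent_filter = np.convolve(k1, k2)

# A row of integer-position reference samples to interpolate.
reference = np.random.default_rng(0).standard_normal(64)

# Full network: two sequential convolutions at inference time.
full_net_output = np.convolve(np.convolve(reference, k1), k2)

# Interpreted filter: a single convolution, same result.
single_filter_output = np.convolve(reference, equivalent_filter)

assert np.allclose(full_net_output, single_filter_output)
```

The single precomputed filter produces identical fractional-position predictions while replacing the per-sample network evaluation with one fixed-tap convolution, which is the source of the decoder-side complexity saving.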
Updated: 2020-06-12