Plug-and-Play video reconstruction using sparse 3D transform-domain block matching
Machine Vision and Applications (IF 2.4), Pub Date: 2021-04-27, DOI: 10.1007/s00138-021-01201-w
Vahid Khorasani Ghassab, Nizar Bouguila

In this paper, we propose a novel video reconstruction methodology built on a generalization of the alternating direction method of multipliers (ADMM) known as Plug-and-Play. The motivation of the proposed technique is to improve the visual quality of the reconstructed video frames and to reduce the reconstruction error relative to earlier video reconstruction methods. The proposed algorithm is an end-to-end embedding framework that integrates video reconstruction techniques with denoising methods. Specifically, we use compressive sensing (CS)-based Gaussian mixture models (GMM) to solve the inversion sub-problem of the proposed framework, modeling spatiotemporal video patches for reconstruction. Sparse 3D transform-domain block matching is applied as the denoiser of the proposed methodology to remove the remaining artifacts and noise in the reconstructed video frames. By considering both online and offline CS-based GMM frameworks, we obtain two forms of GMM-based embedded video reconstruction algorithms. The results are compared with the CS-based GMM algorithm, GAP, TwIST and KSVD-OMP on the same datasets, using PSNR, SSIM, VSNR, WSNR, NQM, UQI, VIF and IFC as evaluation metrics. Experiments show that the proposed online and offline GMM-based Plug-and-Play algorithms outperform their conventional CS-based online and offline GMM counterparts as well as other state-of-the-art techniques. Averaged over all datasets, the proposed online method achieves a PSNR of 29.84 dB and an SSIM of 0.891, higher than the results of the other techniques.
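The core of the method is the Plug-and-Play generalization of ADMM, which alternates a measurement-consistent inversion step with a plug-in denoising step. The following Python sketch is a rough illustration only, not the authors' implementation: the names A, At, denoise and rho are assumptions introduced here, the paper's CS-based GMM inversion is replaced by a single gradient step on the data-fidelity term, and a BM3D/VBM3D-style routine would be supplied as the denoise callable.

import numpy as np

def pnp_admm(y, A, At, denoise, rho=1.0, n_iter=50):
    """Generic Plug-and-Play ADMM loop with a pluggable denoiser (illustrative sketch)."""
    x = At(y)                # initial estimate from the adjoint of the sensing operator
    v = x.copy()             # auxiliary (denoised) variable from the variable splitting
    u = np.zeros_like(x)     # scaled dual variable

    for _ in range(n_iter):
        # Inversion step: enforce consistency with the measurements y.
        # The paper solves this sub-problem with a CS-based GMM patch model;
        # a single gradient step on the data-fidelity term is used here as a stand-in.
        z = v - u
        x = z - At(A(z) - y) / rho

        # Denoising step: any off-the-shelf denoiser can be plugged in, e.g.
        # sparse 3D transform-domain block matching (BM3D / VBM3D).
        v = denoise(x + u, sigma=np.sqrt(1.0 / rho))

        # Dual update of the scaled multiplier.
        u = u + x - v

    return v

In practice, denoise would wrap a block-matching 3D transform-domain denoiser and A / At would implement the video compressive-sensing measurement operator and its adjoint; the choice of rho and the convergence behavior follow the standard Plug-and-Play ADMM formulation.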




Updated: 2021-04-28