ChipQA: No-Reference Video Quality Prediction via Space-Time Chips
IEEE Transactions on Image Processing (IF 10.8), Pub Date: 2021-09-17, DOI: 10.1109/tip.2021.3112055
Joshua Peter Ebenezer, Zaixi Shang, Yongjun Wu, Hai Wei, Sriram Sethuraman, Alan C. Bovik

We propose a new model for no-reference video quality assessment (VQA). Our approach rests on a new idea: highly localized space-time (ST) slices called Space-Time Chips (ST Chips), which are cuts of video data along directions that implicitly capture motion. We use perceptually motivated bandpass and normalization models to first process the video data, and then select oriented ST Chips based on how closely they fit parametric models of natural video statistics. We show that the parameters describing these statistics can be used to reliably predict video quality, without the need for a reference video. The proposed method implicitly models ST video naturalness and deviations from naturalness. We train and test our model on several large VQA databases, and show that it achieves state-of-the-art performance at reduced cost, without requiring motion computation.
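For concreteness, the sketch below illustrates the kind of natural-video-statistics machinery the abstract describes: divisive (MSCN-style) normalization, oriented space-time slicing, and a generalized Gaussian (GGD) fit whose parameters serve as quality-aware features. It is a minimal illustration under stated assumptions, not the paper's implementation; the function names, the Gaussian window scale, the chip length, the orientation set, and the kurtosis-based Gaussianity test used to select chips are all choices made here for the sketch.

import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import gamma as gamma_fn
from scipy.stats import kurtosis

def mscn(frame, sigma=7/6, c=1.0):
    # Mean-subtracted contrast-normalized (MSCN) coefficients: the
    # divisive-normalization step common to NSS-based quality models.
    mu = gaussian_filter(frame, sigma)
    var = gaussian_filter(frame * frame, sigma) - mu * mu
    return (frame - mu) / (np.sqrt(np.clip(var, 0, None)) + c)

def fit_ggd(x):
    # Moment-matching fit of a zero-mean generalized Gaussian to the
    # samples in x; returns (shape alpha, scale sigma).
    x = np.asarray(x).ravel()
    sigma_sq = np.mean(x ** 2)
    rho = sigma_sq / (np.mean(np.abs(x)) ** 2 + 1e-12)
    alphas = np.arange(0.2, 10.0, 0.001)
    r = gamma_fn(1 / alphas) * gamma_fn(3 / alphas) / gamma_fn(2 / alphas) ** 2
    alpha = alphas[np.argmin((r - rho) ** 2)]
    return alpha, np.sqrt(sigma_sq)

def chip_features(volume, y0, x0, half_len=5, n_orient=6):
    # Cut T x (2*half_len+1) space-time chips through a normalized video
    # volume along several spatial orientations at site (y0, x0), keep the
    # orientation whose samples look most Gaussian (sample kurtosis closest
    # to 3, as pristine MSCN statistics tend to be), and return the GGD
    # parameters of that chip as features. (y0, x0) must lie at least
    # half_len pixels from the frame border.
    offsets = np.arange(-half_len, half_len + 1)
    best_chip, best_dist = None, np.inf
    for theta in np.linspace(0, np.pi, n_orient, endpoint=False):
        ys = (y0 + np.round(offsets * np.sin(theta))).astype(int)
        xs = (x0 + np.round(offsets * np.cos(theta))).astype(int)
        chip = volume[:, ys, xs]                    # shape (T, 2*half_len+1)
        dist = abs(kurtosis(chip.ravel(), fisher=False) - 3.0)
        if dist < best_dist:
            best_chip, best_dist = chip, dist
    return fit_ggd(best_chip)

# Tiny usage example on synthetic data.
rng = np.random.default_rng(0)
video = rng.normal(size=(8, 64, 64))                # T x H x W "video"
norm = np.stack([mscn(f) for f in video])           # per-frame normalization
alpha, scale = chip_features(norm, y0=32, x0=32)
print(f"GGD shape {alpha:.2f}, scale {scale:.3f}")  # features for a regressor

In practice, features such as (alpha, sigma) would be pooled over many spatial sites and frame intervals and fed to a regressor trained against human opinion scores; the abstract's claim is that such statistics, computed on well-chosen chips, suffice to predict quality without a reference video.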
