Blind Omnidirectional Image Quality Assessment with Viewport Oriented Graph Convolutional Networks
IEEE Transactions on Circuits and Systems for Video Technology (IF 8.3), Pub Date: 2021-01-01, DOI: 10.1109/tcsvt.2020.3015186
Jiahua Xu, Wei Zhou, Zhibo Chen

Quality assessment of omnidirectional images has become increasingly urgent due to the rapid growth of virtual reality applications. Different from traditional 2D images and videos, omnidirectional contents provide consumers with freely changeable viewports and a larger field of view covering the $360^{\circ}\times180^{\circ}$ spherical surface, which makes the objective quality assessment of omnidirectional images more challenging. In this paper, motivated by the characteristics of the human visual system (HVS) and the viewing process of omnidirectional contents, we propose a novel Viewport oriented Graph Convolution Network (VGCN) for blind omnidirectional image quality assessment (IQA). Generally, observers give the subjective rating of a 360-degree image after browsing the spherical scenery and aggregating information from different viewports. Therefore, in order to model the mutual dependency of viewports in the omnidirectional image, we build a spatial viewport graph. Specifically, the graph nodes are first defined as the selected viewports with higher probabilities of being viewed, inspired by the observation that the HVS is more sensitive to structural information. Then, these nodes are connected by spatial relations to capture the interactions among them. Finally, reasoning on the proposed graph is performed via graph convolutional networks. Moreover, we simultaneously obtain the global quality from the entire omnidirectional image without viewport sampling to further boost performance in accordance with the viewing experience. Experimental results demonstrate that the proposed model outperforms state-of-the-art full-reference and no-reference IQA metrics on two public omnidirectional IQA databases.
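To make the viewport-graph idea concrete, below is a minimal, hypothetical sketch in PyTorch under simplified assumptions: viewport features are placeholder tensors standing in for the pretrained CNN features used in the paper, the adjacency rule (connect viewports whose centers are within a great-circle distance threshold) and the equal-weight fusion with a global-branch score are illustrative choices, and all names (`spherical_adjacency`, `ViewportGraphQuality`, etc.) are invented for this sketch rather than taken from the authors' code.

```python
# Hypothetical sketch of a viewport graph with GCN reasoning (not the authors' implementation).
import math
import torch
import torch.nn as nn


def spherical_adjacency(centers_deg, threshold_deg=60.0):
    """Connect viewports whose centers are close on the sphere.

    centers_deg: (N, 2) tensor of (longitude, latitude) in degrees.
    Returns a row-normalized (N, N) adjacency matrix with self-loops.
    """
    lon = torch.deg2rad(centers_deg[:, 0])
    lat = torch.deg2rad(centers_deg[:, 1])
    # Great-circle distance between every pair of viewport centers.
    cos_d = (torch.sin(lat)[:, None] * torch.sin(lat)[None, :]
             + torch.cos(lat)[:, None] * torch.cos(lat)[None, :]
             * torch.cos(lon[:, None] - lon[None, :]))
    dist = torch.acos(cos_d.clamp(-1.0, 1.0))
    adj = (dist <= math.radians(threshold_deg)).float()
    adj.fill_diagonal_(1.0)                      # self-loops
    return adj / adj.sum(dim=1, keepdim=True)    # simple row normalization


class GCNLayer(nn.Module):
    """One graph-convolution step: aggregate neighbor features, then transform."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):                   # x: (N, in_dim), adj: (N, N)
        return torch.relu(self.linear(adj @ x))


class ViewportGraphQuality(nn.Module):
    """Local branch: reason over viewport nodes with two GCN layers,
    then pool node representations into a single quality score."""
    def __init__(self, feat_dim=512, hidden_dim=128):
        super().__init__()
        self.gcn1 = GCNLayer(feat_dim, hidden_dim)
        self.gcn2 = GCNLayer(hidden_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, viewport_feats, adj):
        h = self.gcn1(viewport_feats, adj)
        h = self.gcn2(h, adj)
        return self.head(h.mean(dim=0))          # graph-level quality score


# Example: 6 sampled viewports with hypothetical centers and placeholder features.
centers = torch.tensor([[0., 0.], [60., 0.], [120., 0.],
                        [180., 0.], [-120., 0.], [-60., 0.]])
adj = spherical_adjacency(centers)
viewport_feats = torch.randn(6, 512)             # stand-in for CNN viewport features
local_quality = ViewportGraphQuality()(viewport_feats, adj)

# The paper also computes a global quality prediction from the entire
# omnidirectional image without viewport sampling; a placeholder score and a
# simple average stand in for that branch and its fusion here.
global_quality = torch.randn(1)
final_quality = 0.5 * (local_quality + global_quality)
```

The sketch only illustrates the structure the abstract describes: nodes are selected viewports, edges encode spatial relations between them, GCN layers perform reasoning over the graph, and the local prediction is combined with a global one.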

Updated: 2021-01-01