Spatiotemporal just noticeable difference modeling with heterogeneous temporal visual features
Displays (IF 3.7), Pub Date: 2021-10-06, DOI: 10.1016/j.displa.2021.102096
Yafen Xing, Haibing Yin, Yang Zhou, Yong Chen, Chenggang Yan

Developing accurate just-noticeable difference (JND) models is challenged by the complicated characteristics of the human visual system (HVS) and the nonstationary features of video sequences. Great efforts have been devoted to JND modeling, and encouraging performance improvements have been reported in the literature, especially for spatial JND models. However, there is both an urgent need and technical potential to improve temporal JND models so that they fully account for temporal perception characteristics. Temporal JND modeling faces two challenges: how to extract perceptual feature parameters from the source video, and how to quantitatively characterize the interaction between these feature parameters and HVS characteristics. Firstly, this work extracts content-aware temporal feature parameters that have predominant impacts on visual perception, including foreground and background motion, pixel-correspondence duration, and inter-frame residue fluctuation intensity along the temporal trajectory, and investigates the HVS responses to these four heterogeneous feature parameters. Secondly, this work proposes perception-oriented probability density functions (PDFs) to quantitatively depict the attention and suppression responses to the feature parameters, accounting for temporal perception characteristics. Using these PDF models, we fuse the heterogeneous feature parameters in a uniform dimension, i.e., visual attention measured by self-information and masking uncertainty measured by information entropy, thereby achieving heterogeneous parameter homogenization. Thirdly, with the self-information and entropy results, this work proposes a temporal weight model that strikes a balance between visual attention and masking suppression to adjust the spatial JND threshold, and then develops an improved spatiotemporal JND model. Extensive simulation results verify the effectiveness of the proposed spatiotemporal JND profile, which achieves competitive model accuracy compared with state-of-the-art models.
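To make the fusion and thresholding steps concrete, below is a minimal Python sketch of how such a pipeline could be assembled. The feature probabilities, the entropy histogram, the trade-off constant alpha, and the function temporal_weight are illustrative assumptions; the abstract does not give the paper's exact formulations.

import numpy as np

def self_information(p):
    # Visual attention measure: -log2 of a feature value's probability.
    # Rarer (more surprising) feature values attract more attention.
    return -np.log2(np.clip(p, 1e-12, 1.0))

def entropy(pdf, bin_width):
    # Masking-uncertainty measure: Shannon entropy of a discretized PDF.
    # Higher entropy means more uncertainty and hence stronger masking.
    p = np.clip(pdf * bin_width, 1e-12, 1.0)
    return -np.sum(p * np.log2(p))

def temporal_weight(attention, masking, alpha=0.5):
    # Hypothetical balance between attention (which should lower the JND
    # threshold) and masking (which should raise it); alpha is an assumed
    # trade-off constant, not a value from the paper.
    return np.exp(alpha * (masking - attention))

def spatiotemporal_jnd(jnd_spatial, attention, masking):
    # Scale the spatial JND threshold by the temporal weight.
    return temporal_weight(attention, masking) * jnd_spatial

# Toy usage for one block: assumed PDF evaluations of the temporal features
# (motion, pixel-correspondence duration, inter-frame residue fluctuation).
p_motion, p_duration, p_residue = 0.02, 0.30, 0.10
attention = np.mean([self_information(p) for p in (p_motion, p_duration, p_residue)])
masking = entropy(np.array([0.1, 0.3, 0.4, 0.2]), bin_width=1.0)  # assumed local histogram
jnd_s = 4.0  # assumed spatial JND threshold in gray levels
print(spatiotemporal_jnd(jnd_s, attention, masking))

The exponential form in this sketch keeps the weight positive and equal to 1 when attention and masking balance, which mirrors the abstract's idea of adjusting the spatial JND threshold up or down around its baseline.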




Updated: 2021-10-12