DeepQTMT: A Deep Learning Approach for Fast QTMT-Based CU Partition of Intra-Mode VVC
IEEE Transactions on Image Processing (IF 10.8) Pub Date: 2021-05-31, DOI: 10.1109/tip.2021.3083447
Tianyi Li, Mai Xu, Runzhi Tang, Ying Chen, Qunliang Xing

Versatile Video Coding (VVC), as the latest standard, significantly improves the coding efficiency over its predecessor standard High Efficiency Video Coding (HEVC), but at the expense of sharply increased complexity. In VVC, the quad-tree plus multi-type tree (QTMT) structure of the coding unit (CU) partition accounts for over 97% of the encoding time, due to the brute-force search for recursive rate-distortion (RD) optimization. Instead of the brute-force QTMT search, this paper proposes a deep learning approach to predict the QTMT-based CU partition, for drastically accelerating the encoding process of intra-mode VVC. First, we establish a large-scale database containing sufficient CU partition patterns with diverse video content, which can facilitate the data-driven VVC complexity reduction. Next, we propose a multi-stage exit CNN (MSE-CNN) model with an early-exit mechanism to determine the CU partition, in accord with the flexible QTMT structure at multiple stages. Then, we design an adaptive loss function for training the MSE-CNN model, synthesizing both the uncertain number of split modes and the target on minimized RD cost. Finally, a multi-threshold decision scheme is developed, achieving a desirable trade-off between complexity and RD performance. The experimental results demonstrate that our approach can reduce the encoding time of VVC by 44.65%~66.88% with a negligible Bjøntegaard delta bit-rate (BD-BR) of 1.322%~3.188%, significantly outperforming other state-of-the-art approaches.
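The multi-stage early-exit and multi-threshold ideas in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the function name `decide_partition`, the stage layout, and the threshold values are not taken from the paper, and a real MSE-CNN would produce the per-stage probabilities with a trained network rather than receive them as inputs.

```python
# Hedged sketch of an early-exit, multi-threshold partition decision,
# loosely following the mechanism the abstract describes. All names and
# values here are illustrative assumptions, not the paper's design.

# Candidate split modes at a QTMT stage. VVC's multi-type tree allows
# quad, binary, and ternary splits; "non-split" terminates recursion.
SPLIT_MODES = ["non-split", "quad", "hor-binary", "ver-binary",
               "hor-ternary", "ver-ternary"]

def decide_partition(stage_probs, thresholds):
    """Walk per-stage predicted split-mode probabilities and exit early
    once a confident "non-split" clears that stage's threshold.

    stage_probs: list of dicts mapping split-mode name -> probability,
                 one dict per partition stage (hypothetical CNN outputs).
    thresholds:  per-stage confidence thresholds -- the "multi-threshold"
                 knob trading encoding complexity against RD performance.
    Returns (decisions, exited_early).
    """
    decisions = []
    for probs, thr in zip(stage_probs, thresholds):
        # Pick the most probable split mode at this stage.
        mode, p = max(probs.items(), key=lambda kv: kv[1])
        decisions.append(mode)
        # Early exit: a confident "non-split" means no deeper search.
        if mode == "non-split" and p >= thr:
            return decisions, True
        # In a real encoder, a low-confidence prediction would instead
        # fall back to RD-checking several candidate modes.
    return decisions, False
```

Lowering the thresholds makes the scheme exit earlier (less encoding time, worse RD), while raising them defers more decisions to the deeper stages, which mirrors the complexity/RD trade-off the abstract reports.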

Updated: 2021-05-31