Adaptive Spatio-Temporal Graph Enhanced Vision-Language Representation for Video QA
IEEE Transactions on Image Processing (IF 10.6), Pub Date: 2021-05-05, DOI: 10.1109/tip.2021.3076556
Weike Jin, Zhou Zhao, Xiaochun Cao, Jieming Zhu, Xiuqiang He, Yueting Zhuang

Vision-language research, which focuses on understanding visual content, language semantics, and the relationships between them, has become very popular. Video question answering (Video QA) is one of its typical tasks. Recently, several BERT-style pre-training methods have been proposed and have shown their effectiveness on various vision-language tasks. In this work, we leverage the successful vision-language transformer structure to solve the Video QA problem. However, we do not pre-train it with any video data, because video pre-training requires massive computing resources and is hard to perform with only a few GPUs. Instead, our work aims to leverage image-language pre-training to help with video-language modeling, by sharing a common module design. We further introduce an adaptive spatio-temporal graph to enhance vision-language representation learning. That is, we adaptively refine the spatio-temporal tubes of salient objects according to their spatio-temporal relations learned through a hierarchical graph convolution process. Finally, we obtain a set of fine-grained, tube-level video object representations that serve as the visual inputs of the vision-language transformer module. Experiments on three widely used Video QA datasets show that our model achieves new state-of-the-art results.
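To make the pipeline described in the abstract more concrete, the sketch below illustrates the general idea in PyTorch: object features from sampled frames are linked by a learned (adaptive) adjacency matrix, refined with a graph-convolution step, and softly pooled into tube-level representations that a vision-language transformer could consume. All module names, dimensions, the single-layer design, and the soft tube assignment are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of an adaptive spatio-temporal graph over video object features.
# Assumptions (not from the paper): 2048-d region features, one graph layer,
# soft object-to-tube assignment instead of the paper's hierarchical refinement.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveSpatioTemporalGraph(nn.Module):
    def __init__(self, feat_dim: int = 2048, hidden_dim: int = 768, num_tubes: int = 16):
        super().__init__()
        self.query = nn.Linear(feat_dim, hidden_dim)          # projections used to score
        self.key = nn.Linear(feat_dim, hidden_dim)            # pairwise object relations
        self.gcn = nn.Linear(feat_dim, hidden_dim)            # graph-convolution transform
        self.tube_assign = nn.Linear(hidden_dim, num_tubes)   # soft object-to-tube assignment

    def forward(self, obj_feats: torch.Tensor) -> torch.Tensor:
        """obj_feats: (B, T*K, D) features of K detected objects in each of T frames."""
        # Adaptive adjacency: relation scores learned from the features themselves.
        scores = self.query(obj_feats) @ self.key(obj_feats).transpose(1, 2)
        adj = torch.softmax(scores / self.query.out_features ** 0.5, dim=-1)  # (B, T*K, T*K)
        # One graph-convolution step: aggregate neighbours, then transform.
        node_feats = F.relu(self.gcn(adj @ obj_feats))                        # (B, T*K, H)
        # Softly assign objects to tubes and pool, giving tube-level representations
        # that can be fed to the vision-language transformer as visual tokens.
        assign = torch.softmax(self.tube_assign(node_feats), dim=1)           # (B, T*K, M)
        tubes = assign.transpose(1, 2) @ node_feats                           # (B, M, H)
        return tubes


# Toy usage: 2 videos, 8 frames x 5 objects, 2048-d region features.
tubes = AdaptiveSpatioTemporalGraph()(torch.randn(2, 8 * 5, 2048))
print(tubes.shape)  # torch.Size([2, 16, 768])
```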

Updated: 2021-06-15