Hierarchical Conditional Relation Networks for Multimodal Video Question Answering
International Journal of Computer Vision ( IF 11.6 ) Pub Date : 2021-08-27 , DOI: 10.1007/s11263-021-01514-3
Thao Minh Le 1 , Vuong Le 1 , Svetha Venkatesh 1 , Truyen Tran 1

Video Question Answering (Video QA) challenges modelers on multiple fronts. Modeling video necessitates building not only spatio-temporal models for the dynamic visual channel but also multimodal structures for associated information channels such as subtitles or audio. Video QA adds at least two more layers of complexity: selecting relevant content for each channel in the context of the linguistic query, and composing spatio-temporal concepts and relations hidden in the data in response to the query. To address these requirements, we start with two insights: (a) content selection and relation construction can be jointly encapsulated into a conditional computational structure, and (b) video-length structures can be composed hierarchically. For (a), this paper introduces a general-purpose, reusable neural unit dubbed the Conditional Relation Network (CRN), which takes as input a set of tensorial objects and translates them into a new set of objects that encode relations among the inputs. The generic design of the CRN eases the typically complex model-building process of Video QA through simple block stacking and rearrangement, with the flexibility to accommodate diverse input modalities and conditioning features across both the visual and linguistic domains. Building on this, we realize insight (b) by introducing the Hierarchical Conditional Relation Network (HCRN) for Video QA. The HCRN primarily aims at exploiting intrinsic properties of the visual content of a video, as well as of its accompanying channels, in terms of compositionality, hierarchy, and near-term and far-term relations. The HCRN is then applied to Video QA in two forms: short-form, where answers are reasoned solely from the visual content of a video, and long-form, where an additional associated information channel, such as movie subtitles, is presented.
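The CRN unit described above can be pictured as a set-to-set transformation: it forms k-subsets of its input objects, aggregates each subset, and modulates the result with a conditioning feature (e.g. the query encoding). The following is a minimal, hypothetical NumPy sketch of that data flow only; the function name `crn_unit`, the mean aggregation, and the sigmoid gating are illustrative stand-ins, not the learned sub-networks used in the paper.

```python
import itertools
import numpy as np

def crn_unit(inputs, condition, k=2, seed=0):
    """Toy sketch of a Conditional Relation Network (CRN) unit.

    inputs    : list of n feature vectors (the input set of objects)
    condition : conditioning feature, e.g. a query encoding
    k         : subset size over which relations are formed

    For every k-subset of the inputs, aggregate the subset and gate it
    with a condition-dependent signal, yielding one relation vector per
    subset (so the output set has C(n, k) objects).
    """
    rng = np.random.default_rng(seed)
    d = inputs[0].shape[0]
    # Stand-in for learned weights of the conditioning sub-network.
    W = rng.standard_normal((d, d)) / np.sqrt(d)
    gate = 1.0 / (1.0 + np.exp(-(W @ condition)))  # sigmoid gate from the condition
    outputs = []
    for subset in itertools.combinations(inputs, k):
        rel = np.mean(subset, axis=0)   # aggregate the k-subset (toy choice)
        outputs.append(gate * rel)      # condition-modulated relation vector
    return outputs

# Hierarchical use (HCRN idea): outputs of clip-level CRN units become the
# input set of a video-level CRN unit, conditioned again on the query.
frames = [np.ones(4), np.zeros(4), np.full(4, 0.5)]
query = np.ones(4)
clip_relations = crn_unit(frames, query)          # C(3, 2) = 3 relation vectors
video_relations = crn_unit(clip_relations, query)  # stacked second level
```

Because each unit maps a set to a set, stacking them by simple rearrangement, as the text describes, requires no change to the unit itself.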
Our rigorous evaluations show consistent improvements over state-of-the-art methods on well-studied benchmarks, including large-scale real-world datasets such as TGIF-QA and TVQA, demonstrating the strong capabilities of our CRN unit and the HCRN for complex domains such as Video QA. To the best of our knowledge, the HCRN is the first method attempting to handle long- and short-form multimodal Video QA at the same time.




Updated: 2021-08-27