Contextual Transformer Networks for Visual Recognition
arXiv - CS - Multimedia Pub Date: 2021-07-26, DOI: arxiv-2107.12292
Yehao Li, Ting Yao, Yingwei Pan, Tao Mei

Transformer with self-attention has revolutionized the field of natural language processing and has recently inspired the emergence of Transformer-style architecture designs with competitive results in numerous computer vision tasks. Nevertheless, most existing designs directly employ self-attention over a 2D feature map to obtain the attention matrix based on pairs of isolated queries and keys at each spatial location, leaving the rich contexts among neighboring keys under-exploited. In this work, we design a novel Transformer-style module, i.e., the Contextual Transformer (CoT) block, for visual recognition. Such a design fully capitalizes on the contextual information among input keys to guide the learning of the dynamic attention matrix and thus strengthens the capacity of visual representation. Technically, the CoT block first contextually encodes the input keys via a $3\times3$ convolution, leading to a static contextual representation of the inputs. We further concatenate the encoded keys with the input queries to learn the dynamic multi-head attention matrix through two consecutive $1\times1$ convolutions. The learnt attention matrix is multiplied by the input values to achieve the dynamic contextual representation of the inputs. The fusion of the static and dynamic contextual representations is finally taken as the output. Our CoT block is appealing in that it can readily replace each $3\times3$ convolution in ResNet architectures, yielding a Transformer-style backbone named Contextual Transformer Networks (CoTNet). Through extensive experiments over a wide range of applications (e.g., image recognition, object detection and instance segmentation), we validate the superiority of CoTNet as a stronger backbone. Source code is available at \url{https://github.com/JDAI-CV/CoTNet}.
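To make the block's data flow concrete, the following is a minimal PyTorch sketch of a CoT-style block pieced together from the abstract's description alone. It is not the official implementation from the linked repository: the class name `CoTBlock`, the group count in the key embedding, the reduction `factor`, the softmax normalization, the unfold-based neighborhood aggregation, and the simple additive fusion of the static and dynamic branches are all illustrative assumptions.

```python
# Simplified CoT-style block (illustrative sketch, not the official CoTNet code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoTBlock(nn.Module):
    def __init__(self, dim, kernel_size=3):
        super().__init__()
        self.kernel_size = kernel_size
        # Static context: 3x3 convolution over the input keys.
        self.key_embed = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size, padding=kernel_size // 2, groups=4, bias=False),
            nn.BatchNorm2d(dim),
            nn.ReLU(inplace=True),
        )
        # Values: 1x1 convolution of the input.
        self.value_embed = nn.Sequential(
            nn.Conv2d(dim, dim, 1, bias=False),
            nn.BatchNorm2d(dim),
        )
        # Two consecutive 1x1 convolutions mapping [static keys, queries]
        # to per-position weights over the k*k neighborhood (factor is an
        # assumed channel-reduction ratio).
        factor = 4
        self.attn_embed = nn.Sequential(
            nn.Conv2d(2 * dim, 2 * dim // factor, 1, bias=False),
            nn.BatchNorm2d(2 * dim // factor),
            nn.ReLU(inplace=True),
            nn.Conv2d(2 * dim // factor, kernel_size * kernel_size * dim, 1),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        k = self.kernel_size
        k1 = self.key_embed(x)                                 # static contextual representation
        v = self.value_embed(x)                                # values
        # Queries are taken to be the input itself; concatenate with encoded keys.
        attn = self.attn_embed(torch.cat([k1, x], dim=1))
        attn = attn.view(b, c, k * k, h, w).softmax(dim=2)     # assumed normalization over the neighborhood
        # Gather the k*k neighborhood of values and aggregate with the learnt weights.
        v_unfold = F.unfold(v, k, padding=k // 2).view(b, c, k * k, h, w)
        k2 = (attn * v_unfold).sum(dim=2)                      # dynamic contextual representation
        return k1 + k2                                         # simple additive fusion (assumption)
```

As a rough shape check, `CoTBlock(dim=64)(torch.randn(2, 64, 32, 32))` returns a tensor of the same shape, which is what allows such a block to stand in for a $3\times3$ convolution inside a ResNet stage.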

Updated: 2021-07-27