DPT: Deformable Patch-based Transformer for Visual Recognition
arXiv - CS - Computer Vision and Pattern Recognition Pub Date : 2021-07-30 , DOI: arxiv-2107.14467
Zhiyang Chen, Yousong Zhu, Chaoyang Zhao, Guosheng Hu, Wei Zeng, Jinqiao Wang, Ming Tang

Transformers have achieved great success in computer vision, but how to split an image into patches remains an open problem. Existing methods usually use a fixed-size patch embedding, which might destroy the semantics of objects. To address this problem, we propose a new Deformable Patch (DePatch) module which learns to adaptively split images into patches with different positions and scales in a data-driven way, rather than using predefined fixed patches. In this way, our method can well preserve the semantics in patches. The DePatch module works as a plug-and-play module that can easily be incorporated into different transformers and trained end-to-end. We term this DePatch-embedded transformer the Deformable Patch-based Transformer (DPT) and conduct extensive evaluations of DPT on image classification and object detection. Results show that DPT achieves 81.9% top-1 accuracy on ImageNet classification, and 43.7% box mAP with RetinaNet and 44.3% with Mask R-CNN on MSCOCO object detection. Code has been made available at: https://github.com/CASIA-IVA-Lab/DPT .
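
The sketch below is a minimal, hypothetical illustration of the idea described in the abstract: a small prediction head estimates a per-patch center offset and scale, and the image is then sampled at the resulting locations with bilinear interpolation to form patch tokens. It is not the authors' implementation (see the linked repository for that); module name, hyperparameters, and the offset/scale parameterization are illustrative assumptions.

```python
# Hypothetical sketch of a deformable patch embedding; NOT the official DPT code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformablePatchEmbed(nn.Module):
    def __init__(self, in_chans=3, embed_dim=64, patch_size=4, k=2):
        super().__init__()
        self.patch_size = patch_size
        self.k = k  # k x k sampling points inside each deformed patch
        # Predicts (dx, dy, sw, sh) for every patch from the raw pixels.
        self.pred = nn.Conv2d(in_chans, 4, kernel_size=patch_size, stride=patch_size)
        # Projects the k*k sampled pixels of a patch to the embedding dimension.
        self.proj = nn.Linear(in_chans * k * k, embed_dim)

    def forward(self, x):
        B, C, H, W = x.shape
        h, w = H // self.patch_size, W // self.patch_size
        # Per-patch center offset and extent in normalized [-1, 1] coordinates.
        pred = self.pred(x)                               # (B, 4, h, w)
        offset = torch.tanh(pred[:, :2])                  # learned center shift
        scale = torch.sigmoid(pred[:, 2:])                # learned patch extent
        # (A real implementation would initialize/constrain the scale around
        # the regular patch size; left free here for brevity.)

        # Regular grid of patch centers in normalized coordinates.
        ys = torch.linspace(-1 + 1.0 / h, 1 - 1.0 / h, h, device=x.device)
        xs = torch.linspace(-1 + 1.0 / w, 1 - 1.0 / w, w, device=x.device)
        cy, cx = torch.meshgrid(ys, xs, indexing="ij")    # (h, w) each
        cx = cx + offset[:, 0]                            # (B, h, w)
        cy = cy + offset[:, 1]

        # k x k bilinear sampling locations inside each deformed patch.
        lin = torch.linspace(-1.0, 1.0, self.k, device=x.device)
        dy, dx = torch.meshgrid(lin, lin, indexing="ij")  # (k, k)
        sx = cx[..., None, None] + dx * scale[:, 0, :, :, None, None]
        sy = cy[..., None, None] + dy * scale[:, 1, :, :, None, None]
        grid = torch.stack([sx, sy], dim=-1).view(B, h * w * self.k, self.k, 2)

        sampled = F.grid_sample(x, grid, align_corners=False)   # (B, C, h*w*k, k)
        sampled = sampled.view(B, C, h * w, self.k * self.k)
        tokens = sampled.permute(0, 2, 1, 3).reshape(B, h * w, C * self.k * self.k)
        return self.proj(tokens)                          # (B, h*w, embed_dim)

# Example: a 224x224 image yields 56x56 = 3136 tokens of dimension 64.
tokens = DeformablePatchEmbed()(torch.randn(2, 3, 224, 224))
print(tokens.shape)  # torch.Size([2, 3136, 64])
```

Because the sampling is differentiable, the offset and scale predictors receive gradients from the downstream task, which is what allows such a module to be dropped into a transformer and trained end-to-end as the abstract describes.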

Updated: 2021-08-02