Learning Transferable Feature Representation with Swin Transformer for Object Recognition
Neural Processing Letters (IF 2.6), Pub Date: 2022-08-27, DOI: 10.1007/s11063-022-11004-3
Jian-Xin Ren, Yu-Jie Xiong, Xi-Jiong Xie, Yu-Fan Dai

Recent substantial advances in deep learning have driven the flourishing of computer vision. However, the heavy dependence on the scale of training data limits deep learning applications, because such large amounts of data are generally hard to obtain in many practical scenarios. Moreover, deep learning appears to offer no significant advantage over traditional machine learning methods when sufficient training data are lacking. The approach proposed in this paper addresses the problem of insufficient training data by taking Swin Transformer as the backbone for feature extraction and applying fine-tuning strategies on the target dataset to learn transferable feature representations. Our experimental results demonstrate that the proposed method performs well for object recognition on small-scale datasets.




Updated: 2022-08-29