Splicing learning: A novel few-shot learning approach
Information Sciences (IF 8.1), Pub Date: 2020-11-27, DOI: 10.1016/j.ins.2020.11.028
Lianting Hu , Huiying Liang , Long Lu

In recent years, most few-shot learning approaches have relied on a default premise: a large, homogeneously annotated dataset is available to pre-train the few-shot learning model. However, since few-shot learning is typically applied in domains where annotated samples are scarce, it is difficult to collect another large annotated dataset in the same domain. We therefore propose Splicing Learning, which completes the few-shot learning task without the help of a large homogeneously annotated dataset. Splicing Learning increases the sample size of the few-shot set by splicing multiple original images into a single spliced image. Unlike data augmentation techniques, the spliced image contains no false information, since every pixel comes from a real sample. Through experiments, we find that the configuration "All-splice + WSG" achieves the best test accuracy of 90.81%, 9.19% higher than the baseline. The performance improvement is attributable mostly to Splicing Learning and has little to do with the complexity of the CNN framework. Compared with metric-learning, meta-learning, and GAN models, both Splicing Learning and data augmentation achieve better performance. Moreover, combining Splicing Learning with data augmentation further improves the test accuracy to 96.33%. The full implementation is available at https://github.com/xiangxiangzhuyi/Splicing-learning.
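To make the core idea concrete, below is a minimal Python/NumPy sketch of how several original images can be spliced into one larger image. The function name splice_images, the grid layout, and the 2x2 configuration are illustrative assumptions, not the authors' code; the paper's actual splicing configurations (e.g., "All-splice") and the WSG component are defined in the linked repository.

    import numpy as np

    def splice_images(images, grid=(2, 2)):
        # Splice rows * cols equally sized images into one spliced image
        # by tiling them on a grid; no synthetic pixels are introduced.
        rows, cols = grid
        if len(images) != rows * cols:
            raise ValueError("expected exactly rows * cols images")
        row_strips = [np.concatenate(images[r * cols:(r + 1) * cols], axis=1)
                      for r in range(rows)]
        return np.concatenate(row_strips, axis=0)

    # Example: four 28x28 single-channel samples -> one 56x56 spliced image.
    few_shot_samples = [np.random.rand(28, 28) for _ in range(4)]
    spliced = splice_images(few_shot_samples, grid=(2, 2))
    print(spliced.shape)  # (56, 56)

Because the spliced image is built entirely from original pixels, it enlarges the effective training set without introducing the distortions that augmentation transforms (rotation, noise, etc.) can add.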



Updated: 2020-12-21