Few-shot learning with deep balanced network and acceleration strategy
International Journal of Machine Learning and Cybernetics ( IF 3.1 ) Pub Date : 2021-07-25 , DOI: 10.1007/s13042-021-01373-x
Kang Wang 1, 2 , Xuesong Wang 1, 2 , Yuhu Cheng 1, 2 , Tong Zhang 3

Deep networks are widely used in few-shot learning, but they suffer from large numbers of parameters and heavy computation. To address these problems, we present a novel few-shot learning method with a deep balanced network and an acceleration strategy. First, a series of simple linear operations is applied to a few primary features to generate additional ones, so that more features are obtained with fewer parameters, reducing both the parameter count and the computational cost. Then, a local cross-channel interaction mechanism without dimensionality reduction is used to further improve performance with almost no increase in parameters or computation, yielding a deep balanced network that balances performance, parameters, and computational cost. Finally, an acceleration strategy is designed to address the large amount of time that gradient updates in a deep network require on new tasks, speeding up the adaptation process. Experimental results on traditional and fine-grained image classification show that the proposed method matches or even exceeds the classification accuracy of several existing methods with fewer parameters and less computation. Cross-domain experiments further demonstrate its advantages under domain shift, and the acceleration strategy significantly reduces the time required for classification on new tasks.
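The two building blocks described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the 3x3 averaging filter, the expansion ratio, the fixed 1D kernel, and all shapes are assumptions made purely to show the structure of cheap feature generation and ECA-style local cross-channel interaction.

```python
import numpy as np

def cheap_feature_expansion(x, ratio=2):
    """Generate extra feature maps from a few primary ones using
    simple linear operations (here: a per-channel 3x3 box filter)
    instead of full convolutions. x has shape (C, H, W)."""
    c, h, w = x.shape
    ghosts = []
    padded = np.pad(x, ((0, 0), (1, 1), (1, 1)), mode="edge")
    for _ in range(ratio - 1):
        g = np.zeros_like(x)
        # cheap linear op: sum the 9 shifted views, then average
        for i in range(3):
            for j in range(3):
                g += padded[:, i:i + h, j:j + w]
        ghosts.append(g / 9.0)
    # primary features plus cheaply generated ones: (ratio*C, H, W)
    return np.concatenate([x] + ghosts, axis=0)

def local_cross_channel_interaction(x, k=3):
    """ECA-style attention: global-average-pool each channel to a
    descriptor, apply a size-k 1D filter over neighbouring channels
    (no dimensionality reduction), then sigmoid-gate the features."""
    c = x.shape[0]
    desc = x.mean(axis=(1, 2))                       # (C,) channel descriptor
    padded = np.pad(desc, k // 2, mode="edge")
    weight = np.ones(k) / k                          # assumed fixed kernel
    attn = np.array([padded[i:i + k] @ weight for i in range(c)])
    gate = 1.0 / (1.0 + np.exp(-attn))               # sigmoid in (0, 1)
    return x * gate[:, None, None]
```

The sketch makes the parameter trade-off concrete: generating half the channels with channel-wise linear filters avoids the weights a full convolution would need for those channels, and the size-k 1D interaction adds only k parameters per layer, which is the balance of performance, parameters, and computation the abstract describes.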




Updated: 2021-07-25