Convolution neural network with low operation FLOPS and high accuracy for image recognition
Journal of Real-Time Image Processing (IF 2.9), Pub Date: 2021-06-19, DOI: 10.1007/s11554-021-01140-9
Shih-Chang Hsia, Szu-Hong Wang, Chuan-Yu Chang

Convolutional neural networks are made deeper and wider for better accuracy, but this demands more computation. As a network goes deeper, more information is lost across layers. To mitigate this drawback, the residual structure was developed to carry forward information from previous layers. It is a good solution for preventing information loss, but it requires a huge number of parameters for the deeper layers. In this study, a fast computational algorithm is proposed that reduces parameters and saves operations by modifying the DenseNet deep-layer block. With channel-merging procedures, this design eases the dilemma of multiplicative parameter growth in deeper layers. The approach not only reduces parameters and FLOPs but also maintains high accuracy. Compared with the original DenseNet and ResNet-110, the parameters can be efficiently reduced by about 30–70%, while accuracy degrades only slightly. The lightweight network can be implemented on a low-cost embedded system for real-time applications.
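
The abstract does not give the exact block design, but the idea of limiting channel growth in a dense block can be sketched. The following PyTorch snippet is a minimal illustration under assumptions: the names MergedDenseBlock and DenseLayer, and the growth, merge_every, and merged_ch settings, are hypothetical choices, not the authors' published architecture. It shows DenseNet-style concatenation with a periodic 1x1 convolution that merges the accumulated channels back to a fixed width.

# Illustrative sketch only; layer names and hyperparameters are assumptions,
# not the paper's published design.
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """DenseNet-style layer: BN-ReLU-Conv(3x3) producing `growth` new channels."""
    def __init__(self, in_ch: int, growth: int):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_ch)
        self.conv = nn.Conv2d(in_ch, growth, kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        return self.conv(torch.relu(self.bn(x)))

class MergedDenseBlock(nn.Module):
    """Dense block with channel merging: features are concatenated as in
    DenseNet, but every `merge_every` layers a 1x1 convolution compresses
    the accumulated channels back to `merged_ch`, capping parameter growth."""
    def __init__(self, in_ch: int, growth: int = 12, n_layers: int = 6,
                 merge_every: int = 3, merged_ch: int = 48):
        super().__init__()
        self.layers = nn.ModuleList()
        self.merges = nn.ModuleDict()
        ch = in_ch
        for i in range(n_layers):
            self.layers.append(DenseLayer(ch, growth))
            ch += growth  # concatenation grows the channel count linearly
            if (i + 1) % merge_every == 0:
                # 1x1 conv merges the concatenated channels to a fixed width
                self.merges[str(i)] = nn.Conv2d(ch, merged_ch, kernel_size=1, bias=False)
                ch = merged_ch
        self.out_ch = ch

    def forward(self, x):
        for i, layer in enumerate(self.layers):
            x = torch.cat([x, layer(x)], dim=1)  # dense connectivity
            if str(i) in self.merges:
                x = self.merges[str(i)](x)       # channel-merging step
        return x

block = MergedDenseBlock(in_ch=24)
y = block(torch.randn(1, 24, 32, 32))
print(y.shape)  # torch.Size([1, 48, 32, 32]) with the assumed settings

With these assumed settings, the channel count inside the block is capped at 48 instead of growing to 24 + 6 x 12 = 96 as in a plain dense block, so every subsequent 3x3 convolution sees fewer input channels. This is the kind of parameter and FLOP saving the paper targets.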


