Self-paced hybrid dilated convolutional neural networks
Multimedia Tools and Applications ( IF 3.0 ) Pub Date : 2020-09-26 , DOI: 10.1007/s11042-020-09868-5
Wenzhen Zhang , Guangquan Lu , Shichao Zhang , Yonggang Li

Convolutional neural networks (CNNs) can learn sample features in a supervised manner and have achieved outstanding results in many application fields. To improve the performance and generalization of CNNs, we propose a self-paced hybrid dilated convolutional neural network (SPHDCNN), which selects relatively reliable samples according to its current learning ability during training. To avoid the loss of useful feature-map information caused by pooling, we introduce hybrid dilated convolution. In the proposed SPHDCNN, a weight is assigned to each sample to reflect its easiness. SPHDCNN trains on easier samples first and then gradually adds more difficult samples according to its current learning ability, improving its performance through this learning mechanism. Experimental results show that SPHDCNN has strong generalization ability and achieves better performance than the baseline method.
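The two mechanisms named in the abstract can be sketched in a few lines of plain Python. This is a hedged illustration, not the authors' implementation: the hard self-paced rule (binary weight `v_i = 1` when a sample's loss is below an "age" threshold `lam` that grows each round) is the standard formulation of self-paced learning, and the dilation rates `[1, 2, 5]` are a typical hybrid-dilated-convolution choice for avoiding gridding artifacts; the paper's actual weighting scheme and rates are not given in the abstract. The function names and toy losses below are hypothetical.

```python
# Sketch 1: hard self-paced sample selection (hypothetical names).
# Each sample gets a binary weight: 1 = easy enough to train on now.
def select_samples(losses, lam):
    """Binary self-paced weights given per-sample losses and threshold lam."""
    return [1 if loss < lam else 0 for loss in losses]

# Toy per-sample losses (smaller = easier).
losses = [0.1, 0.4, 0.9, 0.2, 1.5, 0.7]

lam = 0.5          # initial model "age": only easy samples admitted
growth = 2.0       # enlarge the threshold each round
for epoch in range(3):
    v = select_samples(losses, lam)
    print(epoch, v)            # harder samples join as lam grows
    lam *= growth

# Sketch 2: a 1-D dilated convolution (valid padding, no pooling).
# Stacking layers with dilation rates such as 1, 2, 5 is a common
# hybrid-dilated-convolution pattern that widens the receptive field
# without the information loss of pooling.
def dilated_conv1d(x, w, dilation):
    """Convolve x with kernel w, sampling inputs dilation steps apart."""
    span = (len(w) - 1) * dilation
    return [sum(w[k] * x[i + k * dilation] for k in range(len(w)))
            for i in range(len(x) - span)]

print(dilated_conv1d([1, 2, 3, 4, 5], [1, 1], 2))
```

With the toy losses above, the first round keeps only the three samples with loss below 0.5; by the third round the threshold has grown enough to admit all six, which is the easy-to-hard curriculum the abstract describes.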




Updated: 2020-09-26