An analysis of the impact of subsampling on the neural network error surface
Neurocomputing (IF 5.5) Pub Date: 2021-09-20, DOI: 10.1016/j.neucom.2021.09.023
Cody Dennis, Andries Engelbrecht, Beatrice M. Ombuki-Berman

This paper empirically analyses the impact of changes to the set of training examples on the neural network error surface. Specific quantitative characteristics of the error surface related to properties such as ruggedness, modality and structure are measured and visualized as the set of training examples changes. Both random subsampling and active learning from a fixed dataset are examined, producing ten different training scenarios. For each scenario, eleven error surface characteristics are calculated for five common benchmark problems. The results demonstrate that error surface characteristics calculated using only a subsample of the available data commonly do not generalize to those of the full dataset. The observed characteristics are significantly affected by the particular set of examples used to calculate the error, and some are significantly altered by even small changes to that set. The main finding of this study is that, when the set of training examples may change during training, training a neural network is in essence a dynamic optimization problem. This suggests that optimization algorithms developed specifically to solve dynamic optimization problems may be more efficient at training neural networks under such conditions.
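
A rough illustration of the core phenomenon follows. This is a minimal sketch, not the authors' code: the toy regression problem, the single tanh unit, and the ruggedness proxy (mean absolute change in error along a straight-line walk through weight space) are all assumptions chosen for brevity. It evaluates the same walk on the full dataset and on random subsamples, to show how a measured landscape characteristic can shift with the particular sample used to compute the error.

# Hedged sketch: how subsampling changes a simple error-surface measure.
# Nothing is trained; we only evaluate the MSE of a tiny fixed network
# along one straight line in weight space, on the full data and on
# random 10% subsamples, and compare a crude ruggedness estimate.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data standing in for one of the benchmark problems.
X = rng.normal(size=(500, 4))
true_w = rng.normal(size=4)
y = np.tanh(X @ true_w) + 0.1 * rng.normal(size=500)

def mse(w, Xs, ys):
    """Error of a single tanh unit with weights w on examples (Xs, ys)."""
    return np.mean((np.tanh(Xs @ w) - ys) ** 2)

def ruggedness_along_line(Xs, ys, w0, direction, steps=200, step_size=0.05):
    """Crude ruggedness proxy: mean absolute change in error between
    successive points on a straight-line walk through weight space."""
    losses = [mse(w0 + i * step_size * direction, Xs, ys)
              for i in range(steps)]
    return np.mean(np.abs(np.diff(losses)))

# One fixed starting point and direction, so every evaluation walks the
# same path; only the examples defining the error surface change.
w0 = rng.normal(size=4)
direction = rng.normal(size=4)
direction /= np.linalg.norm(direction)

print(f"full dataset     : {ruggedness_along_line(X, y, w0, direction):.5f}")
for trial in range(3):
    idx = rng.choice(len(X), size=50, replace=False)
    sub = ruggedness_along_line(X[idx], y[idx], w0, direction)
    print(f"10% subsample #{trial}: {sub:.5f}")

Run as-is, the subsample estimates generally differ from the full-dataset value and from one another, which is the kind of instability the paper quantifies systematically across its ten subsampling and active-learning scenarios.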
Updated: 2021-10-06