Comparative study of methods to obtain the number of hidden neurons of an auto-encoder in a high-dimensionality context
IEEE Latin America Transactions (IF 1.3) Pub Date: 2021-04-12, DOI: 10.1109/tla.2020.9400448
Hector R. Vega-Gutierrez, Carlos Castorena, Roberto Alejo, Everardo E. Granda-Gutierrez

Fourteen formulas from the state of the art were used in this paper to find the optimal number of neurons in the hidden layer of an auto-encoder neural network. The auto-encoder is employed to reduce the dataset dimensionality in high-dimensionality scenarios without a significant loss in classification accuracy compared with using the whole dataset. A deep learning neural network was employed to assess the effectiveness of the studied formulas in terms of classification accuracy. Eight high-dimensional datasets were processed in an experimental setup to evaluate this proposal. The results show that formula 13 (used to determine the number of hidden neurons of the auto-encoder) is effective in reducing the data dimensionality without a statistically significant reduction in classification performance, as confirmed by the Friedman test and Holm's post-hoc test.
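The pipeline described in the abstract can be illustrated with a minimal sketch: an under-complete auto-encoder compresses the data, and the encoder output feeds a downstream classifier. This is not the authors' code; the function hidden_neurons below is a hypothetical stand-in for one of the fourteen studied formulas (here, a common sqrt-based rule of thumb), since the actual formula 13 is not reproduced in this page.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def hidden_neurons(n_inputs: int, n_outputs: int) -> int:
    """Hypothetical placeholder for one of the studied formulas
    (here: sqrt(inputs * outputs), a common rule of thumb)."""
    return max(1, int(np.sqrt(n_inputs * n_outputs)))

def build_autoencoder(n_features: int, n_hidden: int):
    """Single-hidden-layer auto-encoder; the encoder alone is kept
    to produce the reduced representation."""
    inputs = keras.Input(shape=(n_features,))
    encoded = layers.Dense(n_hidden, activation="relu")(inputs)
    decoded = layers.Dense(n_features, activation="sigmoid")(encoded)
    autoencoder = keras.Model(inputs, decoded)
    encoder = keras.Model(inputs, encoded)
    autoencoder.compile(optimizer="adam", loss="mse")
    return autoencoder, encoder

# Synthetic high-dimensional data (assumption: features scaled to [0, 1]).
X = np.random.rand(1000, 5000).astype("float32")
n_hidden = hidden_neurons(X.shape[1], n_outputs=10)  # 10 = assumed number of classes
autoencoder, encoder = build_autoencoder(X.shape[1], n_hidden)
autoencoder.fit(X, X, epochs=5, batch_size=64, verbose=0)
X_reduced = encoder.predict(X)  # reduced features for the downstream classifier
```

In this sketch the reduced representation X_reduced would then be passed to a deep learning classifier, and accuracy across datasets compared with non-parametric tests such as the Friedman test and Holm's post-hoc procedure, mirroring the evaluation described above.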

Updated: 2021-04-13