Deep Neural Networks Regularization Using a Combination of Sparsity Inducing Feature Selection Methods
Neural Processing Letters (IF 3.1), Pub Date: 2021-01-07, DOI: 10.1007/s11063-020-10389-3
Fatemeh Farokhmanesh, Mohammad Taghi Sadeghi

Deep learning is an important subcategory of machine learning in which hand-crafted features are, ideally, replaced by fully automatically extracted features. However, deep learning generally involves a very high-dimensional feature space, which can lead to overfitting; regularization techniques are applied to prevent this. In this framework, sparse-representation-based feature selection and regularization methods are very attractive, because sparse methods represent data with as few non-zero coefficients as possible. In this paper, we utilize a variety of sparse-representation-based methods for regularizing deep neural networks. First, the effects of three basic sparsity-inducing methods are studied: the Least Square Regression, Sparse Group Lasso (SGL), and Correntropy inducing Robust Feature Selection (CRFS) methods. Then, in order to improve the regularization process, three combinations of the basic methods are proposed. The study is performed on a simple fully connected deep neural network and a VGG-like network. Our experimental results show that, overall, the combined methods outperform the basic ones. Considering two important factors, the amount of induced sparsity and the classification accuracy, the combination of the CRFS and SGL methods yields very successful results in deep neural networks.
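The Sparse Group Lasso penalty mentioned above combines an element-wise ℓ1 term with a group-wise ℓ2 term, so that whole groups of weights (e.g. all outgoing weights of one input feature) can be driven to zero, which amounts to feature selection. A minimal numpy sketch, assuming row-wise grouping of a weight matrix and illustrative regularization weights `lam1` and `lam2` (these names and the exact weighting are assumptions, not the authors' formulation):

```python
import numpy as np

def sparse_group_lasso_penalty(W, lam1=0.01, lam2=0.01):
    """Sparse Group Lasso penalty on a weight matrix W (features x units).

    Each row of W is treated as one group: all outgoing weights of one
    input feature. Zeroing an entire row deselects that feature.
    """
    # element-wise l1 term: encourages sparsity within every group
    l1_term = lam1 * np.abs(W).sum()
    # group term: l2 norm per row, scaled by sqrt(group size) as in
    # the usual group-lasso weighting
    group_norms = np.sqrt((W ** 2).sum(axis=1))
    group_term = lam2 * np.sqrt(W.shape[1]) * group_norms.sum()
    return l1_term + group_term

# toy weight matrix: the second feature's group is exactly zero,
# so it contributes nothing to either term
W = np.array([[0.5, -0.2],
              [0.0,  0.0],
              [0.3,  0.1]])
penalty = sparse_group_lasso_penalty(W, lam1=0.1, lam2=0.1)
```

In training, such a penalty would simply be added to the network's data loss before backpropagation; the balance between `lam1` and `lam2` controls how much sparsity is element-wise versus group-wise.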




Updated: 2021-01-07