Hybrid Contractive Auto-encoder with Restricted Boltzmann Machine For Multiclass Classification
Arabian Journal for Science and Engineering ( IF 2.9 ) Pub Date : 2021-06-23 , DOI: 10.1007/s13369-021-05674-9
Muhammad Aamir , Nazri Mohd Nawi , Fazli Wahid , Muhammad Sadiq Hasan Zada , M. Z. Rehman , Muhammad Zulqarnain

The contractive auto-encoder (CAE) is a type of auto-encoder and a deep learning algorithm based on a multilayer training approach. It is considered one of the most powerful, efficient and robust techniques for classification and, more specifically, for feature reduction. Its problem independence, ease of implementation and ability to solve sophisticated problems distinguish it from other deep learning approaches. However, CAE struggles with data dimensionality reduction, which makes it difficult to capture the useful information within the feature space. To resolve this issue, restricted Boltzmann machine (RBM) layers were integrated with the CAE to enhance dimensionality reduction, together with a randomized factor for the hidden layer parameters. The proposed model was evaluated on four benchmark variant datasets of MNIST. The results were compared with four well-known multiclass classification approaches: standard CAE, RBM, AlexNet and an artificial neural network. The proposed model showed a considerable performance improvement over the other classification techniques. In terms of final accuracy, the proposed CAE–RBM improved by 2–4% on MNIST(basic), 9–12% on MNIST(rot), 7–12% on MNIST(bg-rand) and 7–10% on MNIST(bg-img).
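The abstract describes the hybrid only at a high level, so the following is not the authors' implementation. It is a minimal NumPy sketch of the two ingredients the abstract names: the contractive penalty of a CAE (the squared Frobenius norm of the encoder Jacobian, in closed form for a sigmoid encoder) and one contrastive-divergence (CD-1) update for a Bernoulli RBM layer stacked on the CAE code. The layer sizes, penalty weight `lam` and learning rate `lr` are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- Contractive auto-encoder (CAE) loss -----------------------------------
# For a sigmoid encoder h = sigmoid(W x + b), the squared Frobenius norm of
# the Jacobian dh/dx has the closed form sum_j (h_j (1 - h_j))^2 * sum_i W_ji^2.
def cae_loss(x, W, b, W_dec, b_dec, lam=1e-3):
    h = sigmoid(W @ x + b)                   # encoder activations (the "code")
    x_hat = sigmoid(W_dec @ h + b_dec)       # reconstruction of the input
    recon = np.mean((x - x_hat) ** 2)        # reconstruction error
    jac = np.sum((h * (1 - h)) ** 2 * np.sum(W ** 2, axis=1))  # contractive term
    return recon + lam * jac

# --- One RBM layer on top of the CAE code (CD-1 sketch) ---------------------
def rbm_cd1_step(v, W, b_vis, b_hid, lr=0.05):
    """Single contrastive-divergence update for a Bernoulli RBM."""
    p_h = sigmoid(W @ v + b_hid)             # hidden probabilities given data
    h = (rng.random(p_h.shape) < p_h).astype(float)   # sample hidden units
    p_v = sigmoid(W.T @ h + b_vis)            # reconstruct the visible units
    p_h2 = sigmoid(W @ p_v + b_hid)           # hidden probabilities given recon
    W += lr * (np.outer(p_h, v) - np.outer(p_h2, p_v))
    b_vis += lr * (v - p_v)
    b_hid += lr * (p_h - p_h2)
    return W, b_vis, b_hid

if __name__ == "__main__":
    # Hypothetical sizes: 784-d MNIST input -> 256-d CAE code -> 64-d RBM layer.
    n_in, n_code, n_rbm = 784, 256, 64
    x = rng.random(n_in)                                     # stand-in for one image
    W_enc = rng.normal(0, 0.01, (n_code, n_in)); b_enc = np.zeros(n_code)
    W_dec = rng.normal(0, 0.01, (n_in, n_code)); b_dec = np.zeros(n_in)
    print("CAE loss:", cae_loss(x, W_enc, b_enc, W_dec, b_dec))
    code = sigmoid(W_enc @ x + b_enc)                        # CAE code fed to the RBM
    W_rbm = rng.normal(0, 0.01, (n_rbm, n_code))
    rbm_cd1_step(code, W_rbm, np.zeros(n_code), np.zeros(n_rbm))
```

In a full pipeline the CAE and RBM parameters would be trained over many examples and epochs before a classifier is fitted on the reduced representation; the snippet only shows the per-example quantities involved.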




Updated: 2021-08-31