Optimize TSK Fuzzy Systems for Classification Problems: Mini-Batch Gradient Descent with Uniform Regularization and Batch Normalization
IEEE Transactions on Fuzzy Systems ( IF 11.9 ) Pub Date : 2020-12-01 , DOI: 10.1109/tfuzz.2020.2967282
Yuqi Cui , Dongrui Wu , Jian Huang

Takagi–Sugeno–Kang (TSK) fuzzy systems are flexible and interpretable machine learning models; however, they may not be easy to optimize when the data size is large and/or the data dimensionality is high. This article proposes a minibatch gradient descent (MBGD) based algorithm to efficiently and effectively train TSK fuzzy classifiers. It integrates two novel techniques: first, uniform regularization (UR), which forces the rules to have similar average contributions to the output and hence increases the generalization performance of the TSK classifier; and, second, batch normalization (BN), which extends BN from deep neural networks to TSK fuzzy classifiers to expedite convergence and improve generalization performance. Experiments on 12 UCI datasets from various application domains, with varying size and dimensionality, demonstrated that UR and BN are effective individually, and that integrating them can further improve the classification performance.
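The abstract's three ingredients (MBGD training, a UR penalty on the rules' average contributions, and BN applied inside a TSK classifier) can be sketched in a toy NumPy implementation. This is a hedged illustration, not the paper's algorithm: the data, hyperparameters, the exact form of the UR penalty, the placement of BN (here, minibatch standardization of the consequents' inputs, without inference-time running statistics), and the use of numerical gradients are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: two Gaussian blobs (not one of the paper's UCI datasets).
n, d, C, R = 200, 2, 2, 3  # samples, features, classes, rules
X = np.vstack([rng.normal(-1, 1, (n // 2, d)), rng.normal(1, 1, (n // 2, d))])
Y = np.eye(C)[np.repeat([0, 1], n // 2)]  # one-hot labels

def unpack(p):
    c = p[:R * d].reshape(R, d)                    # rule centers
    s = np.exp(p[R * d:2 * R * d]).reshape(R, d)   # rule widths (kept positive)
    W = p[2 * R * d:].reshape(R, d + 1, C)         # linear consequent parameters
    return c, s, W

def forward(p, Xb):
    c, s, W = unpack(p)
    # BN sketch (assumption): standardize the minibatch before it feeds
    # the rule consequents; running statistics for inference are omitted.
    Z = (Xb - Xb.mean(0)) / (Xb.std(0) + 1e-6)
    # Gaussian memberships; rule firing level = product over features.
    f = np.exp(-0.5 * (((Xb[:, None, :] - c) / s) ** 2).sum(-1))  # (n, R)
    fbar = f / (f.sum(1, keepdims=True) + 1e-12)   # normalized firing levels
    Z1 = np.hstack([np.ones((len(Xb), 1)), Z])     # bias + normalized inputs
    return np.einsum('nr,nd,rdc->nc', fbar, Z1, W), fbar  # TSK output

def loss(p, Xb, Yb, lam=0.1):
    out, fbar = forward(p, Xb)
    out = out - out.max(1, keepdims=True)
    P = np.exp(out) / np.exp(out).sum(1, keepdims=True)
    ce = -np.mean(np.sum(Yb * np.log(P + 1e-12), axis=1))  # cross-entropy
    # UR sketch (assumption): push each rule's average normalized firing
    # level toward 1/R so that no single rule dominates the output.
    ur = np.sum((fbar.mean(0) - 1.0 / R) ** 2)
    return ce + lam * ur

def num_grad(p, Xb, Yb, eps=1e-5):
    # Central finite differences; fine for this tiny parameter vector.
    g = np.zeros_like(p)
    for i in range(len(p)):
        e = np.zeros_like(p); e[i] = eps
        g[i] = (loss(p + e, Xb, Yb) - loss(p - e, Xb, Yb)) / (2 * eps)
    return g

p = rng.normal(0, 0.5, 2 * R * d + R * (d + 1) * C)
loss0 = loss(p, X, Y)
for epoch in range(20):                  # minibatch gradient descent
    idx = rng.permutation(n)
    for b in range(0, n, 32):
        sel = idx[b:b + 32]
        p -= 0.3 * num_grad(p, X[sel], Y[sel])
loss1 = loss(p, X, Y)                    # training loss after MBGD
```

The UR term only depends on the normalized firing levels, so it acts on the antecedent parameters (centers and widths); the paper trains all parameters with analytic gradients, which the finite-difference step above merely stands in for.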

Updated: 2020-12-01