A new perspective for Minimal Learning Machines: A lightweight approach
Neurocomputing (IF 6) Pub Date: 2020-08-01, DOI: 10.1016/j.neucom.2020.03.088
José A.V. Florêncio , Saulo A.F. Oliveira , João P.P. Gomes , Ajalmar R. Rocha Neto

Abstract This paper introduces a new procedure to train Minimal Learning Machines (MLM) for regression tasks, together with a new prediction process for MLM. A well-known drawback of the original MLM formulation is its lack of sparseness. The most recent efforts on this problem rely heavily on selecting reference points before the training and prediction steps, all based on some assumption about the data. Here, in the opposite direction, we explore another formulation of MLM that makes no assumption about the data for prior selection. Instead, our proposal, named Lightweight Minimal Learning Machine (LW-MLM), builds a regularized system that imposes sparseness. We achieve this sparsity criterion not by selection but by incorporating weighted information into the model. We validate the contributions of this paper through four types of experiments that evaluate different aspects of our proposal: prediction error, the goodness-of-fit of estimated vs. measured values, the norm values related to sparsity, and prediction error in high-dimensional settings. Based on the results, we show that LW-MLM is a valid alternative, since it achieved similar or higher accuracy than the other variants, with all methods regarded as statistically equivalent.
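The MLM pipeline the abstract refers to can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's exact formulation: it uses all training points as references (no prior selection), a plain ridge penalty as a stand-in for LW-MLM's weighted regularization, and a simple search over the training targets in place of full multilateration in the prediction step. All function names are hypothetical.

```python
import numpy as np

def pairwise_dist(A, B):
    # Euclidean distance matrix between the rows of A and the rows of B
    return np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

def train_mlm(X, y, ridge=1e-3):
    # Learn a linear map B between input-space and output-space distance
    # matrices. The ridge term regularizes the system instead of pruning
    # reference points beforehand.
    Dx = pairwise_dist(X, X)                     # distances between inputs
    Dy = pairwise_dist(y[:, None], y[:, None])   # distances between outputs
    n = Dx.shape[1]
    B = np.linalg.solve(Dx.T @ Dx + ridge * np.eye(n), Dx.T @ Dy)
    return B

def predict_mlm(B, X_train, y_train, X_new):
    # Estimate each query's output-space distances, then pick the training
    # target whose distance profile best matches them (a crude surrogate
    # for the multilateration step used in MLM prediction).
    d_hat = pairwise_dist(X_new, X_train) @ B    # predicted output distances
    # cand[0, j, k] = |y_j - y_k|: distance profile of candidate output y_j
    cand = np.abs(y_train[None, :, None] - y_train[None, None, :])
    cost = ((cand - d_hat[:, None, :]) ** 2).sum(-1)
    return y_train[cost.argmin(axis=1)]

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (80, 1))
y = np.sin(3 * X[:, 0])
B = train_mlm(X, y)
y_hat = predict_mlm(B, X, y, X)
print(np.abs(y_hat - y).mean())  # mean absolute training error
```

The key design point mirrored here is that sparsity-related work in MLM usually trades accuracy for a smaller reference set chosen up front, whereas a regularized full-reference system sidesteps that choice entirely.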

Updated: 2020-08-01