On Mean Absolute Error for Deep Neural Network Based Vector-to-Vector Regression
IEEE Signal Processing Letters (IF 3.9), Pub Date: 2020-01-01, DOI: 10.1109/lsp.2020.3016837
Jun Qi, Jun Du, Sabato Marco Siniscalchi, Xiaoli Ma, Chin-Hui Lee

In this paper, we exploit the properties of mean absolute error (MAE) as a loss function for deep neural network (DNN)-based vector-to-vector regression. The goal of this work is two-fold: (i) presenting performance bounds for MAE, and (ii) demonstrating new properties of MAE that make it more appropriate than mean squared error (MSE) as a loss function for DNN-based vector-to-vector regression. First, we show that a generalized upper bound for DNN-based vector-to-vector regression can be ensured by leveraging the known Lipschitz continuity property of MAE. Next, we derive a new generalized upper bound in the presence of additive noise. Finally, in contrast to conventional MSE, commonly adopted to approximate Gaussian errors in regression, we show that MAE can be interpreted as an error modeled by a Laplacian distribution. Speech enhancement experiments are conducted to corroborate our proposed theorems and validate the performance advantages of MAE over MSE for DNN-based regression.
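The Laplacian reading of MAE follows the standard maximum-likelihood argument. A minimal sketch, in our own notation (F, b, N, q), not necessarily the paper's:

```latex
% Assume the components of the error e = y - F(x) are i.i.d. Laplacian with scale b:
\[
  p(e) = \frac{1}{2b}\,\exp\!\left(-\frac{|e|}{b}\right).
\]
% Over N training pairs (x_n, y_n) with q-dimensional targets, the negative
% log-likelihood is
\[
  -\log L = Nq\,\log(2b) + \frac{1}{b}\sum_{n=1}^{N} \lVert y_n - F(x_n)\rVert_1 ,
\]
% so maximizing the likelihood over F is exactly minimizing the MAE term
% \(\sum_n \lVert y_n - F(x_n)\rVert_1\); the analogous Gaussian assumption
% recovers MSE.
```

For concreteness, here is a minimal PyTorch sketch of swapping MAE for MSE as the training objective in DNN-based vector-to-vector regression. The architecture, dimensions, data, and hyperparameters are illustrative assumptions, not the paper's speech-enhancement setup:

```python
# Toy DNN-based vector-to-vector regression trained with MAE (L1) vs. MSE.
import torch
import torch.nn as nn

def make_regressor(d_in=40, d_hidden=256, d_out=40):
    # Plain feed-forward vector-to-vector mapping R^{d_in} -> R^{d_out}.
    return nn.Sequential(
        nn.Linear(d_in, d_hidden), nn.ReLU(),
        nn.Linear(d_hidden, d_hidden), nn.ReLU(),
        nn.Linear(d_hidden, d_out),
    )

def train(model, loss_fn, x, y, epochs=100, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()

torch.manual_seed(0)
x = torch.randn(1024, 40)            # input feature vectors (stand-in data)
y = x + 0.1 * torch.randn(1024, 40)  # noisy target vectors (stand-in data)

mae_model, mse_model = make_regressor(), make_regressor()
print("MAE-trained final loss:", train(mae_model, nn.L1Loss(), x, y))
print("MSE-trained final loss:", train(mse_model, nn.MSELoss(), x, y))
```

The only change between the two runs is the loss function; `nn.L1Loss` is the MAE objective and `nn.MSELoss` the MSE objective.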
