Analytical and Simple Form of Shrinkage Functions for Non-Convex Penalty Functions in Fused Lasso Algorithm
International Journal on Artificial Intelligence Tools (IF 1.1) Pub Date: 2020-09-30, DOI: 10.1142/s0218213020500207
Pichid Kittisuwan 1

In some circumstances, the performance of machine learning (ML) tasks depends on the quality of the signal (data) they process. Pre-processing techniques, such as reconstruction and denoising methods, are therefore important in ML tasks. Among reconstruction (estimation) methods, the fused lasso algorithm with a non-convex penalty function is efficient when the signal is corrupted by additive white Gaussian noise (AWGN). This paper therefore proposes new shrinkage functions for two non-convex penalty functions, modified arctangent and exponential models, in the fused lasso formulation. Many works present a shrinkage function for the arctangent penalty function, but no closed-form solution has been available, so this shrinkage function has required numerical solution; an analytical solution is derived in this paper. Moreover, a shrinkage function for the modified exponential penalty function is proposed; it is obtained from a simple iterative method, the fixed-point algorithm. We demonstrate the proposed methods through simulations with standard one-dimensional signals contaminated by AWGN. The proposed techniques are compared with traditional estimation methods, such as total variation (TV) and wavelet denoising. In the experimental results, our proposed methods outperform several existing methods both in visual quality and in root mean square error (RMSE). In fact, the proposed methods preserve the features of the noise-free signal better than the compared methods: the denoised signals they produce are less over-smoothed than those produced by the compared methods.
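To illustrate the fixed-point idea the abstract describes, the sketch below solves the scalar shrinkage subproblem argmin_x 0.5(x - y)^2 + lam*phi(x) for an exponential-type penalty phi(x) = (1/a)(1 - exp(-a|x|)). This is a generic illustration, not the paper's exact formulation: the penalty form, parameter names (`lam`, `a`), and the thresholding rule are assumptions, and the iteration is only guaranteed to contract when lam*a < 1.

```python
import numpy as np

def exp_shrink(y, lam, a=1.0, n_iter=20):
    """Fixed-point shrinkage for an exponential-type penalty
    phi(x) = (1/a) * (1 - exp(-a*|x|)) (illustrative form).

    Solves, elementwise, x* = argmin_x 0.5*(x - y)**2 + lam*phi(x)
    by iterating the stationarity condition
        x <- y - lam * sign(y) * exp(-a*|x|).
    """
    y = np.asarray(y, dtype=float)
    x = y.copy()
    for _ in range(n_iter):
        # one fixed-point step; the minimizer keeps the sign of y
        x = y - lam * np.sign(y) * np.exp(-a * np.abs(x))
    # inputs below the threshold shrink to exactly zero,
    # as in ordinary soft thresholding
    x[np.abs(y) <= lam] = 0.0
    return x
```

Because the penalty derivative exp(-a|x|) decays for large |x|, large coefficients are shrunk far less than under the soft threshold (y - lam), which is the usual motivation for non-convex penalties in denoising.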
