Smart Gradient -- An Adaptive Technique for Improving Gradient Estimation
arXiv - CS - Numerical Analysis, Pub Date: 2021-06-14, DOI: arxiv-2106.07313
Esmail Abdul Fattah, Janet Van Niekerk, Haavard Rue

Computing the gradient of a function provides fundamental information about its behavior. This information is essential for several applications and algorithms across various fields. Common applications that require gradients are optimization techniques such as stochastic gradient descent, Newton's method, and trust-region methods. However, these methods usually require a numerical computation of the gradient at every iteration, which is prone to numerical errors. We propose a simple limited-memory technique for improving the accuracy of a numerically computed gradient in this gradient-based optimization framework by exploiting (1) a coordinate transformation of the gradient and (2) the history of previously taken descent directions. The method is verified empirically by extensive experimentation on both test functions and real-data applications. The proposed method is implemented in the R package smartGrad and in C++.
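The abstract only outlines the idea, so the following is a minimal NumPy sketch of the general principle: estimate directional derivatives by central differences along an orthonormal basis built from recent descent directions, rather than along the canonical axes. The function name smart_gradient, the Gram-Schmidt padding with canonical basis vectors, and the step size h are illustrative assumptions here, not the smartGrad package's actual API.

```python
import numpy as np

def smart_gradient(f, x, directions, h=1e-5):
    """Central-difference gradient of f at x, estimated along an
    orthonormal basis derived from recent descent directions.

    NOTE: hypothetical sketch of the idea, not the smartGrad API.
    f          : callable R^n -> R
    x          : current iterate, shape (n,)
    directions : recent descent directions (most recent first)
    h          : finite-difference step size (assumed value)
    """
    n = x.size
    # Orthonormalize the direction history with modified Gram-Schmidt,
    # padding with canonical basis vectors so the basis spans R^n.
    basis = []
    for d in list(directions) + [np.eye(n)[i] for i in range(n)]:
        v = np.array(d, dtype=float)
        for b in basis:
            v -= (b @ v) * b
        norm = np.linalg.norm(v)
        if norm > 1e-12:          # skip (near-)linearly dependent vectors
            basis.append(v / norm)
        if len(basis) == n:
            break
    G = np.column_stack(basis)    # orthonormal columns, so G @ G.T = I

    # Directional derivative along each basis vector by central differences.
    g = np.array([(f(x + h * G[:, i]) - f(x - h * G[:, i])) / (2 * h)
                  for i in range(n)])

    # Map back to canonical coordinates: grad f = sum_i (g_i . grad f) g_i.
    return G @ g

# Usage on the Rosenbrock test function (minimum at (1, 1)), with a
# made-up earlier descent direction standing in for optimizer history:
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
x = np.array([-1.2, 1.0])
prev_steps = [np.array([0.8, -0.3])]
print(smart_gradient(rosen, x, prev_steps))
```

Because the basis is orthonormal, the mapped-back estimate is exact up to finite-difference error; the benefit claimed by the paper comes from aligning the difference directions with where the iterates have actually been moving.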

Updated: 2021-06-15