ML-descent: An optimization algorithm for full-waveform inversion using machine learning
Geophysics (IF 3.0), Pub Date: 2020-10-21, DOI: 10.1190/geo2019-0641.1
Bingbing Sun, Tariq Alkhalifah

Full-waveform inversion (FWI) is a nonlinear optimization problem, and a typical optimization algorithm, such as the nonlinear conjugate gradient or limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) method, iteratively updates the model mainly along the gradient-descent direction of the misfit function or a slightly modified version of it. Based on the concept of meta-learning, rather than using a hand-designed optimization algorithm, we have trained the machine (represented by a neural network) to learn an optimization algorithm, called "ML-descent," and apply it in FWI. Using a recurrent neural network (RNN), we take the gradient of the misfit function as the input, and the hidden states in the RNN incorporate the history information of the gradient, similar to the LBFGS algorithm. However, unlike the fixed form of the LBFGS algorithm, the machine-learning (ML) version evolves in response to the gradient. The loss function for training is formulated as a weighted summation of the L2 norm of the data residuals in the original inverse problem. As with any well-defined nonlinear inverse problem, the optimization can be locally approximated by a linear convex problem; thus, to accelerate the training, we train the neural network by minimizing randomly generated quadratic functions instead of performing time-consuming FWIs. To further improve the accuracy and robustness, we use a variational autoencoder that projects and represents the model in a latent space. We use the Marmousi and overthrust examples to demonstrate that the ML-descent method converges faster than, and outperforms, conventional optimization algorithms. The energy in the deeper part of the models can be recovered by ML-descent even when the pseudoinverse of the Hessian is not incorporated in the FWI update.
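The core idea, a recurrent cell that maps the current gradient plus a hidden "gradient memory" to an update step, and that can be exercised on random quadratic problems instead of full FWIs, can be illustrated with a minimal sketch. This is not the authors' implementation: the coordinate-wise tanh cell, the hand-picked weights `W_h`, `W_g`, `W_o`, the damping term, and the random quadratic test problem are all illustrative assumptions (in the paper, these weights would be learned by meta-training on many such quadratics).

```python
import numpy as np

rng = np.random.default_rng(0)

# A random strictly convex quadratic misfit f(m) = 0.5 m'Hm - b'm,
# standing in for the locally convex approximation of the FWI problem.
n = 8
A = rng.standard_normal((n, n))
H = A.T @ A + n * np.eye(n)          # symmetric positive-definite Hessian
b = rng.standard_normal(n)
grad = lambda m: H @ m - b           # gradient of the misfit

# Tiny coordinate-wise recurrent cell: the hidden state h accumulates
# gradient history, playing the role of LBFGS's curvature memory.
# These weights are hand-picked for the sketch, not meta-learned.
W_h, W_g, W_o = 0.9, 0.1, -0.05

def ml_descent(m0, iters=300):
    m, h = m0.copy(), np.zeros_like(m0)
    for _ in range(iters):
        g = grad(m)
        h = np.tanh(W_h * h + W_g * g)   # recurrent gradient memory
        m = m + W_o * h - 0.02 * g       # learned-style step + damping
    return m

m_star = np.linalg.solve(H, b)           # exact minimizer for reference
m_hat = ml_descent(np.zeros(n))
print(np.linalg.norm(m_hat - m_star))    # small residual after 300 steps
```

In the paper, the cell weights are themselves optimized (backpropagating through the unrolled update sequence) so that the learned rule minimizes the misfit quickly across a distribution of such quadratics; the sketch only shows the structure of the resulting update rule.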

Updated: 2020-10-27