Hierarchical deep-learning neural networks: finite elements and beyond
Computational Mechanics (IF 3.7), Pub Date: 2020-10-14, DOI: 10.1007/s00466-020-01928-9
Lei Zhang, Lin Cheng, Hengyang Li, Jiaying Gao, Cheng Yu, Reno Domel, Yang Yang, Shaoqiang Tang, Wing Kam Liu

The hierarchical deep-learning neural network (HiDeNN) is systematically developed by constructing structured deep neural networks (DNNs) in a hierarchical manner, and a special case of HiDeNN that represents the finite element method (HiDeNN-FEM for short) is established. In HiDeNN-FEM, the weights and biases are functions of the nodal positions; hence the training process includes the optimization of the nodal coordinates. This is the spirit of r-adaptivity, and it increases both the local and global accuracy of the interpolants. By fixing the number of hidden layers and increasing the number of neurons while training the DNNs, rh-adaptivity can be achieved, which further improves the accuracy of the solutions. The generalization to rational functions is achieved through three fundamental building blocks for constructing deep hierarchical neural networks: linear functions, multiplication, and inversion. With these building blocks, the class of deep-learning interpolation functions is demonstrated for interpolation theories such as Lagrange polynomials, NURBS, isogeometric analysis, the reproducing kernel particle method, and others. In HiDeNN-FEM, enrichment through the multiplication of neurons is equivalent to the enrichment in standard finite element methods, that is, the generalized, extended, and partition-of-unity finite element methods. Numerical examples performed with HiDeNN-FEM exhibit reduced approximation error compared with the standard FEM. Finally, an outlook for generalizing HiDeNN to high-order continuity in multiple dimensions and to topology optimization is illustrated through the hierarchy of the proposed DNNs.
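The core idea summarized above can be illustrated with a minimal sketch, assuming a 1D mesh: a linear finite element shape function (hat function) can be written exactly as a small ReLU network whose weights and biases are functions of the nodal coordinates, and optimizing an interior node position to reduce interpolation error is the r-adaptivity described in the abstract. The function names, the target function, and the finite-difference optimizer below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def hat(x, xl, xc, xr):
    """Linear FE shape function for node xc, expressed as four ReLU 'neurons'.

    The weights/biases (1/hl, 1/hr, and the node shifts) are functions of the
    nodal coordinates (xl, xc, xr) -- the HiDeNN-FEM construction in 1D.
    """
    relu = lambda t: np.maximum(t, 0.0)
    hl, hr = xc - xl, xr - xc
    return (relu((x - xl) / hl) - relu((x - xc) / hl)
            - relu((x - xc) / hr) + relu((x - xr) / hr))

def interpolate(x, nodes, values):
    """Piecewise-linear interpolant sum_i u_i N_i(x)."""
    # pad with ghost nodes so the boundary hat functions are well defined
    ghost = np.concatenate([[2 * nodes[0] - nodes[1]], nodes,
                            [2 * nodes[-1] - nodes[-2]]])
    return sum(values[i] * hat(x, ghost[i], ghost[i + 1], ghost[i + 2])
               for i in range(len(nodes)))

def l2_error(nodes, f, xs):
    """Discrete L2 interpolation error on sample points xs."""
    return np.sqrt(np.mean((interpolate(xs, nodes, f(nodes)) - f(xs)) ** 2))

f = lambda x: x ** 3                    # illustrative target function
xs = np.linspace(0.0, 1.0, 1001)
nodes = np.array([0.0, 0.5, 1.0])       # one movable interior node

# r-adaptivity: move the interior node by finite-difference gradient descent
eps, lr = 1e-6, 0.05
for _ in range(200):
    x1 = nodes[1]
    g = (l2_error(np.array([0.0, x1 + eps, 1.0]), f, xs)
         - l2_error(np.array([0.0, x1 - eps, 1.0]), f, xs)) / (2 * eps)
    nodes[1] = np.clip(x1 - lr * g, 0.05, 0.95)
```

After training, the interior node drifts toward the steep part of the target function, and the interpolation error is lower than on the uniform mesh -- the same mechanism the paper exploits, with the nodal coordinates appearing as trainable network parameters.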

Updated: 2020-10-14