Assessment of end-to-end and sequential data-driven learning for non-intrusive modeling of fluid flows
Advances in Computational Mathematics (IF 1.7) Pub Date: 2020-06-16, DOI: 10.1007/s10444-020-09753-7
Shivakanth Chary Puligilla, Balaji Jayaraman

In this work, we explore the advantages of the end-to-end learning of multilayer maps offered by feedforward neural networks (FFNNs) for learning and predicting dynamics from transient flow data. While data-driven learning (and machine learning in general) depends on the quality and quantity of data relative to the underlying dynamics of the system, it is important for a given data-driven learning architecture to make the most of the available information. To this end, we focus on data-driven problems where one needs to predict a reasonable time into the future with limited data availability. Such time-series prediction of full and reduced states differs from many applications of machine learning, such as pattern recognition and parameter estimation, that leverage large datasets. In this study, we interpret the suite of recently popular data-driven learning approaches that approximate the dynamics as a Markov linear model in a higher-dimensional feature space as a multilayer architecture similar to a neural network. However, there are two key differences: (i) Markov linear models employ layer-wise learning in the sense of linear regression, whereas neural networks represent end-to-end learning in the sense of nonlinear regression. We show, through examples of data-driven modeling of canonical fluid flows, that FFNN-like methods owe their success to leveraging the extended learning-parameter space available in end-to-end learning without overfitting to the data. In this sense, the Markov linear models behave as shallow neural networks. (ii) The second major difference is that while the FFNN is by design a forward architecture, the class of Markov linear methods that approximate the Koopman operator is bi-directional, i.e., such methods incorporate both forward and backward maps in order to learn a linear map that can provide insight into spectral characteristics. In this study, we assess both the reconstruction and predictive performance of temporally evolving dynamics using limited data for canonical nonlinear fluid flows, including the transient cylinder wake and the instability-driven dynamics of buoyant Boussinesq flow.
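The contrast drawn in the abstract between layer-wise linear regression and end-to-end nonlinear regression can be made concrete with a small illustration. The sketch below is not taken from the paper; the toy damped-oscillator data, the polynomial lifting dictionary, and the network size are all assumptions chosen for brevity. It fits (i) a one-step Markov linear model by least-squares regression in a lifted feature space, in the spirit of DMD/EDMD-type Koopman approximations, and (ii) a single-hidden-layer feedforward network trained end to end on the same one-step prediction task, then rolls both models forward in time.

```python
# Minimal sketch (assumed setup, not the authors' code) contrasting:
#   (i)  a Markov linear model: a one-step map fit by least-squares regression
#        in a lifted (polynomial) feature space -- layer-wise, linear learning;
#   (ii) an end-to-end FFNN with one hidden layer trained by gradient descent
#        to learn the same one-step map nonlinearly.
import numpy as np

rng = np.random.default_rng(0)

# --- Toy "flow" data: a damped nonlinear oscillator sampled in time ---
def step(x, dt=0.05):
    x1, x2 = x
    return np.array([x1 + dt * x2, x2 + dt * (-x1 - 0.1 * x2 - x1 ** 3)])

X = [np.array([1.0, 0.0])]
for _ in range(400):
    X.append(step(X[-1]))
X = np.array(X)                        # snapshots, shape (401, 2)
X_in, X_out = X[:-1], X[1:]            # one-step training pairs

# --- (i) Markov linear model: least-squares fit of x_{k+1} ~ A^T phi(x_k) ---
def lift(x):                           # assumed polynomial lifting dictionary
    x1, x2 = x[..., 0], x[..., 1]
    return np.stack([x1, x2, x1 ** 2, x1 * x2, x2 ** 2, x1 ** 3], axis=-1)

Phi_in = lift(X_in)                                   # (400, 6) lifted inputs
A, *_ = np.linalg.lstsq(Phi_in, X_out, rcond=None)    # layer-wise linear regression

# --- (ii) End-to-end FFNN: one hidden tanh layer, plain gradient descent ---
W1 = 0.5 * rng.standard_normal((2, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 2)); b2 = np.zeros(2)
lr = 1e-2
for _ in range(5000):
    H = np.tanh(X_in @ W1 + b1)                       # hidden activations
    pred = H @ W2 + b2
    err = pred - X_out                                # one-step prediction error
    gW2 = H.T @ err / len(X_in); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)                  # backprop through tanh
    gW1 = X_in.T @ dH / len(X_in); gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# --- Roll both learned maps (and the truth) forward from the last snapshot ---
def rollout(f, x0, n=50):
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))
    return np.array(xs)

lin_path = rollout(lambda x: lift(x) @ A, X[-1])
nn_path = rollout(lambda x: np.tanh(x @ W1 + b1) @ W2 + b2, X[-1])
true_path = rollout(step, X[-1])
print("linear-model rollout error:", np.abs(lin_path - true_path).max())
print("FFNN rollout error:        ", np.abs(nn_path - true_path).max())
```

Running the script prints the maximum rollout error of each surrogate against the true map; the point is only to make explicit the two regression styles discussed above, a layer-wise linear fit in a lifted feature space versus an end-to-end nonlinear fit of the full multilayer map.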

Updated: 2020-06-16