Deep transfer operator learning for partial differential equations under conditional shift
Nature Machine Intelligence (IF 23.8), Pub Date: 2022-12-01, DOI: 10.1038/s42256-022-00569-2
Somdatta Goswami, Katiana Kontolati, Michael D. Shields, George Em Karniadakis

Transfer learning enables the transfer of knowledge gained while learning to perform one task (source) to a related but different task (target), hence addressing the expense of data acquisition and labelling, potential computational power limitations and dataset distribution mismatches. We propose a new transfer learning framework for task-specific learning (functional regression in partial differential equations) under conditional shift based on the deep operator network (DeepONet). Task-specific operator learning is accomplished by fine-tuning task-specific layers of the target DeepONet using a hybrid loss function that allows for the matching of individual target samples while also preserving the global properties of the conditional distribution of the target data. Inspired by conditional embedding operator theory, we minimize the statistical distance between labelled target data and the surrogate prediction on unlabelled target data by embedding conditional distributions onto a reproducing kernel Hilbert space. We demonstrate the advantages of our approach for various transfer learning scenarios involving nonlinear partial differential equations under diverse conditions due to shifts in the geometric domain and model dynamics. Our transfer learning framework enables fast and efficient learning of heterogeneous tasks despite considerable differences between the source and target domains.
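The hybrid loss described above combines a pointwise fit on labelled target samples with a distribution-level penalty that keeps the surrogate's outputs statistically close to the target data. The sketch below illustrates this idea in NumPy, using a plain squared maximum mean discrepancy (MMD) with an RBF kernel as a simple stand-in for the paper's conditional-embedding distance in a reproducing kernel Hilbert space; the function names, the weighting parameter `lam`, and the kernel bandwidth `gamma` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF (Gaussian) kernel matrix between the rows of A and B.
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def mmd2(Y_true, Y_pred, gamma=1.0):
    # Squared maximum mean discrepancy between two output samples:
    # a kernel-space distance between their empirical distributions,
    # standing in here for the conditional-embedding discrepancy.
    Kyy = rbf_kernel(Y_true, Y_true, gamma)
    Kpp = rbf_kernel(Y_pred, Y_pred, gamma)
    Kyp = rbf_kernel(Y_true, Y_pred, gamma)
    return Kyy.mean() + Kpp.mean() - 2.0 * Kyp.mean()

def hybrid_loss(Y_true, Y_pred, lam=0.1, gamma=1.0):
    # Pointwise matching of individual labelled samples (MSE) plus a
    # distribution-level penalty (MMD^2), weighted by lam.
    mse = np.mean((Y_true - Y_pred) ** 2)
    return mse + lam * mmd2(Y_true, Y_pred, gamma)
```

In a fine-tuning loop, only the task-specific (e.g. final) layers of the target DeepONet would be updated against this loss, so the distributional term regularizes the few labelled target samples with information from the unlabelled ones.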




Updated: 2022-12-02