Direct shape optimization through deep reinforcement learning
Journal of Computational Physics (IF 4.1), Pub Date: 2020-12-23, DOI: 10.1016/j.jcp.2020.110080
Jonathan Viquerat, Jean Rabault, Alexander Kuhnle, Hassan Ghraieb, Aurélien Larcher, Elie Hachem

Deep Reinforcement Learning (DRL) has recently spread into a range of domains within physics and engineering, with multiple remarkable achievements. Still, much remains to be explored before the capabilities of these methods are well understood. In this paper, we present the first application of DRL to direct shape optimization. We show that, given adequate reward, an artificial neural network trained through DRL is able to generate optimal shapes on its own, without any prior knowledge and in a constrained time. While we choose here to apply this methodology to aerodynamics, the optimization process itself is agnostic to details of the use case, and thus our work paves the way to new generic shape optimization strategies both in fluid mechanics, and more generally in any domain where a relevant reward function can be defined.
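To illustrate the kind of training loop the abstract refers to, the sketch below sets up a one-step episode in which a stochastic policy proposes a set of shape parameters, a placeholder reward stands in for the aerodynamic objective a flow solver would return, and a REINFORCE-style update nudges the policy toward better-scoring shapes. This is a minimal, dependency-free sketch under stated assumptions, not the authors' implementation: the Gaussian policy over parameters stands in for the paper's neural network, and `cfd_reward`, `n_params`, and all hyperparameters are hypothetical.

```python
import numpy as np

# Toy sketch of direct shape optimization with a policy-gradient update.
# A Gaussian policy proposes shape parameters (think control points of a
# parametric shape); a placeholder reward replaces the CFD evaluation.

n_params = 6             # number of shape parameters proposed per episode (assumed)
mu = np.zeros(n_params)  # mean of the Gaussian policy (stands in for a network)
sigma = 0.1              # fixed exploration noise
lr = 0.005               # learning rate for the policy mean

def cfd_reward(shape_params):
    """Hypothetical stand-in for an aerodynamic objective (e.g. lift-to-drag).
    A real setup would mesh the candidate shape and run a flow solver."""
    target = np.linspace(-0.5, 0.5, n_params)   # fictitious optimal shape
    return -np.sum((shape_params - target) ** 2)

baseline = cfd_reward(mu)  # running baseline to reduce gradient variance
for episode in range(2000):
    action = mu + sigma * np.random.randn(n_params)  # sample a candidate shape
    reward = cfd_reward(action)                      # evaluate it (here: analytic)
    advantage = reward - baseline
    baseline = 0.9 * baseline + 0.1 * reward
    # REINFORCE: grad of log pi(a | mu) w.r.t. mu is (a - mu) / sigma^2
    mu += lr * advantage * (action - mu) / sigma**2

print("learned shape parameters:", np.round(mu, 3))
```

In the setting the paper describes, the policy would instead be a neural network, the reward would come from a CFD simulation of the generated shape, and each episode could involve several shape refinements rather than a single proposal; the loop structure, however, stays the same.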




Updated: 2021-01-04