Deep reinforcement learning control of hydraulic fracturing
Computers & Chemical Engineering (IF 4.3) Pub Date: 2021-08-11, DOI: 10.1016/j.compchemeng.2021.107489
Mohammed Saad Faizan Bangi, Joseph Sang-Il Kwon
Hydraulic fracturing is a technique for extracting oil and gas from shale formations, and obtaining a uniform proppant concentration along the fracture is key to its productivity. Recently, various model predictive control schemes have been proposed to achieve this objective. However, such controllers require an accurate and computationally efficient model, which is difficult to obtain given the complexity of the process and the uncertainties in rock formation properties. In this article, we design a model-free, data-based reinforcement learning controller that learns an optimal control policy through interactions with the process. The deep reinforcement learning (DRL) controller is based on the Deep Deterministic Policy Gradient (DDPG) algorithm, which combines the Deep Q-network with the actor-critic framework. Additionally, we utilize dimensionality reduction and transfer learning to speed up the learning process. We show that the controller learns an optimal policy that yields a uniform proppant concentration despite the complex nature of the process, while satisfying various input constraints.
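The actor-critic coupling at the heart of DDPG can be illustrated with a minimal sketch. This is not the authors' controller or process model: it is a toy 1-D problem with linear-in-features approximators, chosen so that the two updates DDPG interleaves — a TD-style critic regression and a deterministic policy gradient step on the actor — are visible without the deep networks, replay buffer, and target networks the full algorithm adds. All names and the toy reward are illustrative assumptions.

```python
import numpy as np

# Toy sketch of the DDPG-style actor-critic loop (NOT the paper's controller).
# Full DDPG uses deep networks, a replay buffer, and slowly-updated target
# networks; here both approximators are linear-in-features so the core
# updates are easy to follow.

rng = np.random.default_rng(0)

def phi(s, a):
    # Hand-chosen quadratic features for the critic Q(s, a) = theta . phi(s, a).
    return np.array([1.0, s, a, s * a, a * a])

theta = np.zeros(5)   # critic weights
w = 0.0               # actor: deterministic policy a(s) = w * s
lr_c, lr_a, noise = 0.05, 0.02, 0.3

for _ in range(8000):
    s = rng.uniform(0.0, 1.0)
    a = w * s + noise * rng.standard_normal()   # exploration noise on the action
    r = -(a - 0.5 * s) ** 2                     # toy reward: optimum at a = 0.5 s

    # Critic: one-step TD/regression update (single-step episodes, so gamma = 0).
    delta = r - theta @ phi(s, a)
    theta += lr_c * delta * phi(s, a)

    # Actor: ascend dQ/da * da/dw -- the deterministic policy gradient.
    a_pi = w * s
    dQ_da = theta[2] + theta[3] * s + 2.0 * theta[4] * a_pi
    w += lr_a * dQ_da * s

print(f"learned actor gain w = {w:.3f} (optimum is 0.5)")
```

The same two updates drive the paper's controller, with neural networks in place of the linear models and fracture-process states and pumping-schedule inputs in place of the scalar toy variables.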



Updated: 2021-08-23