A deep reinforcement learning approach for chemical production scheduling
Computers & Chemical Engineering (IF 3.9) | Pub Date: 2020-06-19 | DOI: 10.1016/j.compchemeng.2020.106982
Christian D. Hubbs, Can Li, Nikolaos V. Sahinidis, Ignacio E. Grossmann, John M. Wassick

This work examines the application of deep reinforcement learning to a chemical production scheduling process in order to account for uncertainty and achieve online, dynamic scheduling, and benchmarks the results against a mixed-integer linear programming (MILP) model that schedules each time interval on a receding horizon basis. An industrial example is used as a case study for comparing the two approaches. Results show that the reinforcement learning method outperforms the naive MILP approaches and is competitive with a shrinking horizon MILP approach in terms of profitability, inventory levels, and customer service. The speed and flexibility of the reinforcement learning system are promising for achieving real-time optimization of a scheduling system, but there is reason to pursue integration of data-driven deep reinforcement learning methods and model-based mathematical optimization approaches.
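To make the contrast concrete, the sketch below is a minimal, self-contained Python illustration of the reinforcement learning side: an agent choosing, at each planning interval, which product to produce in a toy environment with uncertain demand. The environment, products, prices, batch size, and demand distribution are all hypothetical stand-ins for illustration, not the paper's actual industrial model, and the random placeholder policy marks where a trained deep neural network policy would act.

import random

# Toy single-line scheduling environment (illustrative only): at each
# interval the agent picks one product grade to run; demand is uncertain
# and is revealed only after the scheduling decision is made.
PRODUCTS = ["A", "B", "C"]
PRICE = {"A": 10.0, "B": 12.0, "C": 9.0}  # revenue per unit sold
HOLDING_COST = 0.5                        # cost per unit of leftover inventory
BATCH_SIZE = 100                          # units produced per interval

class ToySchedulingEnv:
    def __init__(self, horizon=30, seed=0):
        self.horizon = horizon
        self.rng = random.Random(seed)

    def reset(self):
        self.t = 0
        self.inventory = {p: 0 for p in PRODUCTS}
        return self._state()

    def _state(self):
        # State = time step plus inventory levels; the paper's state would
        # also carry the order book and the schedule in progress.
        return (self.t, tuple(self.inventory[p] for p in PRODUCTS))

    def step(self, action):
        # Produce the chosen product, then realize stochastic demand.
        self.inventory[PRODUCTS[action]] += BATCH_SIZE
        reward = 0.0
        for p in PRODUCTS:
            demand = self.rng.randint(20, 80)
            sold = min(self.inventory[p], demand)
            self.inventory[p] -= sold
            reward += PRICE[p] * sold - HOLDING_COST * self.inventory[p]
        self.t += 1
        return self._state(), reward, self.t >= self.horizon

# Online, dynamic scheduling loop: decisions are made interval by interval
# as uncertainty unfolds, rather than fixing a schedule up front.
env = ToySchedulingEnv()
state, done, profit = env.reset(), False, 0.0
while not done:
    action = random.randrange(len(PRODUCTS))  # placeholder for a trained policy
    state, reward, done = env.step(action)
    profit += reward
print(f"episode profit: {profit:.1f}")

The receding horizon MILP baseline described above would instead, at each interval, re-solve a deterministic optimization over a finite look-ahead window and implement only the first decision, rolling the window forward as new information arrives.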



Updated: 2020-06-28