Optimizing Chemical Reactions with Deep Reinforcement Learning
ACS Central Science (IF 18.2), Pub Date: 2017-12-15, DOI: 10.1021/acscentsci.7b00492
Zhenpeng Zhou, Xiaocheng Li, Richard N. Zare

Deep reinforcement learning was employed to optimize chemical reactions. Our model iteratively records the results of a chemical reaction and chooses new experimental conditions to improve the reaction outcome. This model outperformed a state-of-the-art black-box optimization algorithm, using 71% fewer steps on both simulated and real reactions. Furthermore, we introduced an efficient exploration strategy that draws the reaction conditions from certain probability distributions, which improved the regret from 0.062 to 0.039 compared with a deterministic policy. Combining the efficient exploration policy with accelerated microdroplet reactions, optimal reaction conditions were determined within 30 min for the four reactions considered, and a better understanding of the factors that control microdroplet reactions was reached. Moreover, our model showed better performance after training on reactions with similar or even dissimilar underlying mechanisms, which demonstrates its learning ability.
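
The abstract describes the optimization loop only in prose. For illustration, below is a minimal Python sketch of such an iterate-record-propose loop with distribution-based exploration: conditions are sampled from a Gaussian around the best result so far instead of being chosen deterministically. This is a simplified random-search stand-in, not the paper's recurrent-network policy; the reaction_yield function, the two condition variables, and the exploration width sigma are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def reaction_yield(conditions):
    """Hypothetical stand-in for a real experiment: returns a noisy
    yield as a function of two normalized conditions (e.g. temperature,
    flow rate), peaking at an unknown optimum."""
    optimum = np.array([0.6, 0.3])
    return float(np.exp(-8.0 * np.sum((conditions - optimum) ** 2))
                 + rng.normal(0.0, 0.01))

def propose(history, sigma=0.1):
    """Stochastic exploration: sample the next conditions from a Gaussian
    centered on the best conditions seen so far, rather than
    deterministically repeating the current best."""
    if not history:
        return rng.uniform(0.0, 1.0, size=2)  # cold start: random conditions
    best_cond, _ = max(history, key=lambda h: h[1])
    return np.clip(best_cond + rng.normal(0.0, sigma, size=2), 0.0, 1.0)

history = []  # (conditions, observed yield) pairs, recorded each iteration
for step in range(30):
    cond = propose(history)
    y = reaction_yield(cond)  # run (or simulate) the reaction
    history.append((cond, y))

best_cond, best_yield = max(history, key=lambda h: h[1])
print(f"best conditions {best_cond.round(3)}, yield {best_yield:.3f}")
```

Sampling from a distribution rather than always exploiting the current best is what gives the exploration benefit the abstract quantifies via regret; the actual paper replaces this hand-written proposal rule with a learned deep policy.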

Updated: 2017-12-15