Randomized SMILES strings improve the quality of molecular generative models
Journal of Cheminformatics (IF 7.1), Pub Date: 2019-11-21, DOI: 10.1186/s13321-019-0393-0
Josep Arús-Pous 1,2, Simon Viet Johansson 1, Oleksii Prykhodko 1, Esben Jannik Bjerrum 1, Christian Tyrchan 3, Jean-Louis Reymond 2, Hongming Chen 1, Ola Engkvist 1

Recurrent Neural Networks (RNNs) trained on sets of molecules represented as unique (canonical) SMILES strings have shown the capacity to create large chemical spaces of valid and meaningful structures. Herein we perform an extensive benchmark on models trained with subsets of GDB-13 of different sizes (1 million, 10,000 and 1,000 molecules), with different SMILES variants (canonical, randomized and DeepSMILES), with two different recurrent cell types (LSTM and GRU) and with different hyperparameter combinations. To guide the benchmarks, new metrics were developed that measure how well a model has generalized the training set. The generated chemical space is evaluated with respect to its uniformity, closedness and completeness. Results show that models using LSTM cells trained with 1 million randomized SMILES, a non-unique molecular string representation, are able to generalize to larger chemical spaces than the other approaches and represent the target chemical space more accurately. Specifically, a model trained with randomized SMILES was able to generate almost all molecules from GDB-13 with a quasi-uniform probability. Models trained with smaller samples show an even bigger improvement when trained with randomized SMILES. Additionally, models were trained on molecules obtained from ChEMBL and again illustrate that training with randomized SMILES leads to models with a better representation of drug-like chemical space. Namely, the model trained with randomized SMILES was able to generate at least double the number of unique molecules with the same distribution of properties compared to one trained with canonical SMILES.
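Randomized SMILES exploit the fact that the same molecule can be written as many different valid SMILES strings, depending on which atom the traversal starts from and the order in which neighbors are visited. The following is a minimal, self-contained sketch of that idea on a toy graph for ethanol (CCO); it is illustrative only (no ring closures, charges, or bond orders), and a real pipeline would instead use a cheminformatics toolkit such as RDKit, whose `Chem.MolToSmiles` supports a `doRandom` option:

```python
import random

# Toy molecular graph for ethanol (SMILES "CCO"): atoms plus an
# adjacency list over heavy atoms. Hydrogens are implicit, as in SMILES.
ATOMS = ["C", "C", "O"]
BONDS = {0: [1], 1: [0, 2], 2: [1]}

def randomized_smiles(rng: random.Random) -> str:
    """Emit one SMILES string for the toy graph via a randomized DFS.

    Different start atoms and neighbor orders yield different but
    chemically equivalent SMILES strings for the same molecule.
    """
    start = rng.randrange(len(ATOMS))
    visited = set()

    def dfs(atom: int) -> str:
        visited.add(atom)
        out = ATOMS[atom]
        neighbors = [n for n in BONDS[atom] if n not in visited]
        rng.shuffle(neighbors)
        # All branches except the last are parenthesized, per SMILES syntax.
        for i, n in enumerate(neighbors):
            sub = dfs(n)
            out += sub if i == len(neighbors) - 1 else f"({sub})"
        return out

    return dfs(start)

rng = random.Random(42)
variants = {randomized_smiles(rng) for _ in range(50)}
print(variants)  # a subset of {"CCO", "OCC", "C(C)O", "C(O)C"}
```

Repeated sampling yields several equivalent strings for the one molecule, which is the data-augmentation effect the paper attributes to training with randomized SMILES.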

Updated: 2019-11-21