Tuning gradient boosting for imbalanced bioassay modelling with custom loss functions
Journal of Cheminformatics (IF 8.6), Pub Date: 2022-11-10, DOI: 10.1186/s13321-022-00657-w
Davide Boldini 1, Lukas Friedrich 2, Daniel Kuhn 2, Stephan A. Sieber 1

While the number of available bioassay datasets has increased dramatically in recent years, many of them suffer from an extremely imbalanced distribution of active and inactive compounds. There is therefore an urgent need for novel approaches to tackle class imbalance in drug discovery. Inspired by recent advances in computer vision, we investigated a panel of alternative loss functions for imbalanced classification in the context of Gradient Boosting and benchmarked them on six datasets from public and proprietary sources, covering a total of 42 tasks and 2 million compounds. Our findings show that with these modifications we achieve statistically significant improvements over the conventional cross-entropy loss function on five out of six datasets. Furthermore, by employing these bespoke loss functions we are able to push Gradient Boosting to match or outperform a wide variety of previously reported classifiers and neural networks. We also investigate the impact of changing the loss function on training time and find that it can speed up convergence by a factor of up to 8. These results show that tuning the loss function for Gradient Boosting is a straightforward and computationally efficient way to achieve state-of-the-art performance on imbalanced bioassay datasets without compromising interpretability or scalability.
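The abstract does not spell out which alternative losses were benchmarked, but a focal-loss-style objective (popularized in computer vision for imbalanced detection tasks) is a representative example of the kind of custom loss that can be plugged into Gradient Boosting. Below is a minimal, illustrative sketch of such an objective for LightGBM; the function name, the choice of focal loss, and the alpha/gamma defaults are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np
import lightgbm as lgb


def focal_loss_objective(alpha=0.25, gamma=2.0):
    """Return a custom binary focal-loss objective for the lgb.train API.

    Illustrative sketch only: alpha weights the positive class, gamma
    down-weights easy examples. The callable follows the engine-API
    convention (raw scores, Dataset); the sklearn wrapper expects
    (y_true, y_pred) instead.
    """
    def _objective(preds, train_data):
        y = train_data.get_label()                # labels in {0, 1}
        p = 1.0 / (1.0 + np.exp(-preds))          # sigmoid of raw scores
        p = np.clip(p, 1e-9, 1.0 - 1e-9)          # numerical stability

        pt = np.where(y == 1, p, 1.0 - p)         # probability of the true class
        at = np.where(y == 1, alpha, 1.0 - alpha)  # class weighting
        sign = np.where(y == 1, 1.0, -1.0)         # d(pt)/d(score) = sign * p * (1 - p)

        log_pt = np.log(pt)
        one_m = 1.0 - pt

        # First and second derivatives of FL(pt) = -(1 - pt)^gamma * log(pt) w.r.t. pt
        dL_dpt = gamma * one_m**(gamma - 1) * log_pt - one_m**gamma / pt
        d2L_dpt2 = (-gamma * (gamma - 1) * one_m**(gamma - 2) * log_pt
                    + 2.0 * gamma * one_m**(gamma - 1) / pt
                    + one_m**gamma / pt**2)

        # Chain rule through pt(score); note that p * (1 - p) == pt * (1 - pt)
        dpt_dz = sign * pt * one_m
        d2pt_dz2 = (1.0 - 2.0 * pt) * pt * one_m

        grad = at * dL_dpt * dpt_dz
        hess = at * (d2L_dpt2 * dpt_dz**2 + dL_dpt * d2pt_dz2)
        return grad, hess

    return _objective


if __name__ == "__main__":
    # Tiny synthetic demo with roughly 5% positives, mimicking an imbalanced bioassay task.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 10))
    y = (X[:, 0] + 0.5 * rng.normal(size=2000) > 1.6).astype(int)

    train_set = lgb.Dataset(X, label=y)
    params = {
        "learning_rate": 0.05,
        "verbosity": -1,
        # LightGBM >= 4 accepts the callable directly as the objective;
        # older releases take it via the fobj argument of lgb.train instead.
        "objective": focal_loss_objective(alpha=0.25, gamma=2.0),
    }
    booster = lgb.train(params, train_set, num_boost_round=100)

    # With a custom objective the booster returns raw scores, so apply a sigmoid.
    probs = 1.0 / (1.0 + np.exp(-booster.predict(X)))
```

With gamma set to 0 the expressions above reduce to the standard cross-entropy gradient and Hessian, which is a quick sanity check that the chain-rule derivation is consistent; increasing gamma progressively shrinks the contribution of well-classified (mostly inactive) compounds, which is the mechanism by which such losses address class imbalance.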

Updated: 2022-11-11