LipGene: Lipschitz Continuity Guided Adaptive Learning Rates for Fast Convergence on Microarray Expression Data Sets
IEEE/ACM Transactions on Computational Biology and Bioinformatics (IF 3.6). Pub Date: 2021-09-08. DOI: 10.1109/tcbb.2021.3110516
Tejas Prashanth, Snehanshu Saha, Sumedh Basarkod, Suraj Aralihalli, Soma S. Dhavala, Sriparna Saha, Raviprasad Aduri

Hyperparameter tuning, particularly tuning of the learning rate, can be a time-consuming process, especially when dealing with large data sets. A mathematical foundation for the choice of learning rate can minimize tuning effort. We propose LipGene, a novel adaptive learning rate paradigm guided by the Lipschitz continuity of the loss function, applied to the task of Gene Expression Inference using shallow neural networks. We train with Mean Absolute Error and Quantile loss separately. Our adaptive learning rate, computed dynamically for each epoch, is based on the Lipschitz constant of the loss function and requires no tuning. Experimentally, we show that the proposed approach greatly surpasses conventional choices of learning rate in terms of both speed of convergence and generalizability. In keeping with the principle of Parsimonious Computing, our method can reduce the compute infrastructure required for training by using smaller networks with minimal compromise on prediction error.
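The abstract does not give the paper's closed-form per-epoch learning-rate expression, so the sketch below only illustrates the general idea it describes: set each epoch's learning rate from an empirical estimate of the Lipschitz constant of the loss (here, the largest gradient norm observed during the epoch, with the next epoch's rate taken as its reciprocal). The MAE loss and shallow network match the setting above; the training function, layer sizes, and initial rate are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch of a Lipschitz-constant-guided adaptive learning rate
# (assumed formulation: eta_{t+1} = 1 / L_estimate_t, where L_estimate_t is
# the largest gradient norm seen in epoch t). Not the paper's exact method.
import torch
import torch.nn as nn

def train_lipschitz_adaptive(model, loader, epochs=50, eps=1e-8):
    loss_fn = nn.L1Loss()          # Mean Absolute Error, as used in the paper
    lr = 1e-3                      # placeholder rate before the first estimate
    for epoch in range(epochs):
        max_grad_norm = 0.0
        for x, y in loader:
            model.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            # Track the largest gradient norm over the epoch: a finite-sample
            # proxy for sup ||grad loss||, i.e., the Lipschitz constant.
            grad_norm = torch.sqrt(sum((p.grad ** 2).sum()
                                       for p in model.parameters()
                                       if p.grad is not None)).item()
            max_grad_norm = max(max_grad_norm, grad_norm)
            # Plain SGD step with the current epoch's adaptive rate.
            with torch.no_grad():
                for p in model.parameters():
                    if p.grad is not None:
                        p -= lr * p.grad
        # Lipschitz-guided rate for the next epoch; no manual tuning.
        lr = 1.0 / (max_grad_norm + eps)
    return model

# Shallow regression network for gene-expression inference (sizes illustrative):
# model = nn.Sequential(nn.Linear(943, 512), nn.ReLU(), nn.Linear(512, 4760))
```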

Updated: 2021-09-08