Learning bipartite Bayesian networks under monotonicity restrictions
International Journal of General Systems (IF 2.4), Pub Date: 2019-11-20, DOI: 10.1080/03081079.2019.1692004
Martin Plajner 1,2, Jiří Vomlel 2

ABSTRACT Learning the parameters of a probabilistic model is a necessary step in machine learning tasks. We present a method that improves learning from small datasets by exploiting monotonicity conditions. Monotonicity simplifies the learning task and is often required by users. We present an algorithm for Bayesian network parameter learning. The algorithm and the monotonicity conditions are described, and it is shown that with the monotonicity conditions we can better fit the underlying data. Our algorithm is tested on artificial and empirical datasets. We use different methods that satisfy the monotonicity conditions: the proposed gradient descent, isotonic regression EM, and non-linear optimization. We also report results of unrestricted EM and gradient descent methods. The learned models are compared with respect to their ability to fit the data, in terms of log-likelihood, and their fit of the parameters of the generating model. Our proposed method outperforms the other methods on small datasets and provides better or comparable results on larger ones.
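The isotonic regression EM variant mentioned above relies on projecting parameter estimates onto a non-decreasing sequence. The paper does not spell out the implementation here, but the standard tool for this projection is the pool-adjacent-violators algorithm (PAVA); the sketch below is a minimal, generic PAVA in Python, not the authors' exact code:

```python
def isotonic_regression(y, w=None):
    """Pool Adjacent Violators: weighted least-squares fit of a
    non-decreasing sequence to y. Returns the fitted values."""
    if w is None:
        w = [1.0] * len(y)
    # Each block stores [mean, total weight, number of points pooled].
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([float(yi), float(wi), 1])
        # Merge adjacent blocks while monotonicity is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / wt, wt, c1 + c2])
    # Expand pooled blocks back to one value per input point.
    out = []
    for m, _, c in blocks:
        out.extend([m] * c)
    return out
```

In an isotonic-regression EM scheme, a projection like this would be applied after each M-step so that the relevant conditional probabilities remain monotone in the parent state; for example, `isotonic_regression([0.9, 0.2, 0.5])` pools the violating entries into `[0.55, 0.55, 0.5]`... no, pooling proceeds left to right, yielding a non-decreasing sequence such as `[0.5333..., 0.5333..., 0.5333...]` depending on the input, so expected outputs should be checked numerically rather than assumed.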
