Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science.
Nature Communications (IF 14.7) | Pub Date: 2018-06-19 | DOI: 10.1038/s41467-018-04316-3
Decebal Constantin Mocanu, Elena Mocanu, Peter Stone, Phuong H. Nguyen, Madeleine Gibescu, Antonio Liotta

Through the success of deep learning in various domains, artificial neural networks are currently among the most used artificial intelligence methods. Taking inspiration from the network properties of biological neural networks (e.g. sparsity, scale-freeness), we argue that (contrary to general practice) artificial neural networks, too, should not have fully-connected layers. Here we propose sparse evolutionary training of artificial neural networks, an algorithm which evolves an initial sparse topology (Erdős-Rényi random graph) of two consecutive layers of neurons into a scale-free topology during learning. Our method replaces artificial neural networks' fully-connected layers with sparse ones before training, quadratically reducing the number of parameters with no decrease in accuracy. We demonstrate our claims on restricted Boltzmann machines, multi-layer perceptrons, and convolutional neural networks for unsupervised and supervised learning on 15 datasets. Our approach has the potential to enable artificial neural networks to scale up beyond what is currently possible.
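The abstract describes the procedure only at a high level. As a rough illustration, below is a minimal NumPy sketch of the two ingredients it names: an Erdős-Rényi sparse initialization between two consecutive layers, and a periodic evolution step that prunes the connections closest to zero and regrows the same number at random positions. The density formula, the parameter names epsilon and zeta, and the prune-by-magnitude rule are assumptions made for illustration; consult the paper for the exact procedure.

import numpy as np

def erdos_renyi_mask(n_in, n_out, epsilon=20, rng=None):
    """Sparse boolean mask for a layer pair (sketch). The connection
    probability p is assumed to follow an Erdos-Renyi scheme scaled by
    a sparsity parameter `epsilon` (hypothetical default)."""
    if rng is None:
        rng = np.random.default_rng(0)
    p = min(1.0, epsilon * (n_in + n_out) / (n_in * n_out))
    return rng.random((n_in, n_out)) < p

def evolve_connections(weights, mask, zeta=0.3, rng=None):
    """One evolution step (sketch): remove the fraction `zeta` of active
    weights closest to zero, then regrow the same number of connections
    at randomly chosen, currently inactive positions."""
    if rng is None:
        rng = np.random.default_rng(0)
    active = np.flatnonzero(mask)
    n_remove = int(zeta * active.size)
    # active weights sorted by distance from zero; prune the weakest
    order = np.argsort(np.abs(weights.flat[active]))
    removed = active[order[:n_remove]]
    mask.flat[removed] = False
    weights.flat[removed] = 0.0
    # regrow an equal number of new connections at inactive positions
    inactive = np.flatnonzero(~mask)
    regrown = rng.choice(inactive, size=n_remove, replace=False)
    mask.flat[regrown] = True
    weights.flat[regrown] = rng.normal(0.0, 0.01, size=n_remove)
    return weights, mask

In a training loop, one such evolution step would be applied to every sparse layer at the end of each epoch, so the topology adapts while the total number of connections stays constant.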
