Power Law in Deep Neural Networks: Sparse Network Generation and Continual Learning With Preferential Attachment
IEEE Transactions on Neural Networks and Learning Systems (IF 10.2). Pub Date: 11-7-2022. DOI: 10.1109/tnnls.2022.3217403
Fan Feng, Lu Hou, Qi She, Rosa H. M. Chan, James T. Kwok

Training deep neural networks (DNNs) typically requires massive computational power. Existing DNNs exhibit low time and storage efficiency due to their high degree of redundancy. In contrast to most existing DNNs, biological and social networks with vast numbers of connections are highly efficient and exhibit scale-free properties indicative of a power law distribution, which can originate from preferential attachment in growing networks. In this work, we ask whether the topology of the best-performing DNNs follows a power law similar to that of biological and social networks, and how power law topology can be used to construct well-performing and compact DNNs. We first find that the connectivity of sparse DNNs can be modeled by a truncated power law distribution, one of the variants of the power law. A comparison of different DNNs reveals that the best-performing networks correlate highly with the power law distribution. We further model preferential attachment in DNN evolution and find that continual learning in networks that grow with the number of tasks correlates with the process of preferential attachment. These power law dynamics identified in DNNs enable the construction of highly accurate and compact DNNs based on preferential attachment. Inspired by these findings, we propose two novel applications: evolving optimal DNNs for sparse network generation, and continual learning with efficient network growth driven by power law dynamics. Experimental results indicate that the proposed applications can speed up training, save storage, and learn from fewer samples than other well-established baselines. Our demonstration of preferential attachment and the power law in well-performing DNNs offers insight into designing and constructing more efficient deep learning models.
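Two ingredients of the abstract may be worth unpacking. A truncated power law over node degrees k is commonly written as p(k) ∝ k^(−α) · exp(−k/κ), i.e., a power law with an exponential cutoff κ. Preferential attachment grows a network by adding new edges to nodes with probability proportional to their current degree, which is one standard mechanism that produces such heavy-tailed degree distributions. The sketch below is a minimal, illustrative Python implementation of degree-based preferential attachment for growing a sparse layer-to-layer connectivity mask; it is not the paper's exact algorithm, and the function name grow_sparse_mask, its parameters, and the smoothing term alpha are assumptions made for illustration only.

    import numpy as np

    def grow_sparse_mask(n_in, n_out, n_edges, alpha=1.0, seed=0):
        """Grow a binary connectivity mask between two layers by
        degree-based preferential attachment (illustrative sketch only,
        not the authors' algorithm).

        Each new edge picks its output unit with probability proportional
        to (current degree + alpha); the smoothing term alpha lets units
        with zero degree still receive connections.
        """
        rng = np.random.default_rng(seed)
        mask = np.zeros((n_in, n_out), dtype=bool)
        degree = np.zeros(n_out)

        while mask.sum() < n_edges:
            probs = (degree + alpha) / (degree + alpha).sum()
            j = rng.choice(n_out, p=probs)   # preferential choice of output unit
            i = rng.integers(n_in)           # input unit chosen uniformly
            if not mask[i, j]:
                mask[i, j] = True
                degree[j] += 1
        return mask

    if __name__ == "__main__":
        mask = grow_sparse_mask(n_in=256, n_out=128, n_edges=2000)
        print("density:", mask.mean(), "max degree:", mask.sum(axis=0).max())

Running the sketch prints the mask density and the maximum per-unit degree; because edges preferentially attach to already well-connected units, a few output units accumulate disproportionately many connections, giving the heavy-tailed degree distribution that the abstract associates with well-performing sparse DNNs.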

Updated: 2024-08-26