Overparameterized neural networks implement associative memory [Computer Sciences]
Proceedings of the National Academy of Sciences of the United States of America (IF 11.1). Pub Date: 2020-11-03. DOI: 10.1073/pnas.2005013117
Adityanarayanan Radhakrishnan, Mikhail Belkin, Caroline Uhler

Identifying computational mechanisms for memorization and retrieval of data is a long-standing problem at the intersection of machine learning and neuroscience. Our main finding is that standard overparameterized deep neural networks trained using standard optimization methods implement such a mechanism for real-valued data. We provide empirical evidence that 1) overparameterized autoencoders store training samples as attractors and thus iterating the learned map leads to sample recovery, and that 2) the same mechanism allows for encoding sequences of examples and serves as an even more efficient mechanism for memory than autoencoding. Theoretically, we prove that when trained on a single example, autoencoders store the example as an attractor. Lastly, by treating a sequence encoder as a composition of maps, we prove that sequence encoding provides a more efficient mechanism for memory than autoencoding.
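
The attractor claim in 1) lends itself to a short experiment. Below is a minimal sketch, not the paper's exact architecture or training setup: the network width, optimizer, step count, and noise level are illustrative assumptions. An overparameterized MLP autoencoder is fit to a few real-valued samples, and the learned map is then iterated from a corrupted input; if the sample is stored as an attractor, the iterates converge back to it.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A few real-valued "training samples" in R^d.
d, n = 16, 3
X = torch.randn(n, d)

# An overparameterized MLP autoencoder: far more parameters than needed
# to fit n samples, trained to near-zero reconstruction loss.
f = nn.Sequential(
    nn.Linear(d, 512), nn.Tanh(),
    nn.Linear(512, 512), nn.Tanh(),
    nn.Linear(512, d),
)
opt = torch.optim.Adam(f.parameters(), lr=1e-3)
for _ in range(5000):
    opt.zero_grad()
    loss = ((f(X) - X) ** 2).mean()
    loss.backward()
    opt.step()

# Retrieval: corrupt a stored sample, then iterate the learned map.
# If the sample is an attractor, the iterates converge back to it.
x = X[0] + 0.3 * torch.randn(d)
with torch.no_grad():
    for _ in range(100):
        x = f(x)
print("distance to stored sample:", (x - X[0]).norm().item())
```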

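The sequence-encoding mechanism in 2) can be sketched the same way. Instead of training f to reproduce each sample, train it to map each element of a sequence to its successor, so the whole sequence is replayed by composing the map with itself. Again the architecture and hyperparameters below are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A short sequence x_1, ..., x_n of real-valued examples.
d, n = 16, 5
X = torch.randn(n, d)
Y = torch.roll(X, shifts=-1, dims=0)  # successor of each element (cyclic)

# Train f to map each element to its successor: f(x_i) ~ x_{i+1},
# so the sequence is stored as a cycle of the learned map.
f = nn.Sequential(
    nn.Linear(d, 512), nn.Tanh(),
    nn.Linear(512, 512), nn.Tanh(),
    nn.Linear(512, d),
)
opt = torch.optim.Adam(f.parameters(), lr=1e-3)
for _ in range(5000):
    opt.zero_grad()
    loss = ((f(X) - Y) ** 2).mean()
    loss.backward()
    opt.step()

# Retrieval: iterating f from (a perturbation of) x_1 walks through
# x_2, x_3, ... in order.
x = X[0] + 0.1 * torch.randn(d)
with torch.no_grad():
    for i in range(1, n):
        x = f(x)
        print(f"distance to x_{i + 1}:", (x - X[i]).norm().item())
```

A single network here stores the entire sequence and replays it by composition, which reflects the paper's argument that sequence encoding can serve as a more efficient memory mechanism than autoencoding each sample as its own fixed point.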


Updated: 2020-11-04