Expansion of Information in the Binary Autoencoder With Random Binary Weights
Neural Computation (IF 2.9), Pub Date: 2021-10-12, DOI: 10.1162/neco_a_01435
Viacheslav M Osaulenko
This letter studies the expansion and preservation of information in a binary autoencoder whose hidden layer is larger than the input. Such expansion is widespread in biological neural networks, as in the olfactory system of the fruit fly or the projection of thalamic inputs to the neocortex. We analyze the threshold model, the kWTA model, and the binary matching pursuit model to determine how the sparsity and the dimension of the encoding influence input reconstruction, similarity preservation, and mutual information across layers. It is shown that sparser activation of the hidden layer is preferable for preserving information between the input and output layers. All three models show optimal similarity preservation at dense, not sparse, hidden layer activation. Furthermore, with a large enough hidden layer, it is possible to achieve zero reconstruction error for any input just by varying the thresholds of the neurons. However, we show that the preference for sparsity is due to the noise in the weight matrix between layers. A fixed number of nonzero connections to every neuron achieves better information preservation and input reconstruction for dense hidden layer activation. The theoretical results give useful insight into models of neural computation based on sparse binary representations and associative memory.
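The setup described in the abstract can be illustrated with a minimal sketch: a binary input is expanded into a larger hidden layer through a random binary weight matrix, the hidden code is formed by kWTA (keep the k most strongly driven units), and the input is reconstructed through the transposed weights with a threshold. All sizes, the connection probability, and the use of a single global decoding threshold (the paper varies per-neuron thresholds) are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

n_x, n_y = 50, 500   # input size and (larger) hidden size -- illustrative values
k = 25               # number of active hidden units under kWTA
p_w = 0.1            # probability of a nonzero connection (assumed)

# Random binary weight matrix between the input and hidden layers.
W = (rng.random((n_y, n_x)) < p_w).astype(int)

def encode_kwta(x, W, k):
    """kWTA encoding: activate the k hidden units with the largest overlap W @ x."""
    overlap = W @ x
    y = np.zeros(W.shape[0], dtype=int)
    y[np.argsort(overlap)[-k:]] = 1
    return y

x = (rng.random(n_x) < 0.2).astype(int)   # sparse binary input
y = encode_kwta(x, W, k)                  # expanded hidden code, exactly k ones

# Threshold decoding back to the input layer. Here a single global integer
# threshold is chosen to minimize the Hamming reconstruction error; the paper
# instead discusses varying individual neuron thresholds.
drive = W.T @ y
theta = min(range(drive.max() + 2),
            key=lambda t: np.sum((drive >= t).astype(int) != x))
x_hat = (drive >= theta).astype(int)
```

Even this crude sketch shows the core trade-off the letter analyzes: how the hidden sparsity k and the randomness of W jointly determine how well x can be recovered from y.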




Updated: 2021-10-14