Deep Convolutional Neural Networks with Unitary Weights
arXiv - CS - Artificial Intelligence. Pub Date: 2021-02-23. DOI: arxiv-2102.11855. Hao-Yuan Chang (University of California, Los Angeles), Kang L. Wang (University of California, Los Angeles)
While normalization layers aim to fix the exploding and vanishing gradient problem in deep neural networks, they have drawbacks in speed or accuracy because of their dependency on data set statistics. This work is a comprehensive study of a novel method based on unitary synaptic weights derived from the Lie group to construct intrinsically stable neural systems. Here we show that unitary convolutional neural networks deliver up to 32% faster inference speeds while maintaining competitive prediction accuracy. Unlike prior art restricted to square synaptic weights, we expand unitary networks to weights of any size and dimension.
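The abstract's central idea, that unitary (norm-preserving) weights keep gradients from exploding or vanishing, can be illustrated with a small sketch. This is not the paper's exact construction; it uses the Cayley transform, one standard way to parameterize an orthogonal (real unitary) matrix from a skew-symmetric generator in the Lie algebra, and the column-slicing trick for non-square weights is likewise only an illustrative assumption:

```python
import numpy as np

# Hedged sketch (not the authors' method): the Cayley transform
# W = (I + A)(I - A)^{-1} maps any skew-symmetric A (A^T = -A) to an
# orthogonal matrix W with W^T W = I. Such a layer preserves vector
# norms exactly, so signals and gradients passing through it can
# neither explode nor vanish.
rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n))
A = B - B.T                          # skew-symmetric generator
I = np.eye(n)
W = (I + A) @ np.linalg.inv(I - A)   # orthogonal weight matrix

x = rng.standard_normal(n)
assert np.allclose(W.T @ W, I)                               # unitarity
assert np.isclose(np.linalg.norm(W @ x), np.linalg.norm(x))  # norm preserved

# One simple way to get a non-square norm-preserving weight, in the
# spirit of the abstract's extension beyond square matrices, is to take
# the first k columns of W, giving a semi-orthogonal W_k with
# W_k^T W_k = I_k. (Illustrative only; the paper's construction may differ.)
W_k = W[:, :2]
assert np.allclose(W_k.T @ W_k, np.eye(2))
```

Because `W.T @ W == I`, repeated application of such layers keeps the spectral norm of the Jacobian at 1, which is the stability property normalization layers only approximate from data statistics.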
Updated: 2021-02-24