Convexifying Sparse Interpolation with Infinitely Wide Neural Networks: An Atomic Norm Approach
IEEE Signal Processing Letters (IF 3.9), Pub Date: 2020-01-01, DOI: 10.1109/lsp.2020.3039479
Akshay Kumar, Jarvis Haupt

This work examines the problem of exact data interpolation via sparse (neuron count), infinitely wide, single hidden layer neural networks with leaky rectified linear unit activations. Using the atomic norm framework of [Chandrasekaran et al. 2012], we derive simple characterizations of the convex hulls of the corresponding atomic sets for this problem under several different constraints on the weights and biases of the network, thus obtaining equivalent convex formulations for these problems. A modest extension of our proposed framework to a binary classification problem is also presented. We explore the efficacy of the resulting formulations experimentally, and compare with networks trained via gradient descent.
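For readers unfamiliar with the framework, the following is a minimal sketch of the kind of convex surrogate the abstract describes. The atomic norm definition follows Chandrasekaran et al. (2012); the remaining notation (network $f$, leaky-ReLU slope $\alpha$, constraint set $\mathcal{C}$ on the weights and biases, data $(x_i, y_i)$) is illustrative and not reproduced from the paper, which derives explicit characterizations for several specific choices of $\mathcal{C}$.

$$
f(x) \;=\; \sum_{j} v_j\,\sigma_\alpha\!\bigl(w_j^\top x + b_j\bigr),
\qquad \sigma_\alpha(t) \;=\; \max(t,\,\alpha t),\ \ 0<\alpha<1,
$$
$$
\mathcal{A} \;=\; \bigl\{\, x \mapsto \pm\,\sigma_\alpha(w^\top x + b) \;:\; (w,b)\in\mathcal{C} \,\bigr\},
\qquad
\|f\|_{\mathcal{A}} \;=\; \inf\bigl\{\, t>0 \;:\; f \in t\,\mathrm{conv}(\mathcal{A}) \,\bigr\},
$$
$$
\min_{f} \;\|f\|_{\mathcal{A}}
\quad\text{subject to}\quad f(x_i) \;=\; y_i,\qquad i=1,\dots,n.
$$

Minimizing the atomic norm subject to the exact interpolation constraints serves as the convex proxy for minimizing the number of active hidden neurons; characterizing $\mathrm{conv}(\mathcal{A})$ in closed form is what yields the equivalent convex formulations mentioned above.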
