Global Guarantees for Enforcing Deep Generative Priors by Empirical Risk
IEEE Transactions on Information Theory (IF 2.2), Pub Date: 2020-01-01, DOI: 10.1109/tit.2019.2935447
Paul Hand, Vladislav Voroninski

We examine the theoretical properties of enforcing priors provided by generative deep neural networks via empirical risk minimization. In particular, we consider two models: one in which the task is to invert a generative neural network given access to its last layer, and another in which the task is to invert a generative neural network given only compressive linear observations of its last layer. We establish that in both cases, under suitable regimes of network layer sizes and a randomness assumption on the network weights, the non-convex objective function given by empirical risk minimization does not have any spurious stationary points. That is, we establish that, with high probability, at any point away from small neighborhoods around two scalar multiples of the desired solution, there is a descent direction. Hence, there are no local minima, saddle points, or other stationary points outside these neighborhoods. These results constitute the first theoretical guarantees establishing the favorable global geometry of these non-convex optimization problems, and they bridge the gap between the empirical success of enforcing deep generative priors and a rigorous understanding of non-linear inverse problems.
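As a concrete (and partly assumed) reading of the setup, one can take a d-layer ReLU generator G(x) = relu(W_d ... relu(W_1 x)) with Gaussian weights and minimize the empirical risk ||G(x) - G(x*)||^2 in the first model, or ||A G(x) - A G(x*)||^2 with a Gaussian measurement matrix A in the second. The sketch below is not the authors' code: it runs plain subgradient descent on a small two-layer instance, and all layer sizes, the step size, and the iteration count are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (assumptions, not the paper's code): a two-layer ReLU
# generator with i.i.d. Gaussian weights, inverted by plain subgradient
# descent on the empirical risk. Dimensions and step size are illustrative.

rng = np.random.default_rng(0)
k, n1, n2, m = 10, 100, 500, 60              # latent, hidden, output, measurements
W1 = rng.normal(size=(n1, k)) / np.sqrt(n1)  # entries ~ N(0, 1/n1)
W2 = rng.normal(size=(n2, n1)) / np.sqrt(n2)
A = rng.normal(size=(m, n2)) / np.sqrt(m)    # compressive linear observations

relu = lambda z: np.maximum(z, 0.0)
G = lambda x: relu(W2 @ relu(W1 @ x))        # generator G: R^k -> R^n2

x_star = rng.normal(size=k)                  # desired solution
y_full = G(x_star)                           # model 1: last layer observed
y_comp = A @ y_full                          # model 2: compressed last layer

def grad(x, compressive=True):
    """Subgradient of f(x) = 0.5 * ||(A) G(x) - y||^2 at x."""
    h1 = relu(W1 @ x)
    h2 = relu(W2 @ h1)
    r = A.T @ (A @ h2 - y_comp) if compressive else h2 - y_full
    # Backpropagate through the ReLUs via their 0/1 diagonal masks.
    return W1.T @ ((W2.T @ (r * (h2 > 0))) * (h1 > 0))

x = rng.normal(size=k)                       # random initialization
for _ in range(2000):
    x -= 0.1 * grad(x)

# Per the geometry described in the abstract, descent from a random start
# is expected to end near x_star or near a negative scalar multiple of it.
print(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star))
```

Consistent with the abstract's claim that descent directions exist everywhere outside small neighborhoods of two scalar multiples of the solution, the final iterate here should be close to either x_star or a negative multiple of it; which one depends on the initialization.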
