
Global Guarantees for Enforcing Deep Generative Priors by Empirical Risk


Abstract:

We examine the theoretical properties of enforcing priors provided by generative deep neural networks via empirical risk minimization. In particular, we consider two models: one in which the task is to invert a generative neural network given access to its last layer, and another in which the task is to invert a generative neural network given only compressive linear observations of its last layer. We establish that in both cases, under suitable regimes of network layer sizes and a randomness assumption on the network weights, the non-convex objective function given by empirical risk minimization has no spurious stationary points. That is, we establish that with high probability, at any point away from small neighborhoods around two scalar multiples of the desired solution, there is a descent direction. Hence, there are no local minima, saddle points, or other stationary points outside these neighborhoods. These results constitute the first theoretical guarantees establishing the favorable global geometry of these non-convex optimization problems, and they bridge the gap between the empirical success of enforcing deep generative priors and a rigorous understanding of non-linear inverse problems.
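
Under one plausible formalization (an assumption here, not stated in the abstract), the two objectives are min_x ||G(x) - G(x*)||^2 for inversion from the last layer and min_x ||A G(x) - A G(x*)||^2 for inversion from compressive linear observations, where G is an expansive ReLU network with Gaussian weights and A is a Gaussian measurement matrix. The following NumPy sketch illustrates the second, more general model; the layer sizes, weight scalings, step size, and plain (sub)gradient descent are illustrative choices rather than the paper's construction, whose contribution is the landscape guarantee, not an algorithm.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes: latent dimension k expanding to widths n1, n2,
# with m < n2 compressive linear measurements of the last layer.
k, n1, n2, m = 10, 100, 200, 60

# Random-weight ReLU generator G(x) = relu(W2 relu(W1 x)); Gaussian weights
# model the paper's randomness assumption on the network weights.
W1 = rng.normal(size=(n1, k)) / np.sqrt(n1)
W2 = rng.normal(size=(n2, n1)) / np.sqrt(n2)
A = rng.normal(size=(m, n2)) / np.sqrt(m)  # compressive measurement matrix

def relu(z):
    return np.maximum(z, 0.0)

def G(x):
    return relu(W2 @ relu(W1 @ x))

x_star = rng.normal(size=k)   # ground-truth latent code
y = A @ G(x_star)             # compressive observations of the last layer

def risk(x):
    # Empirical risk: squared residual against the observations.
    r = A @ G(x) - y
    return 0.5 * r @ r

def risk_grad(x):
    # Chain rule through the network; the ReLU "derivative" is taken as the
    # 0/1 indicator of each layer's active set (a subgradient choice).
    h1 = W1 @ x
    h2 = W2 @ relu(h1)
    r = A @ relu(h2) - y
    g2 = (A.T @ r) * (h2 > 0)
    g1 = (W2.T @ g2) * (h1 > 0)
    return W1.T @ g1

# Plain gradient descent on the non-convex objective. Per the landscape
# result, descent directions exist away from small neighborhoods of two
# scalar multiples of x_star, so iterates should approach one of them.
x = rng.normal(size=k)
for _ in range(5000):
    x = x - 0.02 * risk_grad(x)

print("final risk:", risk(x))
print("distance to x_star:", np.linalg.norm(x - x_star))
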
Published in: IEEE Transactions on Information Theory (Volume: 66, Issue: 1, January 2020)
Page(s): 401 - 418
Date of Publication: 15 August 2019

