Performance limits of single-agent and multi-agent sub-gradient stochastic learning


Abstract:

This work examines the performance of stochastic sub-gradient learning strategies, for both stand-alone and networked agents, under weaker conditions than usually considered in the literature. It is shown that these conditions are automatically satisfied by several important cases of interest, including support-vector machines and sparsity-inducing learning solutions. The analysis establishes that sub-gradient strategies can attain exponential convergence rates, as opposed to sub-linear rates, and that they can approach the optimal solution to within O(μ) for sufficiently small step-sizes μ. A realizable exponential-weighting procedure is proposed to smooth the intermediate iterates and to guarantee these desirable performance properties.
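
The abstract does not spell out the algorithmic details. As a rough illustration of the kind of strategy being analyzed, the sketch below combines a standard stochastic sub-gradient update for a regularized SVM (hinge) loss with an exponentially weighted smoothing of the iterates. The function names, the parameters mu (step size), beta (weighting factor), and rho (regularization strength), and the recursive form of the weighting are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def svm_subgradient(w, x, y, rho):
    # Sub-gradient of Q(w; x, y) = (rho/2)*||w||^2 + max(0, 1 - y*w'x).
    # The hinge term is non-differentiable at y*w'x = 1, hence a
    # sub-gradient rather than a gradient.
    g = rho * w
    if y * (w @ x) < 1.0:  # hinge term active: add its sub-gradient
        g = g - y * x
    return g

def train(X, Y, mu=0.01, beta=0.99, rho=0.1, epochs=5, seed=0):
    # Stochastic sub-gradient descent with a constant step size mu.
    # Returns an exponentially weighted average of the iterates, so
    # that recent iterates receive geometrically larger weight.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    w_bar = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            w = w - mu * svm_subgradient(w, X[i], Y[i], rho)
            w_bar = beta * w_bar + (1.0 - beta) * w  # exponential weighting
    return w_bar

# Example usage on synthetic linearly separable data:
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
Y = np.sign(X @ np.array([1.0, -2.0]))
w = train(X, Y)
```

With a constant step size, the smoothed iterate settles within a small neighborhood of the minimizer rather than converging to it exactly, which is consistent with the O(μ) characterization in the abstract.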
Date of Conference: 20-25 March 2016
Date Added to IEEE Xplore: 19 May 2016
Electronic ISSN: 2379-190X
Conference Location: Shanghai, China
