
Information-theoretic analysis of stability and bias of learning algorithms


Abstract:

Machine learning algorithms can be viewed as stochastic transformations that map training data to hypotheses. Following Bousquet and Elisseeff, we say that such an algorithm is stable if its output does not depend too much on any individual training example. Since stability is closely connected to generalization capabilities of learning algorithms, it is of theoretical and practical interest to obtain sharp quantitative estimates on the generalization bias of machine learning algorithms in terms of their stability properties. We propose several information-theoretic measures of algorithmic stability and use them to upper-bound the generalization bias of learning algorithms. Our framework is complementary to the information-theoretic methodology developed recently by Russo and Zou.
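To illustrate the flavor of bound the abstract describes, a representative result from this line of work (stated here as background from the related literature, not quoted from the paper itself) controls the expected generalization gap of an algorithm by the mutual information between the training sample S = (Z_1, …, Z_n) and the output hypothesis W, assuming the loss ℓ(w, Z) is σ-subgaussian under the data distribution:

\[
\left| \mathbb{E}\big[ L(W) - L_n(W) \big] \right| \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(S; W)},
\]

where \(L(w) = \mathbb{E}_Z[\ell(w, Z)]\) is the population risk, \(L_n(w) = \frac{1}{n}\sum_{i=1}^n \ell(w, Z_i)\) is the empirical risk, and \(I(S;W)\) is the mutual information between the training data and the learned hypothesis. Intuitively, an algorithm whose output leaks few bits about any individual training example is stable in the sense of Bousquet and Elisseeff, and the bound quantifies how that stability translates into a small generalization bias.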
Date of Conference: 11-14 September 2016
Date Added to IEEE Xplore: 27 October 2016
Conference Location: Cambridge, UK
