Abstract:
Machine learning algorithms can be viewed as stochastic transformations that map training data to hypotheses. Following Bousquet and Elisseeff, we say that such an algorithm is stable if its output does not depend too much on any individual training example. Since stability is closely connected to the generalization capabilities of learning algorithms, it is of theoretical and practical interest to obtain sharp quantitative estimates on the generalization bias of learning algorithms in terms of their stability properties. We propose several information-theoretic measures of algorithmic stability and use them to upper-bound the generalization bias of learning algorithms. Our framework is complementary to the information-theoretic methodology developed recently by Russo and Zou.
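For concreteness, here is a minimal sketch in LaTeX of how the quantities in the abstract are usually formalized. The notation is assumed rather than taken from the paper itself: a training sample S = (Z_1, ..., Z_n) drawn i.i.d. from a distribution μ, a hypothesis W produced by the algorithm's stochastic map (channel) P_{W|S}, and a loss function ℓ. The final inequality is a representative mutual-information bound from the Russo-Zou line of work; the paper's own stability measures and bounds may differ in detail.

% Population and empirical risks of a hypothesis w
L_\mu(w) := \mathbb{E}_{Z \sim \mu}\big[\ell(w, Z)\big], \qquad
L_S(w) := \frac{1}{n} \sum_{i=1}^{n} \ell(w, Z_i)

% Generalization bias of the stochastic algorithm P_{W|S}
\mathrm{gen}(\mu, P_{W|S}) := \mathbb{E}\big[\, L_\mu(W) - L_S(W) \,\big]

% Uniform stability in the sense of Bousquet--Elisseeff: an algorithm A
% is \beta-uniformly stable if, for all samples S, S' differing in a
% single training example and all test points z,
\sup_{z} \big|\, \ell(A(S), z) - \ell(A(S'), z) \,\big| \le \beta

% A representative information-theoretic bound (assumption: \ell(w, Z)
% is \sigma-subgaussian under Z \sim \mu for every w):
\big|\, \mathrm{gen}(\mu, P_{W|S}) \,\big| \le \sqrt{\frac{2\sigma^2}{n}\, I(S; W)}

Here I(S; W) is the mutual information between the training sample and the algorithm's output, so an algorithm that leaks little information about any individual example into W has, in this sense, small generalization bias.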
Published in: 2016 IEEE Information Theory Workshop (ITW)
Date of Conference: 11-14 September 2016
Date Added to IEEE Xplore: 27 October 2016