Elsevier

Journal of Complexity

Volume 25, Issue 2, April 2009, Pages 188-200

Learning from uniformly ergodic Markov chains

https://doi.org/10.1016/j.jco.2009.01.001

Abstract

Evaluating the generalization performance of learning algorithms has been a central thread of theoretical research in machine learning. Previous bounds on the generalization performance of the empirical risk minimization (ERM) algorithm are usually established for independent and identically distributed (i.i.d.) samples. In this paper we go beyond this classical framework by establishing generalization bounds for the ERM algorithm with uniformly ergodic Markov chain (u.e.M.c.) samples. We prove bounds on the rate of uniform convergence and relative uniform convergence of the ERM algorithm with u.e.M.c. samples, and show that the ERM algorithm with u.e.M.c. samples is consistent. The established theory underlies the application of ERM-type learning algorithms.
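The setting in the abstract can be illustrated with a small, hypothetical sketch (the toy chain, noise model, and hypothesis class below are illustrative assumptions, not constructions from the paper): inputs are generated by a two-state Markov chain with strictly positive transition probabilities, which is finite, irreducible, and aperiodic, hence uniformly ergodic, and ERM selects the threshold classifier with the smallest empirical risk on these dependent samples.

```python
import random

random.seed(0)

# Two-state chain on {0, 1}; all transition probabilities are positive,
# so the chain is irreducible and aperiodic, hence uniformly ergodic.
P = {0: [0.9, 0.1], 1: [0.2, 0.8]}

def markov_samples(n, start=0):
    """Draw n (x, y) pairs whose hidden states follow the Markov chain.

    The observation x is the current state plus Gaussian noise; the
    label y is the state itself. Consecutive samples are dependent.
    """
    state, out = start, []
    for _ in range(n):
        x = state + random.gauss(0.0, 0.3)  # noisy observation of the state
        out.append((x, state))
        # Transition according to row P[state].
        state = 0 if random.random() < P[state][0] else 1
    return out

def erm(samples, thresholds):
    """Empirical risk minimization over threshold classifiers x > t."""
    def empirical_risk(t):
        return sum((x > t) != y for x, y in samples) / len(samples)
    t_hat = min(thresholds, key=empirical_risk)
    return t_hat, empirical_risk(t_hat)

data = markov_samples(2000)
t_hat, emp_risk = erm(data, [i / 10 for i in range(-5, 16)])
print(t_hat, round(emp_risk, 3))
```

Even though the samples are not i.i.d., the selected threshold lands near 0.5 with a small empirical risk; the bounds established in the paper quantify how close such empirical risks stay to the true risk under u.e.M.c. sampling.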

Keywords

ERM algorithms
Uniform ergodic Markov chain samples
Generalization bound
Uniform convergence
Relative uniform convergence

Supported by the National 973 Project (2007CB311002), the NSFC key project (70501030), and the Foundation of the Hubei Educational Committee (Q200710001).