Moving least-square method in learning theory

https://doi.org/10.1016/j.jat.2009.12.002

Abstract

The moving least-square (MLS) method is an approximation method used in data interpolation, numerical analysis and statistics. In this paper we consider the MLS method in learning theory for the regression problem. Essential differences between MLS and other common learning algorithms are pointed out: the lack of a natural uniform bound for the estimators, and the pointwise definition of the estimator. The sample error is estimated in terms of the weight function and the finite-dimensional hypothesis space. The approximation error is treated in two special cases, for which convergence rates are provided for the total L2 error measuring the global approximation on the whole domain.
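The pointwise definition mentioned above means that the MLS estimator is computed anew at every input x: a low-degree polynomial is fitted by weighted least squares, with weights concentrated near x, and the fit is evaluated at x itself. The following is a minimal sketch, not taken from the paper; the Gaussian weight, the bandwidth h and the polynomial degree are illustrative assumptions, whereas the paper's analysis allows a general weight function and a general finite-dimensional hypothesis space.

```python
import numpy as np

def mls_estimate(x, xs, ys, h=0.2, degree=1):
    """Pointwise MLS estimate at x: fit a polynomial of the given
    degree by weighted least squares, with weights concentrated
    near x, then evaluate the fitted polynomial at x itself."""
    # Gaussian weight with bandwidth h (an illustrative choice).
    w = np.exp(-((xs - x) / h) ** 2)
    # Design matrix for the polynomial basis 1, t, ..., t^degree.
    P = np.vander(xs, N=degree + 1, increasing=True)
    # Weighted least squares via square-root reweighting.
    coeffs, *_ = np.linalg.lstsq(np.sqrt(w)[:, None] * P,
                                 np.sqrt(w) * ys, rcond=None)
    # Coefficients are in increasing order; polyval wants decreasing.
    return np.polyval(coeffs[::-1], x)

# Noisy samples of a smooth regression function.
rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 1.0, 200)
ys = np.sin(2 * np.pi * xs) + 0.1 * rng.standard_normal(200)
print([round(mls_estimate(t, xs, ys), 3) for t in np.linspace(0, 1, 5)])
```

Note that nothing in this construction caps the size of the estimate: when the local least-squares problem is ill-conditioned, the fitted value can be arbitrarily large. This is the lack of a natural uniform bound for the estimators that the abstract points out.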

Keywords

Learning theory
Moving least-square method
Sample error
Norming condition
Approximation error
