Incremental learning with partial instance memory

https://doi.org/10.1016/j.artint.2003.04.001

Abstract

Agents that learn on-line with partial instance memory reserve some of the previously encountered examples for use in future training episodes. In earlier work, we selected extreme examples (those from the boundaries of induced concept descriptions), combined these with incoming instances, and used a batch learning algorithm to generate new concept descriptions. In this paper, we extend this work by combining our method for selecting extreme examples with two incremental learning algorithms, AQ11 and GEM. Using these new systems, AQ11-PM and GEM-PM, and two real-world applications, computer intrusion detection and blasting cap detection in X-ray images, we conducted a lesion study to analyze the trade-offs among predictive accuracy, examples held in memory, learning time, and concept complexity. Empirical results showed that although our partial-memory model decreased predictive accuracy compared to systems that learn from all available training data, it also decreased memory requirements and learning time, and in some cases, concept complexity. We also present results from an experiment using the STAGGER Concepts, a synthetic data set involving concept drift, suggesting that our methods perform comparably to the FLORA2 system in terms of predictive accuracy but store fewer examples. Moreover, these outcomes are consistent with earlier results using our partial-memory model and batch learning.
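
To make the retention mechanism concrete, the following Python fragment is a minimal sketch of the learning loop the abstract describes: on each episode, the retained examples are combined with incoming instances, a new concept description is induced, and only the extreme examples are kept for the next episode. It is an illustration only, not the AQ11-PM or GEM-PM systems: an axis-aligned bounding box stands in for an induced concept description, the boundary test stands in for the paper's extreme-example selection heuristic, and the function names are hypothetical.

```python
import numpy as np

def induce_box(X):
    # Stand-in for concept induction: an axis-aligned bounding box.
    return X.min(axis=0), X.max(axis=0)

def select_extremes(X, lo, hi):
    # Retain only "extreme" examples: those lying on the box boundary
    # in at least one dimension.
    on_edge = np.any((X == lo) | (X == hi), axis=1)
    return X[on_edge]

def partial_memory_learning(episodes):
    # On-line loop with partial instance memory: each training episode
    # uses the new instances plus the previously retained extremes.
    memory = np.empty((0, episodes[0].shape[1]))
    lo = hi = None
    for X_new in episodes:
        X_train = np.vstack([memory, X_new])       # combine memory with incoming data
        lo, hi = induce_box(X_train)               # induce new concept description
        memory = select_extremes(X_train, lo, hi)  # keep only boundary examples
    return (lo, hi), memory

rng = np.random.default_rng(0)
episodes = [rng.uniform(size=(50, 2)) for _ in range(5)]
(lo, hi), memory = partial_memory_learning(episodes)
print(f"retained {len(memory)} of {5 * 50} examples")
```

On uniform 2-D data, each episode retains only the handful of points lying on the box boundary, which mirrors the reduction in stored examples that the experiments report.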

Keywords

On-line concept learning
Incremental learning
Partial instance memory
Concept drift

1. Also: Institute of Computer Science, Polish Academy of Sciences, Warsaw.