Machine Learning Proceedings 1994

Proceedings of the Eleventh International Conference, Rutgers University, New Brunswick, NJ, July 10–13, 1994
1994, Pages 352-360

Selective Reformulation of Examples in Concept Learning

https://doi.org/10.1016/B978-1-55860-335-6.50050-7

Abstract

The fundamental tradeoff well known in Knowledge Representation and Reasoning also affects Concept Learning from Examples. Representing learning examples in an attribute-value language has proved to support efficient inductive algorithms but offers limited expressiveness, whereas more expressive representation languages, typically subsets of First Order Logic (FOL), are supported by less efficient algorithms. An underlying problem is the number of different ways of matching examples: exactly one in an attribute-value representation, but potentially very large in an FOL representation. This paper describes a novel approach to performing representation shifts on learning examples. The structure of the learning examples, initially represented in a subset of FOL, is reformulated to produce new learning examples represented in an attribute-value language. What counts as an adequate structure varies with the learning task. We introduce the notion of morion (from the Greek) to qualify this structure and show, through a concrete example, the advantages it offers. We then describe an algorithm that reformulates learning examples automatically and analyze its complexity. This approach to deductive reformulation is implemented in the REMO system, which has been tested on learning the construction of Chinese characters.
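The representation shift the abstract describes can be illustrated with a minimal sketch. All names here are hypothetical and this is not the REMO implementation: it only shows the core idea that a fixed structural template over the FOL-style facts yields a single, fixed set of attribute slots, eliminating the many possible matchings of first-order literals.

```python
# Minimal sketch of a FOL-to-attribute-value representation shift.
# Hypothetical names throughout; not the REMO system or its morion definition.

def reformulate(example, template):
    """Flatten a structured example (a dict mapping predicate names to
    tuples of ground arguments) into an attribute-value row, given a
    fixed 'template' of (predicate, argument-position) pairs.

    Each (predicate, position) pair becomes one attribute, so every
    reformulated example matches any other in exactly one way.
    """
    row = {}
    for pred, pos in template:
        args = example.get(pred, ())
        # A single fixed slot replaces the combinatorially many ways
        # of matching first-order literals between two examples.
        row[f"{pred}_{pos}"] = args[pos] if pos < len(args) else None
    return row

# A structured example: a Chinese character described by ground facts.
example = {
    "component": ("radical", "phonetic"),
    "layout": ("left_right",),
}
template = [("component", 0), ("component", 1), ("layout", 0)]

print(reformulate(example, template))
# {'component_0': 'radical', 'component_1': 'phonetic', 'layout_0': 'left_right'}
```

The interesting design question, which the paper addresses with the morion notion, is how to choose the template: a good structure depends on the learning task, and the paper's algorithm derives it automatically rather than requiring it by hand as in this sketch.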
