
What do Constructive Learners Really Learn?


Abstract

In constructive induction (CI), the learner's problem representation is modified as a normal part of the learning process. This may be necessary when the initial representation is inadequate or inappropriate. However, the distinction between constructive and non-constructive methods appears to be highly ambiguous: several conventional definitions of constructive induction include all conceivable learning processes. In this paper I argue that constructive learning should be identified with relational learning (i.e., that what constructive learners really learn is relationships), and I describe some of the benefits of adopting this definition.
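
The parity sketch below (Python, not taken from the paper) is one way to make this concrete: for a purely relational concept, no original attribute is informative on its own, but a single constructed feature that captures the relationship between attributes makes the concept trivially separable. The toy dataset, the `separation` score, and the constructed feature `x1 == x2` are all illustrative assumptions, not the paper's method.

```python
from itertools import product

# Toy parity data: class label = x1 XOR x2, a purely relational concept
# in which neither attribute carries any information on its own.
data = [((x1, x2), x1 ^ x2) for x1, x2 in product([0, 1], repeat=2)]

def separation(feature):
    """Fraction of examples correctly classified by predicting the
    majority class within each value of `feature`: 0.5 means the
    feature is uninformative here, 1.0 means it splits the classes
    perfectly."""
    groups = {}
    for x, y in data:
        groups.setdefault(feature(x), []).append(y)
    hits = sum(max(labels.count(0), labels.count(1)) for labels in groups.values())
    return hits / len(data)

# Each original attribute on its own tells us nothing about parity...
print(separation(lambda x: x[0]))          # 0.5
print(separation(lambda x: x[1]))          # 0.5

# ...but a constructed feature expressing the *relationship* between
# the attributes separates the classes completely.
print(separation(lambda x: x[0] == x[1]))  # 1.0
```

On this reading, what the constructive step contributes is not another attribute value but an explicit encoding of a relationship, which is the identification the paper argues for.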


Cite this article

Thornton, C. What do Constructive Learners Really Learn? Artificial Intelligence Review 13, 249–257 (1999). https://doi.org/10.1023/A:1006577209231
