Abstract
To solve a complex problem, an effective general approach is to decompose it into smaller, less complex and more manageable subproblems. In machine learning, this principle is the foundation of structured induction [44]: instead of learning a single complex classification rule from examples, one defines a concept hierarchy and learns classification rules for each of the (sub)concepts. Shapiro [44] used structured induction to classify a fairly complex chess endgame and demonstrated that the resulting solution was superior to the unstructured one in both complexity and comprehensibility ("brain-compatibility"). A chess master helped Shapiro structure the problem domain. Typically, applications of structured induction involve the manual development of the concept hierarchy and the manual selection and classification of examples from which the subconcept classification rules are induced; this is usually a tiresome process that requires a domain expert to be actively available over long periods of time. It would therefore be highly desirable to automate the problem decomposition task.
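To make the contrast concrete, the following is a minimal, hypothetical Python sketch of structured versus unstructured induction on a toy Boolean domain. It is not the authors' method or the decomposition algorithm discussed in this chapter; the domain, the attribute names a1..a4, the hand-crafted two-level hierarchy, and the use of scikit-learn decision trees as subconcept learners are all illustrative assumptions.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical toy domain: four binary attributes a1..a4, target concept
#   y = (a1 AND a2) OR (a3 AND a4),
# which decomposes naturally into two intermediate concepts c1 and c2.
X = rng.integers(0, 2, size=(300, 4))
c1 = X[:, 0] & X[:, 1]   # intermediate concept c1 = a1 AND a2
c2 = X[:, 2] & X[:, 3]   # intermediate concept c2 = a3 AND a4
y = c1 | c2              # top-level concept

# Unstructured induction: a single classifier from all attributes to y.
flat = DecisionTreeClassifier(random_state=0).fit(X, y)

# Structured induction: one small classifier per node of the hierarchy,
# assuming labelled examples are available for each subconcept.
m1 = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X[:, :2], c1)
m2 = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X[:, 2:], c2)
top = DecisionTreeClassifier(max_depth=2, random_state=0).fit(
    np.column_stack([c1, c2]), y)

def predict_structured(X_new):
    # Evaluate the hierarchy bottom-up: subconcepts first, then the root.
    p1 = m1.predict(X_new[:, :2])
    p2 = m2.predict(X_new[:, 2:])
    return top.predict(np.column_stack([p1, p2]))

X_test = rng.integers(0, 2, size=(100, 4))
y_test = (X_test[:, 0] & X_test[:, 1]) | (X_test[:, 2] & X_test[:, 3])
print("flat tree nodes:", flat.tree_.node_count)
print("structured tree nodes:", m1.tree_.node_count + m2.tree_.node_count + top.tree_.node_count)
print("structured accuracy:", (predict_structured(X_test) == y_test).mean())

In the structured variant each subconcept is learned from only its own attributes, so each tree stays small and easy to inspect; the difficulty the abstract points to is that the hierarchy and the subconcept examples are supplied by hand, which is what the chapter aims to automate.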
References
R. L. Ashenhurst. The decomposition of switching functions. Technical report, Bell Laboratories BL-1(11), pages 541–602, 1952.
A. W. Biermann, J. Fairfield, and T. Beres. Signature table systems and learning. IEEE Transactions on Systems, Man and Cybernetics, 12(5):635–648, 1982.
M. Bohanec, I. Bratko, and V. Rajkovič. An expert system for decision making. In H. G. Sol, editor, Processes and Tools for Decision Support. North-Holland, 1983.
M. Bohanec, B. Cestnik, and V. Rajkovič. A management decision support system for allocating housing loans. In P. Humphreys, L. Bannon, A. McCosh, and P. Migliarese, editors, Implementing Systems for Supporting Management Decisions, pages 34–43. Chapman & Hall, London, 1996.
M. Bohanec and V. Rajkovič. Knowledge acquisition and explanation for multi-attribute decision making. In 8th Intl Workshop on Expert Systems and their Applications, pages 59–78, Avignon, France, 1988.
M. Bohanec and V. Rajkovič. DEX: An expert system shell for decision support. Sistemica, 1(1):145–157, 1990.
I. Bratko, I. Mozetič, and N. Lavrač. KARDIO: a study in deep and qualitative knowledge for expert systems. MIT Press, 1989.
N. H. Bshouty, T. R. Hancock, and L. Hellerstein. Learning boolean read-once formulas over generalized bases. Journal of Computer and System Sciences, 50(3):521–542, 1995.
T. H. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to Algorithms. MIT Press, 1989.
H. A. Curtis. A New Approach to the Design of Switching Functions. Van Nostrand, Princeton, N.J., 1962.
J. Demšar, B. Zupan, M. Bohanec, and I. Bratko. Constructing intermediate concepts by decomposition of real functions. In M. van Someren and G. Widmer, editors, Proc. European Conference on Machine Learning, ECML-97, pages 93–107, Prague, April 1997. Springer.
J. Efstathiou and V. Rajkovič. Multiattribute decisionmaking using a fuzzy heuristic approach. IEEE Trans. on Systems, Man and Cybernetics, 9:326–333, 1979.
C. Files, R. Drechsler, and M. Perkowski. Functional decomposition of MVL functions using multi-valued decision diagrams. In International Symposium on Multi-Valued Logic, May 1997.
J. A. Goldman. Pattern theoretic knowledge discovery. In Proc. of the Sixth Int'l IEEE Conference on Tools with AI, 1994.
T. R. Hancock, M. Golea, and M. Marchand. Learning nonoverlapping perceptron networks from examples and membership queries. Machine Learning, 16(3):161–183, 1994.
R. Kohavi. Bottom-up induction of oblivious read-once decision graphs. In F. Bergadano and L. de Raedt, editors, Proc. European Conference on Machine Learning, pages 154–169. Springer-Verlag, 1994.
I. Kononenko. Estimating attributes: Analysis and extensions of RELIEF. In F. Bergadano and L. de Raedt, editors, Proceedings of the European Conference on Machine Learning, pages 171–182. Springer-Verlag, 1994.
I. Kononenko, E. Šimec, and M. Robnik Šikonja. Overcoming the myopia of inductive learning algorithms with ReliefF. Applied Intelligence Journal, 7(1):39–56, 1997.
Y.-T. Lai, K.-R. R. Pan, and M. Pedram. OBDD-based function decomposition: Algorithms and implementation. IEEE Transactions on Computer Aided Design of Integrated Circuits and Systems, 15(8):977–990, 1996.
Y.-T. Lai, M. Pedram, and S. Sastry. BDD-based decomposition of logic functions with application to FPGA synthesis. In 30th DAC, pages 642–647, 1993.
T. Luba. Decomposition of multiple-valued functions. In 25th Intl. Symposium on Multiple-Valued Logic, pages 256–261, Bloomington, Indiana, May 1995.
T. Luba and H. Selvaraj. A general approach to boolean function decomposition and its application in FPGA-based synthesis. VLSI Design, 3(3-4):289–300, 1995.
R. S. Michalski. A theory and methodology of inductive learning. In R. Michalski, J. Carbonell, and T. Mitchell, editors, Machine Learning: An Artificial Intelligence Approach, pages 83–134. Kaufmann, Palo Alto, CA, 1983.
R. S. Michalski. Understanding the nature of learning: Issues and research directions. In R. Michalski, J. Carbonell, and T. Mitchell, editors, Machine Learning: An Artificial Intelligence Approach, pages 3–25. Kaufmann, Los Altos, CA, 1986.
D. Michie. Problem decomposition and the learning of skills. In N. Lavrač and S. Wrobel, editors, Machine Learning: ECML-95, Lecture Notes in Artificial Intelligence 912, pages 17–31. Springer-Verlag, 1995.
I. Mozetič. Learning of qualitative models. In I. Bratko and N. Lavrač, editors, Progress in Machine Learning. Sigma Press, Wilmslow, England, 1987.
I. Mozetič. The role of abstractions in learning of qualitative models. In Proc. Fourth Int. Workshop on Machine Learning, Irvine, CA. Morgan Kaufmann, 1987.
S. Muggleton. Structuring knowledge by asking questions. In I. Bratko and N. Lavrač, editors, Progress in Machine Learning, pages 218–229. Sigma Press, 1987.
S. Muggleton. Inductive Acquisition of Expert Knowledge. Addison-Wesley, Wokingham, England, 1990.
P. M. Murphy and D. W. Aha. UCI Repository of machine learning databases http://www.ics.uci.edu/~mlearn/mlrepository.html. Irvine, CA: University of California, Department of Information and Computer Science, 1994.
C. G. Nevill-Manning and I. H. Witten. Identifying hierarchical structure in sequences: A linear-time algorithm. Journal of Artificial Intelligence Research, 7:67–82, 1997.
M. Olave, V. Rajkovič, and M. Bohanec. An application for admission in public school systems. In I. Th. M. Snellen, W. B. H. J. van de Donk, and J.-P. Baquiast, editors, Expert Systems in Public Administration, pages 145–160. Elsevier Science Publishers (North Holland), 1989.
M. Perkowski and H. Uong. Automatic design of finite state machines with electronically programmable devices. In Record of Northcon '87, pages 16/4.1–16/4.15, Portland, OR, 1987.
B. Pfahringer. Controlling constructive induction in CiPF. In F. Bergadano and L. de Raedt, editors, Machine Learning: ECML-94, pages 242–256. Springer-Verlag, 1994.
J. R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers, 1993.
J. R. Quinlan. Induction of decision trees. Machine Learning, 1(1):81–106, 1986.
H. Ragavan and L. Rendell. Lookahead feature construction for learning hard concepts. In Proc. Tenth International Machine Learning Conference, pages 252–259. Morgan Kaufmann, 1993.
V. Rajkovič and M. Bohanec. Decision support by knowledge explanation. In H. G. Sol and J. Vecsenyi, editors, Environments for Supporting Decision Processes. Elsevier Science Publishers B.V., 1991.
T. D. Ross, M. J. Noviskey, D. A. Gadd, and J. A. Goldman. Pattern theoretic feature extraction and constructive induction. In Proc. ML-COLT’ 94 Workshop on Constructive Induction and Change of Representation, New Brunswick, New Jersey, July 1994.
S. J. Russell. Tree-structured bias. In Proc. Seventh National Conference on Artificial Intelligence (AAAI-88), Saint Paul, MN, pages 641–645. Morgan Kaufmann, San Mateo, CA, 1988.
S. L. Salzberg. On comparing classifiers: Pitfalls to avoid and a recommended approach. Data Mining and Knowledge Discovery, 1:317–328, 1997.
A. Samuel. Some studies in machine learning using the game of checkers. IBM J. Res. Develop., 3:221–229, 1959.
A. Samuel. Some studies in machine learning using the game of checkers II: Recent progress. IBM J. Res. Develop., 11:601–617, 1967.
A. D. Shapiro. Structured induction in expert systems. Turing Institute Press in association with Addison-Wesley Publishing Company, 1987.
A. D. Shapiro and T. Niblett. Automatic induction of classification rules for a chess endgame. In M. R. B. Clarke, editor, Advances in Computer Chess 3, pages 73–92. Pergamon, Oxford, 1982.
I. Stahl. An overview of predicate invention techniques in ILP. In ESPRIT BRA 6020: Inductive Logic Programming, 1991.
P. Tadepalli and S. Russell. Learning from examples and membership queries with structured determinations. Machine Learning, 32:245–295, 1998.
S. B. Thrun et al. A performance comparison of different learning algorithms. Technical Report CMU-CS-91-197, Carnegie Mellon University, 1991.
W. Wan and M. A. Perkowski. A new approach to the decomposition of incompletely specified functions based on graph-coloring and local transformations and its application to FPGA mapping. In Proc. of the IEEE EURO-DAC '92, pages 230–235, Hamburg, September 1992.
D. J. A. Welsh and M. B. Powell. An upper bound on the chromatic number of a graph and its application to timetabling problems. Computer Journal, 10:85–86, 1967.
B. Zupan. Machine learning based on function decomposition. PhD thesis, University of Ljubljana, April 1997. Available at http://www.ai.ijs.si/BlazZupan/papers.html.
B. Zupan, M. Bohanec, J. Demšar, and I. Bratko. Feature transformation by function decomposition. IEEE Intelligent Systems & Their Applications, 13(2):38–43, March/April 1998.
B. Zupan, M. Bohanec, J. Demšar, and I. Bratko. Learning by discovering concept hierarchies. Artificial Intelligence, 109(1-2):211–242, 1999.
B. Zupan, I. Bratko, M. Bohanec, and J. Demšar. Induction of concept hierarchies from noisy data. In P. Langley, editor, Proceedings of the Seventeenth International Conference on Machine Learning (ICML-2000), pages 1199–1206, San Francisco, CA, 2000. Morgan Kaufmann.
Copyright information
© 2001 Springer-Verlag Berlin Heidelberg
About this chapter
Cite this chapter
Zupan, B., Bratko, I., Bohanec, M., Demšar, J. (2001). Function Decomposition in Machine Learning. In: Paliouras, G., Karkaletsis, V., Spyropoulos, C.D. (eds) Machine Learning and Its Applications. ACAI 1999. Lecture Notes in Computer Science(), vol 2049. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44673-7_4
DOI: https://doi.org/10.1007/3-540-44673-7_4
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-42490-1
Online ISBN: 978-3-540-44673-6