
Self-Organizing Decomposition of Functions

  • Conference paper
Multiple Classifier Systems (MCS 2000)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1857)


Abstract

This paper discusses issues raised by various approaches to function decomposition and modular networks, and offers a unified framework for multiple classifier (MC) systems. It argues that there is as yet no general approach to this problem, although several methods provide solutions in situations where a parametric labelling of the function simplifies the task facing the classifying networks. An MC connectionist system is proposed in which individual networks process sub-spaces of a function, partitioned by the similarity of patterns within its input domain; it is evaluated against previous approaches to modular networks and in the broader context of MC systems. This simple automatic partitioning scheme is investigated on several different problems and is shown to be effective. The degree to which the sub-space networks specialize on a predictable subset of the overall function is assessed, and their performance is compared with equivalent single-network and undivided multiversion systems. Statistical measures of ‘diversity’ previously used to assess voting MC systems are shown to measure the degree of specialization, or bias, within groups of sub-space nets, and to provide a useful indicator across the range of MC systems. By successively increasing the overlap between sub-space partitions, we show a transition from expert subnets, through voting version sets, to optimal single classifiers. Finally, a unified framework for MC systems is presented.
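The partition-then-specialize scheme the abstract describes can be illustrated with a short sketch. Everything below is hypothetical scaffolding, not the authors' code: k-means stands in for the self-organizing (Kohonen-style) partitioning of the input domain, small multilayer perceptrons stand in for the sub-space nets, and the toy annulus task, the cluster count K, and all parameter values are invented for illustration. The final loop computes a simple pairwise 'diversity' statistic, the fraction of test cases on which exactly one of two nets fails, in the spirit of the voting-system measures the abstract mentions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy two-class task: label 1 inside an annulus around the origin.
X = rng.uniform(-1.0, 1.0, size=(2000, 2))
radius = np.linalg.norm(X, axis=1)
y = ((radius > 0.4) & (radius < 0.8)).astype(int)
X_tr, y_tr, X_te, y_te = X[:1500], y[:1500], X[1500:], y[1500:]

# 1. Self-organize the input domain into sub-spaces using only input
#    similarity (no output labels are consulted at this stage).
K = 4
km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(X_tr)

# 2. Train one small "expert" net per sub-space.
experts = []
for k in range(K):
    mask = km.labels_ == k
    net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=k)
    experts.append(net.fit(X_tr[mask], y_tr[mask]))

# 3. Route each test point to the expert that owns its sub-space.
owner = km.predict(X_te)
pred = np.empty(len(X_te), dtype=int)
for k in range(K):
    mask = owner == k
    if mask.any():
        pred[mask] = experts[k].predict(X_te[mask])
print("decomposed system accuracy:", (pred == y_te).mean())

# 4. Pairwise diversity over the whole domain: the fraction of test
#    cases on which exactly one of the two nets fails.  High values
#    indicate strong specialization (bias) toward a sub-space.
errs = np.array([net.predict(X_te) != y_te for net in experts])
for i in range(K):
    for j in range(i + 1, K):
        print(f"diversity({i},{j}) = {np.mean(errs[i] ^ errs[j]):.3f}")
```

Setting K = 1 recovers the equivalent single-network system, while allowing the partitions to overlap (assigning borderline points to several experts and voting among them) corresponds to the transition toward voting version sets described in the abstract.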




Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Griffith, N., Partridge, D. (2000). Self-Organizing Decomposition of Functions. In: Multiple Classifier Systems. MCS 2000. Lecture Notes in Computer Science, vol 1857. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45014-9_24


  • DOI: https://doi.org/10.1007/3-540-45014-9_24


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-67704-8

  • Online ISBN: 978-3-540-45014-6

  • eBook Packages: Springer Book Archive
