CDUL: Class directed unsupervised learning

Article · Neural Computing & Applications

Abstract

A novel neural network called Class Directed Unsupervised Learning (CDUL) is introduced. The architecture, based on a Kohonen self-organising network, uses additional input nodes to feed class knowledge to the network during training, in order to optimise the final positioning of Kohonen nodes in feature space. The structure and training of CDUL networks are detailed, showing that (a) networks cannot suffer from the problem of single Kohonen nodes being trained by vectors of more than one class, (b) the number of Kohonen nodes necessary to represent the classes is found during training, and (c) the number of training-set passes CDUL requires is low in comparison with similar networks. CDUL is subsequently applied to the classification of chemical excipients from Near Infrared (NIR) reflectance spectra, and its performance is compared with three other unsupervised paradigms. The results demonstrate superior performance that remains relatively constant across a wide range of network parameters.
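To make the idea of class-directed training concrete, the sketch below shows one plausible reading of the abstract: each training vector is augmented with extra input nodes carrying a one-hot class code, so the class labels steer which Kohonen node wins and is updated, while recall matches on the feature weights alone. All names, the gain parameter, and the fixed node count are illustrative assumptions for this sketch; they are not the paper's exact CDUL procedure, which in particular grows the number of nodes during training.

```python
import numpy as np

class ClassDirectedKohonen:
    """Minimal sketch of a competitive layer with class-augmented inputs.

    Hypothetical illustration only: the node count is fixed here, whereas
    CDUL as described in the abstract finds the required number of nodes
    during training.
    """

    def __init__(self, n_features, n_classes, n_nodes, lr=0.1, class_gain=1.0):
        self.n_features = n_features
        self.n_classes = n_classes
        self.lr = lr
        self.class_gain = class_gain          # strength of the class inputs
        # Each node holds weights for the feature inputs plus the class inputs.
        self.weights = np.random.rand(n_nodes, n_features + n_classes)

    def _augment(self, x, label=None):
        # Append a scaled one-hot class vector; zeros when the label is unknown.
        c = np.zeros(self.n_classes)
        if label is not None:
            c[label] = self.class_gain
        return np.concatenate([np.asarray(x, dtype=float), c])

    def train_step(self, x, label):
        v = self._augment(x, label)
        winner = int(np.argmin(np.linalg.norm(self.weights - v, axis=1)))
        # Standard competitive update: move the winning node towards the input.
        self.weights[winner] += self.lr * (v - self.weights[winner])
        return winner

    def classify(self, x):
        # At recall the class is unknown, so match on the feature weights only.
        d = np.linalg.norm(
            self.weights[:, :self.n_features] - np.asarray(x, dtype=float), axis=1
        )
        winner = int(np.argmin(d))
        # A node's class weights indicate which class it was pulled towards.
        return int(np.argmax(self.weights[winner, self.n_features:]))
```

Under these assumptions, a few passes of train_step over labelled spectra followed by classify on unseen spectra mimics the training-then-recall usage described above; how CDUL actually allocates new nodes and guarantees that no node is trained by vectors of more than one class is detailed in the full paper.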

About this article

Cite this article

Mackenzie, M.D. CDUL: Class directed unsupervised learning. Neural Comput & Applic 3, 2–16 (1995). https://doi.org/10.1007/BF01414172
