
Information Maximization in a Linear Manifold Topographic Map

Neural Processing Letters
Abstract

This article addresses the problem of unsupervised learning of multiple linear manifolds in a topology-preserving neural map. The model finds simple linear approximations of regions of the unknown data manifold. Each neuron of the map corresponds to a linear manifold whose basis and mean vectors, together with its on- and off-manifold standard deviations, must be learned. The learning rules are derived from competition between the neurons and maximization of an approximation of the mutual information between the input and the output of each neuron. Neighborhood functions are incorporated into the learning rules so that the map develops its topology-preserving property. For two special density models of the input data, the optimal nonlinear input/output mappings of the neurons are derived. Experimental results show good performance of the proposed method on synthetic and practical problems compared with other relevant techniques.
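To make the setup concrete, the sketch below illustrates the general family of model the abstract describes: a one-dimensional map of neurons, each holding a mean vector and an orthonormal basis for a local linear manifold, trained by competition (the winner is the neuron with the smallest off-manifold reconstruction error) with neighborhood-weighted updates. This is a hedged toy illustration, not the authors' information-maximization rules: the Oja-style basis update, the learning rate `eta`, the neighborhood width `sigma`, and the annealing schedule are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: points near two 1-D lines in 2-D, with small noise.
t = rng.uniform(-1.0, 1.0, size=200)
line1 = np.stack([t, 0.5 * t], axis=1) + 0.02 * rng.standard_normal((200, 2))
line2 = np.stack([t, -0.8 * t + 2.0], axis=1) + 0.02 * rng.standard_normal((200, 2))
X = np.vstack([line1, line2])

K, d, q = 4, 2, 1                              # map size, input dim, manifold dim
means = X[rng.choice(len(X), K)]               # initialize means from the data
bases = [np.linalg.qr(rng.standard_normal((d, q)))[0] for _ in range(K)]

def residual(x, m, B):
    """Off-manifold residual of x w.r.t. the affine manifold (mean m, basis B)."""
    v = x - m
    return v - B @ (B.T @ v)

eta, sigma = 0.1, 1.0                          # assumed learning rate / neighborhood width
for epoch in range(30):
    for x in rng.permutation(X):
        # Competition: winner has the smallest off-manifold reconstruction error.
        errs = [np.linalg.norm(residual(x, means[k], bases[k])) for k in range(K)]
        c = int(np.argmin(errs))
        for k in range(K):
            h = np.exp(-((k - c) ** 2) / (2 * sigma ** 2))   # neighborhood weight
            v = x - means[k]
            means[k] += eta * h * v                          # pull mean toward x
            # Oja-style basis rotation toward v, then re-orthonormalize via QR.
            y = bases[k].T @ v
            bases[k] += eta * h * np.outer(v - bases[k] @ y, y)
            bases[k] = np.linalg.qr(bases[k])[0]
    sigma *= 0.95                                            # anneal the neighborhood

final_err = np.mean([min(np.linalg.norm(residual(x, means[k], bases[k]))
                         for k in range(K)) for x in X])
print(f"mean off-manifold error: {final_err:.4f}")
```

After training, the mean off-manifold error approaches the noise level of the data, and neighboring neurons on the map tend to model neighboring regions of the input, which is the topology-preserving behavior the paper's learning rules are designed to produce.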



Author information

Correspondence to Reza Safabakhsh.

Cite this article

Adibi, P., Safabakhsh, R. Information Maximization in a Linear Manifold Topographic Map. Neural Process Lett 29, 155–178 (2009). https://doi.org/10.1007/s11063-009-9101-9
