Feature Detection by Structural Enhanced Information

  • Conference paper
Advances in Neuro-Information Processing (ICONIP 2008)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 5507)

Abstract

In this paper, we propose structural enhanced information for detecting the main features in input patterns. Structural enhanced information comprises three types: first-, second-, and third-order enhanced information. The first-order information concerns the enhancement of competitive units themselves through elements in a network, while the second-order information depends on the enhancement of competitive units with respect to input patterns. The third-order information is then obtained by subtracting the effect of the first-order information from the second-order information, so it represents information on input patterns more explicitly. With this structural enhanced information, we can estimate more detailed features in input patterns. We applied the method to the well-known Iris problem and succeeded in extracting detailed and important features, especially by using the third-order information.
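The abstract gives no equations, but the subtraction that defines the third-order information can be sketched numerically. The snippet below is only a hypothetical reading, not the paper's formulation: it assumes that competitive-unit firing rates come from a softmax over Euclidean distances, and that "enhanced information" is measured as the entropy reduction of those rates. All names (`beta`, `enhancement_information`, the toy weights and patterns) are invented for illustration.

```python
import numpy as np

def softmax_activations(distances, beta):
    """Competitive-unit firing rates from distances (assumed form)."""
    a = np.exp(-beta * distances)
    return a / a.sum()

def enhancement_information(p):
    """Entropy reduction of a distribution over M units: log M - H(p) >= 0."""
    M = len(p)
    p = np.clip(p, 1e-12, 1.0)
    return np.log(M) + np.sum(p * np.log(p))

# Toy setup: 2 competitive units (weight vectors) and 3 input patterns.
rng = np.random.default_rng(0)
weights = rng.normal(size=(2, 4))
patterns = rng.normal(size=(3, 4))
beta = 2.0

# First-order: enhancement of the units themselves, here computed from
# the mean distance of each unit to all inputs (one distribution overall).
mean_dist = np.mean([np.linalg.norm(patterns - w, axis=1) for w in weights],
                    axis=1)
first_order = enhancement_information(softmax_activations(mean_dist, beta))

# Second-order: enhancement of the units with each input pattern,
# averaged over the patterns.
second_order = np.mean([
    enhancement_information(
        softmax_activations(np.linalg.norm(weights - x, axis=1), beta))
    for x in patterns])

# Third-order: subtract the first-order effect from the second-order
# information, leaving the part attributable to the input patterns.
third_order = second_order - first_order
```

Under this reading, a unit that responds selectively to particular inputs but not on average contributes positively to `third_order`, which matches the abstract's claim that the third-order information represents input-pattern information more explicitly.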



Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kamimura, R. (2009). Feature Detection by Structural Enhanced Information. In: Köppen, M., Kasabov, N., Coghill, G. (eds) Advances in Neuro-Information Processing. ICONIP 2008. Lecture Notes in Computer Science, vol 5507. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-03040-6_21

  • DOI: https://doi.org/10.1007/978-3-642-03040-6_21

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-03039-0

  • Online ISBN: 978-3-642-03040-6

  • eBook Packages: Computer Science (R0)
