
A Novel Generalized Congruence Neural Networks

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 3496)

Abstract

All existing architectures and learning algorithms for the Generalized Congruence Neural Network (GCNN) appear to have shortcomings or to lack a rigorous theoretical foundation. In this paper, a novel GCNN architecture (BPGCNN) is proposed, and a new error back-propagation learning algorithm is developed for it. Experimental results on several benchmark problems show that the proposed BPGCNN outperforms the standard sigmoidal BPNN and several improved BPNN variants in both convergence speed and learning capability, and overcomes the drawbacks of other existing GCNNs.

This work was supported by the Program for New Century Excellent Talents in University, the National Natural Science Foundation of China under Grant No. 60373111, the Science and Technology Research Program of the Municipal Education Committee of Chongqing, China, under Grant No. 040505, and the Key Lab of Computer Network and Communication Technology of Chongqing, China.
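
The abstract does not spell out the BPGCNN's activation function or update rules, but the general idea behind a generalized congruence neuron is a periodic, modulus-based activation used in place of the sigmoid, trained by ordinary error back-propagation. The following is a minimal NumPy sketch under explicit assumptions: the activation is taken to be a zero-centred sawtooth x mod P (one plausible reading of "generalized congruence", not the paper's actual definition), the modulus P, network size, learning rate, and the XOR task are all illustrative choices, and the surrogate gradient of the sawtooth is taken to be 1 away from its jump points.

    # Minimal sketch: one-hidden-layer network with a "generalized
    # congruence" style (sawtooth) activation, trained by plain
    # back-propagation. Every hyperparameter here is an illustrative
    # assumption; the actual BPGCNN is defined in the full paper.
    import numpy as np

    P = 4.0  # assumed congruence modulus (hypothetical choice)

    def congruence(x):
        # Zero-centred sawtooth: (x mod P) - P/2.
        return np.mod(x, P) - P / 2.0

    def congruence_grad(x):
        # The sawtooth has slope 1 almost everywhere; its jump points
        # form a measure-zero set, so 1 is used as the gradient.
        return np.ones_like(x)

    rng = np.random.default_rng(0)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])  # XOR, a standard benchmark

    # One hidden layer of 4 units (arbitrary size), linear output.
    W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)
    lr = 0.05

    for epoch in range(5000):
        # Forward pass.
        z1 = X @ W1 + b1
        h = congruence(z1)
        out = h @ W2 + b2
        err = out - y

        # Backward pass (mean squared error over the 4 samples).
        g_out = 2.0 * err / len(X)
        g_W2 = h.T @ g_out; g_b2 = g_out.sum(0)
        g_h = g_out @ W2.T
        g_z1 = g_h * congruence_grad(z1)
        g_W1 = X.T @ g_z1; g_b1 = g_z1.sum(0)

        for p_, g_ in ((W1, g_W1), (b1, g_b1), (W2, g_W2), (b2, g_b2)):
            p_ -= lr * g_

    print("final MSE:", float(np.mean(err ** 2)))

This sketch only illustrates how a modulus-style activation slots into a standard back-propagation loop; the paper itself derives the error back-propagation rules for its specific BPGCNN architecture.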






Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Chen, Y., Wang, G., Jin, F., Yan, T. (2005). A Novel Generalized Congruence Neural Networks. In: Wang, J., Liao, X., Yi, Z. (eds) Advances in Neural Networks – ISNN 2005. ISNN 2005. Lecture Notes in Computer Science, vol 3496. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11427391_72


  • DOI: https://doi.org/10.1007/11427391_72

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-25912-1

  • Online ISBN: 978-3-540-32065-4

  • eBook Packages: Computer Science, Computer Science (R0)
