
Efficient Learning in Adaptive Processing of Data Structures


Abstract

Many researchers have explored the use of neural network models for the adaptive processing of data structures. The learning formulation for one of these models is known as the Backpropagation Through Structure (BPTS) algorithm. The main limitations of the BPTS algorithm are its slow convergence and its difficulty in learning long-term dependencies. In this Letter, a novel heuristic algorithm is proposed: it optimizes the free parameters of the node representation in the data structure using a hybrid learning algorithm. Encouraging results demonstrate that the proposed algorithm outperforms the BPTS algorithm.
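For readers unfamiliar with BPTS, the baseline idea is to unfold a network over a tree so that each node's representation is computed from its attributes and its children's states, and the error gradient is propagated back down through the structure. The sketch below is a rough illustration of that baseline only, not the authors' hybrid algorithm; all names, dimensions, and the toy tree are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
ATTR, HID, MAX_CH = 4, 8, 2   # attribute size, state size, max out-degree (assumed)

# "Free parameters of the node representation": an attribute weight block,
# one weight block per child position, a bias, and a scalar output layer.
W_a = rng.normal(0.0, 0.1, (HID, ATTR))
W_c = rng.normal(0.0, 0.1, (MAX_CH, HID, HID))
b = np.zeros(HID)
w_o = rng.normal(0.0, 0.1, HID)

class Node:
    def __init__(self, attr, children=()):
        self.attr = np.asarray(attr, dtype=float)
        self.children = list(children)

def forward(node):
    # Encode the tree bottom-up: a node's state mixes its own attributes
    # with the states of its (at most MAX_CH) children.
    z = W_a @ node.attr + b
    for i, child in enumerate(node.children):
        z += W_c[i] @ forward(child)
    node.h = np.tanh(z)
    return node.h

def backward(node, dh, grads):
    # BPTS step: the error at a node passes through the tanh and is then
    # pushed down to every child via that child's positional weight block.
    dz = dh * (1.0 - node.h ** 2)
    grads["W_a"] += np.outer(dz, node.attr)
    grads["b"] += dz
    for i, child in enumerate(node.children):
        grads["W_c"][i] += np.outer(dz, child.h)
        backward(child, W_c[i].T @ dz, grads)

# One plain-gradient update on a toy two-level tree with scalar target 1.0.
tree = Node([1, 0, 0, 1], [Node([0, 1, 0, 0]), Node([0, 0, 1, 0])])
lr, target = 0.1, 1.0
y = w_o @ forward(tree)
grads = {"W_a": np.zeros_like(W_a), "W_c": np.zeros_like(W_c), "b": np.zeros_like(b)}
backward(tree, (y - target) * w_o, grads)   # dL/dh_root for L = (y - t)^2 / 2
W_a -= lr * grads["W_a"]
W_c -= lr * grads["W_c"]
b -= lr * grads["b"]
w_o -= lr * (y - target) * tree.h
print(f"root output before update: {y:+.4f}")
```

The repeated multiplications by the child-slot weights along deep paths are what give rise to the slow convergence and long-term dependency problems the abstract mentions; the Letter's contribution is to replace this plain gradient step on the free parameters with a hybrid learning scheme.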




Cite this article

Cho, S.Y., Chi, Z., Wang, Z. et al. Efficient Learning in Adaptive Processing of Data Structures. Neural Processing Letters 17, 175–190 (2003). https://doi.org/10.1023/A:1023680629977
