
On the Internal Representations of Product Units

  • Published in: Neural Processing Letters

Abstract

This paper explores the internal representation power of product units [1], which act as the functional nodes in the hidden layer of a multi-layer feedforward network. Properties that emerge under binary input provide insight into the superior computational power of the product unit. Using the binary computation problems of symmetry and parity as illustrative examples, we show that learning arbitrarily complex internal representations is more achievable with product units than with traditional summing units.
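As context for the parity example, a product unit in the sense of [1] computes y = Π_i x_i^{w_i}. The following minimal sketch (the function name `product_unit` and the {-1, +1} binary coding are illustrative assumptions, not the paper's exact network) shows how a single product unit with unit weights realizes N-bit parity, a task that a single summing unit (perceptron) cannot solve:

```python
def product_unit(x, w):
    """Product unit [1]: y = prod(x_i ** w_i) over all inputs.

    Hypothetical helper for illustration; inputs are coded as
    -1/+1 rather than 0/1 so that multiplication tracks parity.
    """
    result = 1.0
    for xi, wi in zip(x, w):
        result *= xi ** wi
    return result

# With all weights = 1, the unit outputs the product of the inputs:
# +1 when the number of -1 inputs is even, -1 when it is odd,
# i.e. a single unit computes N-bit parity.
w = [1, 1, 1, 1]
assert product_unit([1, 1, 1, 1], w) == 1     # zero -1s: even parity
assert product_unit([-1, 1, 1, 1], w) == -1   # one -1: odd parity
assert product_unit([-1, -1, 1, 1], w) == 1   # two -1s: even parity
```

By contrast, a summing unit computes a linear threshold of its inputs, and parity is not linearly separable, so a summing-unit network needs a hidden layer whose size grows with N.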


References

  1. Durbin, R. and Rumelhart, D. E.: Product units: a computationally powerful and biologically plausible extension to backpropagation networks, Neural Computation 1 (1989), 133-142.

  2. Giles, C. L., Miller, C. B., Chen, D., Chen, H. H., Sun, G. Z. and Lee, Y. C.: Learning and extracting finite state automata with second order recurrent neural networks, Neural Computation 4(3) (1992), 393-405.

  3. Zeng, Z., Goodman, R. M. and Smyth, P.: Learning finite state machines with self-clustering recurrent networks, Neural Computation 5 (1993), 976-990.

  4. Elman, J. L.: Distributed representations, simple recurrent networks, and grammatical structure, Machine Learning 7 (1991), 195-225.

  5. Lin, C. T. and Lee, C. S.: Neural Fuzzy Systems, A Neuro-Fuzzy Synergism to Intelligent Systems, Prentice-Hall, 1996.

  6. Rumelhart, D. E. and McClelland, J. L.: Parallel Distributed Processing, Explorations in the Microstructure of Cognition, Vol. 1, MIT Press, Cambridge, 1986.

  7. Minsky, M. and Papert, S.: Perceptrons, MIT Press, Cambridge, 1969.

  8. Chen and Bastani, F.: ANN with two-dendrite neurons and its weight initialization, Proc. of Internat. Joint Conf. on Neural Networks, 1992, Vol. 3, pp. 139-146.

  9. Chung, P. C. and Krile, T. F.: Reliability characteristics of quadratic Hebbian-type associative memories in optical and electronic network implementations, IEEE Trans. Neural Networks 6(2) (1995), 357-367.

  10. Wolpert, S. and Micheli-Tzanakou, E.: A Neuromime in VLSI, IEEE Trans. Neural Networks 7(2) (1996), 300-306.

  11. Baldi, P. and Venkatesh, S.: Random interactions in higher order neural networks, IEEE Trans. Inf. Theory 39 (1993), 274-283.

  12. Wang, J. H. and Jeng, M. D.: Performance characterization of product unit neural network associative memories, Proc. of 9th Internat. Conf. CAD/CAM, Robotics and Factories of the Future, Newark, NJ, August 1993.

  13. Jang, J. S. R., Sun, C. T. and Mizutani, E.: Neuro-Fuzzy and Soft Computing, Prentice-Hall, 1997.

Cite this article

Wang, JH., Yu, YW. & Tsai, JH. On the Internal Representations of Product Units. Neural Processing Letters 12, 247–254 (2000). https://doi.org/10.1023/A:1026534303563
