
Feature detectors by autoencoders: Decomposition of input patterns into atomic features by neural networks


Abstract

In this paper, we propose a feature detector for neural networks. Our feature detector aims to decompose input patterns into their minimum constituents, or atomic features. Atomic features are classified into features common to all input patterns and features specific to each pattern. Accordingly, our feature detector consists mainly of a common feature detector and a distinctive feature detector; its other two components are an information maximizer and an error minimizer. The distinctive feature detector is realized by the information maximizer, which increases the information specific to each pattern as much as possible. The error minimizer is a device that minimizes the difference between targets and outputs, that is, a usual neural network. We applied our feature detector to two problems: the detection of vertical and horizontal bars, and phonological feature detection. In both cases, experimental results confirmed that distinctive features could be extracted clearly and that the common feature detector could extract features as close as possible to the common features.
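
The abstract's two objectives suggest a concrete training scheme: maximize the information the hidden units carry about individual patterns while minimizing output error. The sketch below is one plausible reading, not the authors' formulation; the entropy-based information measure, the FeatureDetector architecture, the toy data, and the trade-off weight beta are all assumptions introduced for illustration.

    # Hedged sketch (not the paper's exact method): a small autoencoder whose
    # loss combines squared reconstruction error with an information term,
    # here taken to be the negative entropy of the normalized hidden
    # activations. Maximizing this term pushes each pattern to activate few,
    # pattern-specific hidden units (distinctive features), while the error
    # term keeps the reconstruction faithful.
    import torch
    import torch.nn as nn

    class FeatureDetector(nn.Module):
        def __init__(self, n_in, n_hidden):
            super().__init__()
            self.encode = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
            self.decode = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

        def forward(self, x):
            h = self.encode(x)          # hidden activations in (0, 1)
            return h, self.decode(h)

    def hidden_information(h, eps=1e-8):
        # Normalize activations per pattern so they form a distribution p_j,
        # then return log M - H(p): zero for uniform activation, maximal when
        # a single hidden unit fires (one assumed reading of "information").
        p = h / (h.sum(dim=1, keepdim=True) + eps)
        entropy = -(p * (p + eps).log()).sum(dim=1)
        return torch.log(torch.tensor(float(h.shape[1]))) - entropy

    # Toy stand-in data; binary bar patterns on a 5x5 grid would go here.
    x = torch.rand(16, 25)              # 16 hypothetical input patterns
    model = FeatureDetector(n_in=25, n_hidden=8)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    beta = 0.1                          # hypothetical trade-off weight

    for step in range(200):
        h, y = model(x)
        loss = ((y - x) ** 2).mean() - beta * hidden_information(h).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

Under this reading, raising beta trades reconstruction fidelity for sparser, more pattern-specific hidden units, which is the behavior the abstract attributes to the distinctive feature detector.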





Cite this article

Kamimura, R., Nakanishi, S. Feature detectors by autoencoders: Decomposition of input patterns into atomic features by neural networks. Neural Process Lett 2, 17–22 (1995). https://doi.org/10.1007/BF02309011
