
Adaptive noise injection for input variables relevance determination

  • Conference paper
  • Part III: Learning: Theory and Algorithms

Artificial Neural Networks — ICANN'97 (ICANN 1997)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1327)

Abstract

In this paper we consider the application of training with noise in multi-layer perceptrons to the determination of input variable relevance. Noise injection is modified so as to penalize irrelevant features. The proposed algorithm is attractive in that it requires the tuning of a single parameter, which controls the penalization of the inputs together with the complexity of the model. After the presentation of the method, experimental evidence is given on simulated data sets.
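The abstract only outlines the mechanism: per-input noise levels that are adapted during backpropagation training so that irrelevant inputs are increasingly corrupted, all governed by a single global parameter. The following Python sketch illustrates that idea under stated assumptions; the toy network, the per-input levels `sigma`, the global budget `total`, and the weight-based adaptation rule are inventions of this example, not the paper's actual update rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: the target depends only on x0;
# x1 and x2 are irrelevant inputs the method should penalize.
n, d, h = 200, 3, 10
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

# One-hidden-layer perceptron trained by gradient descent on squared error.
W1 = rng.normal(scale=0.5, size=(d, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.5, size=h);      b2 = 0.0

# Per-input noise levels under a single global budget `total`
# (the lone tuning parameter in this sketch; hypothetical parameterization).
total = 1.5
sigma = np.full(d, total / d)
lr = 0.05

for epoch in range(2000):
    # Inject per-input Gaussian noise into the training inputs.
    Xn = X + rng.normal(size=X.shape) * sigma

    # Forward pass.
    A = np.tanh(Xn @ W1 + b1)
    err = A @ W2 + b2 - y

    # Backward pass (mean squared error); dA is computed before W2 changes.
    dA = np.outer(err, W2) * (1.0 - A**2)
    W2 -= lr * (A.T @ err) / n
    b2 -= lr * err.mean()
    W1 -= lr * (Xn.T @ dA) / n
    b1 -= lr * dA.mean(axis=0)

    # Illustrative adaptation rule (not the paper's): redistribute the noise
    # budget so that inputs whose first-layer weights carry little signal
    # receive more noise, which in turn shrinks those weights further.
    inv = 1.0 / (np.sum(W1**2, axis=1) + 1e-8)
    sigma = total * inv / inv.sum()

print("per-input noise levels:", np.round(sigma, 3))
# Expected outcome: sigma[1] and sigma[2] grow while sigma[0] shrinks,
# flagging x1 and x2 as irrelevant.
```

The feedback loop is the point of the sketch: larger noise on an input acts like extra regularization of the weights attached to that input, which further lowers its apparent relevance, so the noise levels and the relevance estimates co-adapt over training under one global parameter.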




Editor information

Wulfram Gerstner, Alain Germond, Martin Hasler, Jean-Daniel Nicoud


Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Grandvalet, Y., Canu, S. (1997). Adaptive noise injection for input variables relevance determination. In: Gerstner, W., Germond, A., Hasler, M., Nicoud, JD. (eds) Artificial Neural Networks — ICANN'97. ICANN 1997. Lecture Notes in Computer Science, vol 1327. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0020198


  • DOI: https://doi.org/10.1007/BFb0020198

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-63631-1

  • Online ISBN: 978-3-540-69620-9

  • eBook Packages: Springer Book Archive
