
Geometrical Selection of Important Inputs with Feedforward Neural Networks

  • Conference paper
Artificial Neural Nets and Genetic Algorithms

Abstract

In this paper, we introduce a method for efficiently evaluating the ‘importance’ of each coordinate of the input vector of a neural network. This measure can be used to gain insight into the data under study. It can also be used to suppress irrelevant inputs in order to speed up the classification performed by the network.
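As a rough illustration of this kind of input-importance measurement (the paper's geometric criterion itself is not reproduced on this page), the sketch below ranks the inputs of a small feedforward network by the average magnitude of the output's gradient with respect to each input coordinate, a standard sensitivity-based stand-in. All names, shapes, and parameters here are illustrative, not taken from the paper.

    # Illustrative sketch only: ranks inputs of a one-hidden-layer MLP by mean
    # gradient magnitude, a common sensitivity-based importance measure. This
    # is a stand-in for the paper's geometric criterion, which is not given here.
    import numpy as np

    rng = np.random.default_rng(0)

    def mlp_forward(x, W1, b1, W2, b2):
        """One-hidden-layer MLP with tanh activation; returns output and hidden activations."""
        h = np.tanh(W1 @ x + b1)
        y = W2 @ h + b2
        return y, h

    def input_sensitivity(X, W1, b1, W2, b2):
        """Mean |dy/dx_i| over the samples in X, one score per input coordinate."""
        sens = np.zeros(X.shape[1])
        for x in X:
            y, h = mlp_forward(x, W1, b1, W2, b2)
            # dy/dx = W2 @ diag(1 - h^2) @ W1 for tanh hidden units
            grad = (W2 * (1.0 - h**2)) @ W1   # shape: (n_out, n_in)
            sens += np.abs(grad).sum(axis=0)
        return sens / len(X)

    # Toy network: 5 inputs, but only the first two are wired to the hidden layer.
    n_in, n_hid, n_out = 5, 8, 1
    W1 = rng.normal(size=(n_hid, n_in)); W1[:, 2:] = 0.0   # inputs 2..4 irrelevant
    b1 = rng.normal(size=n_hid)
    W2 = rng.normal(size=(n_out, n_hid)); b2 = rng.normal(size=n_out)

    X = rng.normal(size=(200, n_in))
    scores = input_sensitivity(X, W1, b1, W2, b2)
    print("importance scores:", np.round(scores, 3))
    print("ranked inputs:", np.argsort(scores)[::-1])  # inputs 0 and 1 dominate

Inputs whose scores are near zero (here, coordinates 2 to 4 by construction) would be candidates for suppression, shrinking the input layer and speeding up classification, as the abstract describes.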






Copyright information

© 1998 Springer-Verlag Wien

About this paper

Cite this paper

Rossi, F. (1998). Geometrical Selection of Important Inputs with Feedforward Neural Networks. In: Artificial Neural Nets and Genetic Algorithms. Springer, Vienna. https://doi.org/10.1007/978-3-7091-6492-1_118


  • DOI: https://doi.org/10.1007/978-3-7091-6492-1_118

  • Publisher Name: Springer, Vienna

  • Print ISBN: 978-3-211-83087-1

  • Online ISBN: 978-3-7091-6492-1

  • eBook Packages: Springer Book Archive
