Relevance Metrics to Reduce Input Dimensions in Artificial Neural Networks

  • Conference paper
Artificial Neural Networks – ICANN 2007 (ICANN 2007)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 4668)

Included in the following conference series: International Conference on Artificial Neural Networks (ICANN)

Abstract

The reduction of input dimensionality is an important subject in modelling, knowledge discovery and data mining. Indeed, an appropriate combination of inputs is desirable in order to obtain models with better generalisation capabilities. There are several approaches to performing input selection. In this work we deal with techniques guided by measures of input relevance or input sensitivity. Six strategies for assessing input relevance were tested on four benchmark datasets using a backward-selection wrapper. The results show that a group of these techniques produces input combinations with better generalisation capabilities, even though the implemented wrapper does not compute any measure of generalisation performance.
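The paper's implementation is not reproduced on this page, but the abstract describes the general scheme: rank inputs by a relevance measure, then iteratively remove the weakest input with a backward-selection wrapper. The sketch below is a minimal illustration of that scheme, not the authors' method; the permutation-based sensitivity score, the MLPRegressor model, and the n_keep stopping rule are assumptions chosen for the example (the paper compares six relevance strategies, which may differ from this one).

```python
# Minimal sketch of a relevance-driven backward-selection wrapper.
# Assumptions: permutation-based sensitivity as the relevance metric,
# a small MLP as the model, and a fixed target number of inputs.
import numpy as np
from sklearn.neural_network import MLPRegressor

def relevance(model, X, y, rng):
    """Sensitivity of a fitted model to each input: the increase in MSE
    when that input's column is randomly permuted."""
    base = np.mean((model.predict(X) - y) ** 2)
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        scores[j] = np.mean((model.predict(Xp) - y) ** 2) - base
    return scores

def backward_selection(X, y, n_keep, seed=0):
    """Repeatedly retrain on the surviving inputs and drop the one the
    model is least sensitive to, until n_keep inputs remain."""
    rng = np.random.default_rng(seed)
    kept = list(range(X.shape[1]))
    while len(kept) > n_keep:
        model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                             random_state=seed).fit(X[:, kept], y)
        scores = relevance(model, X[:, kept], y, rng)
        # Drop the least relevant input; note the wrapper ranks by
        # sensitivity alone and never scores held-out generalisation.
        kept.pop(int(np.argmin(scores)))
    return kept
```

As in the abstract, the wrapper above is guided purely by the relevance measure: it never computes a generalisation estimate on held-out data, which is exactly the property whose effect the paper evaluates.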




Author information

Authors

H.F. Satizábal M. and A. Pérez-Uribe

Editor information

Joaquim Marques de Sá, Luís A. Alexandre, Włodzisław Duch, Danilo Mandic (eds)


Copyright information

© 2007 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Satizábal M., H.F., Pérez-Uribe, A. (2007). Relevance Metrics to Reduce Input Dimensions in Artificial Neural Networks. In: de Sá, J.M., Alexandre, L.A., Duch, W., Mandic, D. (eds) Artificial Neural Networks – ICANN 2007. ICANN 2007. Lecture Notes in Computer Science, vol 4668. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-74690-4_5

  • DOI: https://doi.org/10.1007/978-3-540-74690-4_5

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-74689-8

  • Online ISBN: 978-3-540-74690-4

  • eBook Packages: Computer Science (R0)
