
A comparative study of neural-network feature weighting

Published in: Artificial Intelligence Review

Abstract

Many feature weighting methods have been proposed in recent years to evaluate feature saliency. Neural-network (NN) feature weighting is a supervised method founded on the mapping from input features to output decisions: it is implemented by evaluating the sensitivity of the network's outputs to its inputs. Through training on sample data, a NN implicitly encodes the saliencies of the input features. The partial derivatives of the outputs with respect to the inputs of the trained NN are then calculated to measure the sensitivity of the outputs to each input feature, turning the NN's implicit feature weighting into explicit feature weights. The purpose of this paper is to probe further into the principle of NN feature weighting and to evaluate its performance through a comparative study against state-of-the-art weighting methods under the same working conditions. The study is motivated by the lack of direct and comprehensive comparisons of the NN feature weighting method. Experiments on UCI repository data sets, face data sets and self-built data sets show that NN feature weighting achieves superior performance under different conditions and has promising prospects. Compared with the other existing methods, NN feature weighting can be applied in more complex conditions, provided that a NN can work in those conditions: the output (decision) data may be labels, reals or integers, and in particular feature weights can be computed without discretizing continuous outputs.
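The sensitivity principle described above can be sketched in a few lines. The following is an illustrative approximation only, not the paper's implementation: it substitutes central finite differences for exact partial derivatives and uses scikit-learn's `MLPClassifier` on the Iris data set as a stand-in for the authors' trained network. The helper name `feature_weights` is hypothetical.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Train a small NN on standardized features (illustrative stand-in network).
X, y = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)

def feature_weights(model, X, eps=1e-4):
    """Approximate |d output / d input_j| per feature j by central
    differences on predict_proba, averaged over samples and classes."""
    sens = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        Xp, Xm = X.copy(), X.copy()
        Xp[:, j] += eps
        Xm[:, j] -= eps
        diff = model.predict_proba(Xp) - model.predict_proba(Xm)
        sens[j] = np.mean(np.abs(diff)) / (2 * eps)
    return sens / sens.sum()  # normalize so the weights sum to 1

w = feature_weights(net, X)
print(w)
```

The finite-difference step is a convenience here; with a framework that exposes automatic differentiation, the exact input gradients of the trained network would be used instead, as the abstract describes.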



Acknowledgements

The authors would like to thank the editor and the anonymous reviewers for their valuable comments and constructive suggestions. This paper is jointly supported by the National Natural Science Foundation of China (No. 61672522), the National Natural Science Foundation and Shanxi Provincial People’s Government Jointly Funded Project of China for Coal Base and Low Carbon (No. U1510115) and the China Postdoctoral Science Foundation (No. 2016M601910).

Corresponding author

Correspondence to Tongfeng Sun.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Cite this article

Sun, T., Ding, S., Li, P. et al. A comparative study of neural-network feature weighting. Artif Intell Rev 52, 469–493 (2019). https://doi.org/10.1007/s10462-019-09700-z
