
Efficient Large Margin-Based Feature Extraction

Published in: Neural Processing Letters

Abstract

Supervised feature extraction seeks a discriminative low-dimensional space in which samples of the same class cluster tightly and samples of different classes stay far apart. For most algorithms, it is difficult to push samples that lie on the class margin or inside another class (called hard samples in this paper) toward their own class during the transformation, and these hard samples frequently degrade performance. Handling them is therefore essential for an efficient method, yet few methods in recent years have been proposed specifically to address them. In this study, large margin nearest neighbor (LMNN) and the weighted local modularity (WLM) of complex networks are introduced to deal with these hard samples, pushing them toward their class quickly while shrinking the same-labeled samples toward the class as a whole, which yields small within-class distances and a large margin between classes. Combining WLM with LMNN, a novel feature extraction method named WLMLMNN is proposed, which takes into account both the global and local consistency of the input data in the projected space. Comparative experiments against other popular methods on various real-world data sets demonstrate the effectiveness of the proposed method.
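The WLMLMNN formulation itself is not reproduced in this preview, but the LMNN component it builds on has a standard objective: a pull term that shrinks distances to same-class target neighbors, and a hinge push term that penalizes differently labeled "impostors" invading a unit margin. The NumPy sketch below illustrates that objective for a fixed linear map; the function name `lmnn_loss`, the weighting `mu`, and the toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lmnn_loss(L, X, y, k=1, mu=0.5):
    """LMNN-style objective for a linear map L (d_out x d_in).

    Pull term: squared distances to the k same-class target neighbors.
    Push term: hinge penalty for differently labeled impostors that
    fall within a unit margin of each target-neighbor distance.
    """
    Z = X @ L.T                                        # project samples
    n = len(X)
    # pairwise squared distances in the projected space
    D = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)

    pull, push = 0.0, 0.0
    for i in range(n):
        same = np.where((y == y[i]) & (np.arange(n) != i))[0]
        targets = same[np.argsort(D[i, same])[:k]]     # target neighbors
        for j in targets:
            pull += D[i, j]
            # impostors: differently labeled points inside the margin
            imp = (y != y[i]) & (D[i] < D[i, j] + 1.0)
            push += np.maximum(0.0, 1.0 + D[i, j] - D[i, imp]).sum()
    return (1 - mu) * pull + mu * push

# toy check: two well-separated classes under the identity map,
# so the hinge (push) term vanishes and only the pull term remains
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
y = np.array([0, 0, 1, 1])
loss = lmnn_loss(np.eye(2), X, y)   # 0.5 * (4 * 0.01) = 0.02
```

In the full LMNN method this objective is minimized over L (or over the Mahalanobis matrix L.T @ L as a semidefinite program); the sketch only evaluates it, which is enough to see how hard samples — impostors inside the margin — dominate the push term.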




Author information

Correspondence to Yan Wu.


About this article


Cite this article

Zhao, G., Wu, Y. Efficient Large Margin-Based Feature Extraction. Neural Process Lett 50, 1257–1279 (2019). https://doi.org/10.1007/s11063-018-9920-7
