
Adaptively weighted learning for twin support vector machines via Bregman divergences

  • Original Article
  • Published:
Neural Computing and Applications

Abstract

Several versions of weighted (twin) support vector machines have been developed to handle contaminated data. However, the sample weights are generally obtained in advance from prior knowledge of the data. This article develops an adaptively weighted twin support vector machine via Bregman divergences. To better handle contaminated data, we employ an insensitive loss function to control the fitting error of the samples in one class and introduce a weight (fuzzy membership) for each sample into the proposed model. Owing to the structure of the model, it is solved by an alternating optimization technique: an accelerated first-order method solves the resulting quadratic programming problem, and the fuzzy membership of each sample is obtained analytically in the case of Bregman divergences. Experiments on several data sets show that our method achieves better classification performance than previous methods, especially in the open-set experiment.
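The alternating scheme described above can be sketched in a few lines. The sketch below is illustrative only: it assumes the KL divergence (to the uniform distribution) as the Bregman divergence, which yields the closed-form exponential weight update, and it substitutes a weighted least-squares fit for the paper's twin-SVM quadratic programming subproblem. All function names and the parameter `gamma` are hypothetical, not taken from the paper.

```python
import numpy as np

def kl_weight_update(losses, gamma):
    # Analytic weight update when the Bregman divergence is the KL
    # divergence to the uniform distribution (an illustrative choice;
    # the paper treats a general Bregman divergence):
    #   min_w  sum_i w_i * loss_i + gamma * sum_i w_i * log(n * w_i)
    #   s.t.   sum_i w_i = 1,  w_i >= 0
    # has the closed-form solution w_i proportional to exp(-loss_i / gamma).
    w = np.exp(-(losses - losses.min()) / gamma)  # shift for numerical stability
    return w / w.sum()

def weighted_fit(X, y, w, reg=1e-3):
    # Weighted regularized least-squares fit of a hyperplane; a simple
    # stand-in for the twin-SVM subproblem (a QP in the paper, solved
    # there by an accelerated first-order method).
    A = np.hstack([X, np.ones((X.shape[0], 1))])
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A + reg * np.eye(A.shape[1]),
                           A.T @ W @ y)

def adaptively_weighted_fit(X, y, gamma=0.5, n_iter=20):
    # Alternating optimization: fix the weights and fit the model,
    # then fix the model and update the weights analytically.
    n = X.shape[0]
    w = np.full(n, 1.0 / n)  # start from uniform weights
    for _ in range(n_iter):
        beta = weighted_fit(X, y, w)
        pred = np.hstack([X, np.ones((n, 1))]) @ beta
        w = kl_weight_update((y - pred) ** 2, gamma)
    return beta, w
```

With this update, samples with large fitting error (e.g., outliers in contaminated data) receive exponentially small weights, so subsequent model fits are driven by the clean samples.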



Acknowledgements

This work was partially supported by the Fundamental Research Funds for the Central Universities (2015XKMS084).

Author information

Corresponding author

Correspondence to Zhizheng Liang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflicts of interest.

About this article

Cite this article

Liang, Z., Zhang, L., Liu, J. et al. Adaptively weighted learning for twin support vector machines via Bregman divergences. Neural Comput & Applic 32, 3323–3336 (2020). https://doi.org/10.1007/s00521-018-3843-0
