Multiclass multiple kernel learning using hypersphere for pattern recognition

Published in: Applied Intelligence

Abstract

Confronted with abundant information from multiple targets and multiple sources, a bionic brain or robot with limited capacity must learn and decide efficiently; this is a central problem in the bionic-technology field. Several multiclass multiple kernel learning (MMKL) algorithms are proposed here in place of single-kernel methods. They combine multiple kernels corresponding to different notions of similarity or to different feature subsets, avoid explicit kernel-parameter selection, and fuse the complementary strengths of the individual kernels. The core of these algorithms is the small sphere and large margin (SSLM) approach with a hypersphere boundary, which combines the advantages of the support vector machine (SVM) and support vector data description (SVDD): it makes the volume of the sphere as small as possible and the margin as large as possible, i.e., it minimizes the within-class divergence like SVDD and maximizes the between-class margin like SVM. Meanwhile, the one-class nature of SSLM relieves the problem of data imbalance. In addition, the one-against-all strategy is adopted for multiclass recognition, which yields a notable improvement in recognition accuracy. Numerical experiments on three publicly available UCI datasets demonstrate that using multiple kernels instead of a single one is useful and promising. These MMKL algorithms are well suited to the classification and recognition of multiple targets and sources in artificial intelligence applications.
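The weighted combination of base kernels described above can be sketched as follows. This is a minimal illustration only, assuming Gaussian (RBF) base kernels at several widths and uniform initial weights; the kernel choices and names are not taken from the paper, and in the actual algorithms the weights η would be learned rather than fixed.

```python
import numpy as np

def rbf_kernel(X, gamma):
    # Gaussian kernel from pairwise squared Euclidean distances.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def combined_kernel(X, gammas, eta):
    # Weighted sum of base kernels: K = sum_m eta_m * K_m.
    Ks = [rbf_kernel(X, g) for g in gammas]
    return sum(e * K for e, K in zip(eta, Ks))

X = np.random.RandomState(0).randn(5, 3)
gammas = [0.1, 1.0, 10.0]                  # three notions of similarity
eta = np.ones(len(gammas)) / len(gammas)   # uniform weights (illustrative)
K = combined_kernel(X, gammas, eta)
```

Because each RBF base kernel is positive semidefinite and the weights are non-negative, the combined matrix K is itself a valid kernel, which is what allows it to be plugged into an SVM/SVDD-style solver unchanged.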

(Figures 1–3 appear in the full published article.)



Acknowledgements

The authors gratefully acknowledge the helpful comments and suggestions of the reviewers. This work was supported by the National Natural Science Foundation of China (No. 61372159).

Author information

Corresponding author

Correspondence to Yu Guo.

Appendix

1.1 Proof of (17)

Proof

The solution of coefficients can be rewritten as \({{\eta }^{*}}=\underset {{{\left \| \eta \right \|}_{2}}= 1}{\mathop {\arg \max }}\,\frac {{{\left ({{\eta }^{T}}a \right )}^{2}}}{{{\eta }^{T}}M\eta }\).

Let \(u={{M}^{{1}/{2}\;}}\eta \); then

$$\begin{array}{@{}rcl@{}} {{\eta }^{*}}&=&\underset{{{\left\| \eta \right\|}_{2}}= 1}{{\arg \max }}\,\frac{{{\eta }^{T}}a{{a}^{T}}\eta }{{{\eta }^{T}}M\eta }\Rightarrow {{u}^{*}}\\ &=&\underset{{{\left\| {{M}^{-{1}/{2}\;}}u \right\|}_{2}}= 1}{{\arg \max }}\,\frac{{{u}^{T}}\left[ {{M}^{{-1}/{2}\;}}a{{a}^{T}}{{M}^{{-1}/{2}\;}} \right]u}{{{u}^{T}}u} \end{array} $$
(35)

Rewrite (35) as:

$$\begin{array}{@{}rcl@{}} {{u}^{*}}&=&\underset{{{\left\| {{M}^{-{1}/{2}\;}}u \right\|}_{2}}= 1}{{\arg \max }}\,\frac{{{\left[ {{u}^{T}}{{M}^{{-1}/{2}\;}}a \right]}^{2}}}{{{\left\| u \right\|}^{2}}}\\ &=&\underset{{{\left\| {{M}^{-{1}/{2}\;}}u \right\|}_{2}}= 1}{{\arg \max }}\,{{\left[ {{\left( \frac{u}{\left\| u \right\|} \right)}^{T}}{{M}^{{-1}/{2}\;}}a \right]}^{2}} \end{array} $$
(36)

Hence, \({{u}^{*}}\propto {{M}^{{-1}/{2}\;}}a\), with \({{\left\| {{M}^{{-1}/{2}\;}}{{u}^{*}} \right\|}_{2}}= 1\), which leads to \({{\eta }^{*}}={{M}^{{-1}/{2}\;}}{{u}^{*}}=\frac{{{M}^{-1}}a}{\left\| {{M}^{-1}}a \right\|}\).□
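The closed form above can be checked numerically. In the sketch below (illustrative only: \(M\) and \(a\) are randomly generated here, not taken from the paper), the maximizer of \({\left( {{\eta }^{T}}a \right)}^{2}/{{\eta }^{T}}M\eta \) over unit-norm \(\eta \) is computed as a normalized solution of \(M\eta =a\), and no randomly sampled direction attains a larger objective value:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
M = B @ B.T + 4.0 * np.eye(4)     # symmetric positive definite (illustrative)
a = rng.standard_normal(4)

def ratio(eta):
    # Objective (eta^T a)^2 / (eta^T M eta); invariant to rescaling eta.
    return (eta @ a) ** 2 / (eta @ M @ eta)

eta_star = np.linalg.solve(M, a)          # direction of the maximizer
eta_star /= np.linalg.norm(eta_star)      # enforce the unit-norm constraint

# Compare against many random unit-norm directions.
best_random = max(ratio(v / np.linalg.norm(v))
                  for v in rng.standard_normal((2000, 4)))
```

At the maximizer the objective equals \({{a}^{T}}{{M}^{-1}}a\), since the numerator \({{\left( {{a}^{T}}{{M}^{-1}}a \right)}^{2}}\) is divided by the denominator \({{a}^{T}}{{M}^{-1}}a\); the scale-invariance of the ratio is why only the direction of \(\eta \) matters.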


Cite this article

Guo, Y., Xiao, H. Multiclass multiple kernel learning using hypersphere for pattern recognition. Appl Intell 48, 2746–2754 (2018). https://doi.org/10.1007/s10489-017-1111-0
