
Online chaotic time series prediction using unbiased composite kernel machine via Cholesky factorization

  • Original Paper
  • Published in: Soft Computing

Abstract

Kernel methods have proved to be effective machine learning tools in many fields. Support vector machines built on different kernel functions can perform quite differently, because kernels fall into two types: local kernels and global kernels. A composite kernel, which combines the two and yields more stable results and good precision in classification and regression, is therefore a natural choice. To reduce the computational complexity of online kernel-machine modeling, an unbiased least squares support vector regression (LSSVR) model with a composite kernel is proposed. The bias term of LSSVR is eliminated by reformulating the structural risk, which greatly simplifies the computation of the regression coefficients. At the same time, introducing the composite kernel into the LSSVR allows the model to adapt easily to the irregular variations of chaotic time series. For real-time performance, an online learning algorithm based on Cholesky factorization is designed, exploiting the structure of the extended kernel matrix. Experimental results indicate that the unbiased composite-kernel LSSVR is effective for online time series containing both steep and smooth variations: it tracks the dynamic character of the series with good prediction precision, generalization and stability. The algorithm also saves considerable computation time compared with methods based on matrix inversion, although it is slightly slower than single-kernel models.
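
To make the described scheme concrete, below is a minimal Python sketch of the two ingredients named in the abstract: an unbiased composite-kernel LSSVR whose training reduces to solving the symmetric positive-definite system (K + I/gamma) * alpha = y with a Cholesky factor, and an online step that grows that factor by one row when a new sample arrives. This is an illustrative reconstruction under stated assumptions, not the paper's implementation; all names and parameter values (the mixing weight lam, the RBF width sigma, the polynomial degree, gamma) are placeholders.

# Hedged sketch: unbiased composite-kernel LSSVR with a growing Cholesky factor.
# Assumptions: composite kernel = lam * RBF + (1 - lam) * polynomial; no bias term,
# so alpha solves (K + I/gamma) alpha = y. Parameter values are illustrative only.
import numpy as np
from scipy.linalg import solve_triangular


def composite_kernel(X1, X2, lam=0.5, sigma=1.0, degree=2):
    """Convex combination of a local (RBF) and a global (polynomial) kernel."""
    sq = (np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :]
          - 2.0 * X1 @ X2.T)
    rbf = np.exp(-sq / (2.0 * sigma**2))
    poly = (1.0 + X1 @ X2.T) ** degree
    return lam * rbf + (1.0 - lam) * poly


class UnbiasedCompositeLSSVR:
    def __init__(self, gamma=100.0, **kparams):
        self.gamma = gamma          # regularization parameter 1/gamma on the diagonal
        self.kparams = kparams      # composite-kernel hyperparameters (lam, sigma, degree)

    def fit(self, X, y):
        # Batch training: factor A = K + I/gamma once, then solve for alpha.
        self.X = np.asarray(X, float)
        self.y = np.asarray(y, float)
        A = composite_kernel(self.X, self.X, **self.kparams)
        A[np.diag_indices_from(A)] += 1.0 / self.gamma
        self.L = np.linalg.cholesky(A)          # A = L L^T, L lower triangular
        self._solve_alpha()
        return self

    def _solve_alpha(self):
        # Two triangular solves instead of an explicit matrix inverse.
        z = solve_triangular(self.L, self.y, lower=True)
        self.alpha = solve_triangular(self.L.T, z, lower=False)

    def update(self, x_new, y_new):
        # Online step: the extended kernel matrix gains one row/column, so the
        # Cholesky factor is grown by one forward substitution rather than
        # refactorized (or inverted) from scratch.
        x_new = np.atleast_2d(x_new)
        k = composite_kernel(self.X, x_new, **self.kparams).ravel()
        k_nn = composite_kernel(x_new, x_new, **self.kparams)[0, 0] + 1.0 / self.gamma
        c = solve_triangular(self.L, k, lower=True)
        d = np.sqrt(k_nn - c @ c)
        n = self.L.shape[0]
        L_new = np.zeros((n + 1, n + 1))
        L_new[:n, :n] = self.L
        L_new[n, :n] = c
        L_new[n, n] = d
        self.L = L_new
        self.X = np.vstack([self.X, x_new])
        self.y = np.append(self.y, y_new)
        self._solve_alpha()

    def predict(self, X):
        # Unbiased model: f(x) = sum_i alpha_i k(x_i, x), no bias term added.
        return composite_kernel(np.atleast_2d(X), self.X, **self.kparams) @ self.alpha


# Example usage on a streaming series (toy setting, hypothetical data):
#   model = UnbiasedCompositeLSSVR(gamma=100.0, lam=0.7, sigma=1.5)
#   model.fit(X_init, y_init)
#   for x_t, y_t in stream:
#       y_hat = model.predict(x_t)
#       model.update(x_t, y_t)

Solving with a stored Cholesky factor and extending it by forward substitution avoids re-inverting the growing kernel matrix at every step, which is the kind of saving over inversion-based online methods that the abstract refers to.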



Acknowledgments

This work was jointly supported by the National Science Fund for Distinguished Young Scholars (Grant No. 60625304), the National Natural Science Foundation of China (Grant Nos. 60621062, 60504003), the National Key Project for Basic Research of China (Grant No. 2007CB311003) and the Specialized Research Fund for the Doctoral Program of Higher Education (Grant No. 20050003049).

Author information

Corresponding author: Hongqiao Wang.

About this article

Cite this article

Wang, H., Sun, F., Cai, Y. et al. Online chaotic time series prediction using unbiased composite kernel machine via Cholesky factorization. Soft Comput 14, 931–944 (2010). https://doi.org/10.1007/s00500-009-0479-0
