
Boosting-SVM: effective learning with reduced data dimension

Abstract

Learning problems over high-dimensional data are common in real-world applications. In this study, a challenging, large and lifelike database, the German Traffic Sign Recognition Benchmark containing 43 classes and 51,840 images, is used to demonstrate the strength of the proposed boosted support vector machine with a deep learning architecture. Traffic sign recognition is difficult: it involves many categories, includes subsets of classes that look very similar to one another, and exhibits large within-class variation in visual appearance caused by illumination changes, partial occlusions, rotations and weather conditions. By combining a low-variance-error boosting algorithm, a low-bias-error support vector machine and a deep learning architecture, an efficient and effective boosting support vector machine (Boosting-SVM) method is presented. The method greatly reduces data dimension and builds classification models with higher prediction accuracy while utilizing fewer features and training instances. In evaluation, it outperforms AdaBoost.M1, cw-Boost and the support vector machine, achieving ultra-fast processing (0.0038 s per prediction) and high accuracy (93.5 %) on separate test data while using less than 35 % of the training instances. Moreover, the method runs on a standard standalone PC and does not require supercomputers with enormous memory.
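As a rough illustration of the general idea described above (boosting with SVM weak learners), the sketch below uses AdaBoost.M1-style reweighting over linear SVMs in scikit-learn. It is not the paper's Boosting-SVM, cw-Boost, or deep-architecture pipeline, and it does not perform the paper's feature or instance reduction; the toy digits dataset (standing in for GTSRB), the regularization constant C, and the number of boosting rounds are all assumptions made for the example.

```python
# Minimal sketch only: AdaBoost.M1-style boosting with linear SVM weak learners.
# NOT the paper's Boosting-SVM / cw-Boost method; dataset and hyperparameters
# below are illustrative assumptions.
from sklearn.datasets import load_digits            # small stand-in; not GTSRB
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Placeholder multi-class data (10 digit classes, 64 features per image).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Linear SVMs as weak learners; the "SAMME" variant only needs class
# predictions, so LinearSVC needs no probability calibration.
weak_learner = LinearSVC(C=0.01, max_iter=5000)
booster = AdaBoostClassifier(estimator=weak_learner,   # scikit-learn >= 1.2
                             n_estimators=10,
                             algorithm="SAMME")
booster.fit(X_train, y_train)
print("held-out accuracy: %.3f" % booster.score(X_test, y_test))
```

A weakly regularized linear SVM (small C) is used so that each round remains a weak learner and boosting has room to improve the ensemble; the paper's method additionally exploits the boosting stage to cut the number of features and training instances needed.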


Acknowledgement

The authors thank the anonymous reviewers for their valuable comments. This work is partially supported by the National Science Council (NSC-101-2628-E-011-006-MY3).

Author information

Corresponding author

Correspondence to Ching-Wei Wang.

About this article

Cite this article

Wang, CW., You, WH. Boosting-SVM: effective learning with reduced data dimension. Appl Intell 39, 465–474 (2013). https://doi.org/10.1007/s10489-013-0425-9
