Forming Ensembles of Soft One-Class Classifiers with Weighted Bagging

Abstract

For many real-life problems, obtaining representative examples of one class is relatively easy, while obtaining examples of the remaining classes is difficult or even impossible. We would nevertheless like to construct a pattern classifier that can distinguish the known cases from unknown ones. Such problems are known as one-class classification, or learning in the absence of counterexamples. These recognition systems must be highly robust to new, unseen objects that may belong to an unknown class, which is why ensemble learning has become an attractive approach in this field. In this work, we propose a novel one-class ensemble classifier based on weighted Bagging. The Wagging method is used to generate randomized instance weights, which are then used directly to train Weighted One-Class Support Vector Machines. This introduces diversity into the pool of one-class classifiers and extends the competence of the formed ensemble. Additionally, to discard similar or weak classifiers, we add a clustering-based pruning procedure to the ensemble: it measures the similarity between the weight vectors used by the base models, detects groups of similar predictors, and reduces the pool by selecting a single representative of each cluster. Experimental analysis, carried out on a number of benchmarks and backed up with statistical tests, shows that the proposed method can outperform state-of-the-art ensembles dedicated to one-class classification.
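
As a rough illustration of the pipeline described above, the sketch below trains a pool of Weighted One-Class SVMs on Wagging-style randomized instance weights and then prunes the pool by clustering those weight vectors, keeping one representative per cluster. It is only a minimal sketch, not the paper's reference implementation: scikit-learn and NumPy are assumed libraries, the exponential weight distribution and k-means clustering stand in for whichever Wagging variant and clustering algorithm the paper actually uses, and all function names are hypothetical.

import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.cluster import KMeans

def train_wagged_ocsvm_pool(X, n_models=30, n_clusters=5, nu=0.1, seed=0):
    """Train a Wagging-style pool of weighted one-class SVMs and prune it by
    clustering the random weight vectors (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    models, weights = [], []
    for _ in range(n_models):
        # Wagging: keep every training object, randomize only its weight.
        w = rng.exponential(scale=1.0, size=len(X))
        clf = OneClassSVM(nu=nu, gamma="scale").fit(X, sample_weight=w)
        models.append(clf)
        weights.append(w)
    W = np.vstack(weights)
    # Pruning: classifiers trained on near-identical weightings fall into the
    # same cluster; keep only the member closest to each cluster centre.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(W)
    pruned = []
    for c in range(n_clusters):
        members = np.flatnonzero(km.labels_ == c)
        if members.size == 0:
            continue
        dists = np.linalg.norm(W[members] - km.cluster_centers_[c], axis=1)
        pruned.append(models[members[np.argmin(dists)]])
    return pruned

def ensemble_predict(models, X):
    # Majority vote over the pruned pool: +1 means target class, -1 means outlier.
    votes = np.stack([m.predict(X) for m in models])
    return np.where(votes.sum(axis=0) >= 0, 1, -1)

For example, pruned = train_wagged_ocsvm_pool(X_train) followed by ensemble_predict(pruned, X_test) yields a single +1/-1 decision per test object; the combination rule here is a plain majority vote, whereas soft one-class classifiers could equally be fused by averaging their support values.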

Author information

Correspondence to Bartosz Krawczyk.

Additional information

This work was partially supported by the Polish National Science Centre under the PRELUDIUM grant no. DEC-2013/09/N/ST6/03504.

About this article

Cite this article

Krawczyk, B. Forming Ensembles of Soft One-Class Classifiers with Weighted Bagging. New Gener. Comput. 33, 449–466 (2015). https://doi.org/10.1007/s00354-015-0406-0
