
Batch-mode active ordinal classification based on expected model output change and leadership tree

Published in: Applied Intelligence

Abstract

While numerous batch-mode active learning (BMAL) methods have been developed for nominal classification, no BMAL method tailored to ordinal classification exists. This paper proposes an effective BMAL method for ordinal classification and argues that a BMAL method should guarantee that the instances selected in each iteration are highly informative, diverse from the labeled instances, and diverse from each other. We first introduce an expected model output change criterion based on a kernel extreme learning machine ordinal classification model and show that this criterion is a composite of an informativeness assessment and a diversity assessment. Selecting instances that score highly on the criterion ensures that they are informative and diverse from the labeled instances. To ensure that the selected instances are also diverse from each other, we propose a leadership tree-based batch instance selection approach inspired by the density peak clustering algorithm. Our BMAL method thus selects a batch of peak-scoring points from different high-scoring regions in each iteration. The effectiveness of the proposed method is examined empirically through comparisons with several state-of-the-art BMAL methods.
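The density-peak idea behind the batch selection step can be sketched as follows. Assuming an informativeness score has already been computed for every unlabeled point (in the paper this would be the expected model output change criterion), a point is a "peak" when it both scores highly and lies far from any higher-scoring point. The function below is an illustrative sketch of that selection rule, not the authors' exact leadership-tree procedure; `X`, `scores`, and `select_batch` are names introduced here for illustration.

```python
import numpy as np

def select_batch(X, scores, batch_size):
    """Pick a batch of peak-scoring, mutually distant points, in the
    spirit of density-peak clustering (illustrative sketch only)."""
    n = len(scores)
    # Pairwise Euclidean distances between all pool points.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    order = np.argsort(-scores)          # indices by decreasing score
    delta = np.empty(n)
    # The top-scoring point has no "leader"; give it the largest distance.
    delta[order[0]] = d[order[0]].max()
    for rank in range(1, n):
        i = order[rank]
        higher = order[:rank]            # all points with a higher score
        delta[i] = d[i, higher].min()    # distance to the nearest leader
    # Peaks combine a high score with a large distance to any better point,
    # so the chosen batch spreads across different high-scoring regions.
    gamma = scores * delta
    return np.argsort(-gamma)[:batch_size]
```

With two well-separated clusters and the highest scorer in each, the rule returns one point per cluster rather than two neighbors from the same region, which is exactly the "diverse from each other" property the abstract requires.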


[The article contains Figs. 1–11 and Algorithm 1.]


Data Availability

The datasets and code are available at https://github.com/DeniuHe/EMOC_LT.


Acknowledgements

This work was supported by the Scientific Research Fund Sponsored Project of Guangxi Minzu University under Grant No. 2024KJQD10.

Author information

Authors and Affiliations

Authors

Contributions

Deniu He: Conceptualization, Methodology, Software, Investigation, Data Curation, Validation, Writing - Original Draft, Visualization. Naveed Taimoor: Writing - Review & Editing.

Corresponding author

Correspondence to Deniu He.

Ethics declarations

Ethical Approval

Not applicable.

Consent for Publication

The authors have approved the manuscript and agreed to its publication.

Competing interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

He, D., Taimoor, N. Batch-mode active ordinal classification based on expected model output change and leadership tree. Appl Intell 55, 267 (2025). https://doi.org/10.1007/s10489-024-06152-z
