
Maximum density minimum redundancy based hypergraph regularized support vector regression

  • Original Article
  • Published in: International Journal of Machine Learning and Cybernetics

Abstract

Semi-supervised learning has attracted great attention in machine learning because it makes full use of both labeled and unlabeled data during training. Most semi-supervised learning methods, however, are not suitable for regression, because the labels in regression are real-valued and vary smoothly. In this paper, a hypergraph, rather than an ordinary graph, is used to represent the geometric structure of the data. A manifold regularization term is constructed from the hypergraph Laplacian and introduced into the regularization framework of kernel learning, yielding a hypergraph regularized support vector regression (HGSVR). Moreover, we propose a two-layer maximum density minimum redundancy (MDMR) method to pre-select the initial labeled data, which fully accounts for both the density and the redundancy of the data. Introducing this pre-selection method into HGSVR gives a second semi-supervised regression method, MDMR-HGSVR. Experimental results on 9 UCI datasets show that HGSVR and MDMR-HGSVR outperform the other compared semi-supervised regression methods.
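The manifold regularization term described above is built from the hypergraph Laplacian. As a rough illustration of the idea (not the authors' implementation), the normalized hypergraph Laplacian in the style of Zhou et al. is L = I − Dv^{-1/2} H W De^{-1} Hᵀ Dv^{-1/2}, where H is the vertex-hyperedge incidence matrix, W the diagonal matrix of hyperedge weights, and Dv, De the vertex and hyperedge degree matrices; the regularizer is then the quadratic form fᵀLf on the predicted outputs. The function name and toy data below are purely illustrative:

```python
import numpy as np

def hypergraph_laplacian(H, w=None):
    """Normalized hypergraph Laplacian (Zhou et al. style):
    L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2},
    where H is the (n_vertices x n_edges) incidence matrix
    and w holds the hyperedge weights (default: all ones)."""
    n, m = H.shape
    w = np.ones(m) if w is None else np.asarray(w, dtype=float)
    dv = H @ w                       # weighted vertex degrees
    de = H.sum(axis=0)               # hyperedge degrees (vertices per edge)
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    De_inv = np.diag(1.0 / de)
    Theta = Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt
    return np.eye(n) - Theta

# Toy incidence matrix: 4 vertices, 2 hyperedges
# (hyperedge 0 contains vertices 0,1,2; hyperedge 1 contains vertices 2,3)
H = np.array([[1, 0],
              [1, 0],
              [1, 1],
              [0, 1]], dtype=float)
L = hypergraph_laplacian(H)

f = np.array([1.0, 1.0, 0.5, 0.0])  # a candidate regression output vector
smoothness = f @ L @ f              # hypergraph manifold regularizer f^T L f
```

In a kernel-learning objective of the kind the abstract describes, a term proportional to `smoothness` would be added to the SVR loss, penalizing predictions that vary sharply across vertices sharing a hyperedge.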


Data availability

Some or all data, models, or code generated or used during the study are available from the corresponding author by request (Yuting Sun).


Acknowledgements

This work was supported by the National Natural Science Foundation of China (Nos. 61976216 and 62276265).

Author information

Correspondence to Yuting Sun or Jian Zhang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Ding, S., Sun, Y., Zhang, J. et al. Maximum density minimum redundancy based hypergraph regularized support vector regression. Int. J. Mach. Learn. & Cyber. 14, 1933–1950 (2023). https://doi.org/10.1007/s13042-022-01738-w

