
Similarity graph-based max-flow and duality approaches for semi-supervised data classification and image segmentation

  • Original Article
  • Published in: International Journal of Machine Learning and Cybernetics

Abstract

The max-flow problem entails computing a maximum feasible flow from a source to a sink through a network subject to capacity constraints. Its connection to total variation makes it possible to apply the problem to machine learning tasks in a similarity graph-based setting. In this paper, we integrate max-flow and duality techniques, similarity graph-based frameworks, semi-supervised procedures, class size information and class homogeneity terms to derive three algorithms for machine learning tasks such as classification and image segmentation. The first algorithm involves similarity graph-based max-flow incorporating supervised constraints and class size information. The second involves a duality approach and global minimization of similarity graph-based total variation problems incorporating class size information. The third involves graph-based convex optimization via max-flow techniques for image segmentation problems in which the region parameters are unknown. An important advantage of the methods is that they require only a small set of labeled samples for good accuracy, in part due to the integration of graph-based and semi-supervised techniques; this matters because labeled data is often scarce. Moreover, some of the proposed algorithms are based on global minimization and are able to incorporate class size information, which often improves performance. In addition, the methods perform well on both large and small data sets, the latter of which can lead to poor performance for learning methods due to a reduced ability to learn from observed data. The proposed methods are validated on benchmark data sets and compare favorably to recent methods.
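
A minimal illustration of the underlying construction may help here. The sketch below (in Python, using numpy and networkx) builds a Gaussian-weighted k-nearest-neighbor similarity graph, ties each labeled sample to a source or sink node through a high-capacity edge, and classifies the unlabeled samples by the side of the s-t minimum cut, whose value equals the maximum flow by duality. This is only a sketch of the general idea behind the first algorithm, not the method of this paper: the class size information, class homogeneity terms and duality-based global minimization described above are omitted, and the helper names, parameter values and toy data are hypothetical.

```python
# Sketch only: semi-supervised binary classification via an s-t min-cut on a
# Gaussian-weighted k-NN similarity graph (not the paper's algorithms).
import numpy as np
import networkx as nx

def knn_similarity_graph(X, k=10, sigma=1.0):
    """Directed graph with symmetric Gaussian weights w_ij = exp(-||x_i - x_j||^2 / sigma^2) on k-NN edges."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances, n x n
    G = nx.DiGraph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in np.argsort(d2[i])[1:k + 1]:               # k nearest neighbors, skipping the point itself
            w = float(np.exp(-d2[i, j] / sigma ** 2))
            G.add_edge(i, int(j), capacity=w)               # add both directions so the cut is symmetric
            G.add_edge(int(j), i, capacity=w)
    return G

def graph_mincut_classify(X, labeled_idx, labels, k=10, sigma=1.0):
    """Binary labels for all points obtained from an s-t min-cut of the similarity graph."""
    G = knn_similarity_graph(X, k, sigma)
    s, t = "source", "sink"
    big = 1e6                                               # large capacity enforces the supervised constraints
    for i, y in zip(labeled_idx, labels):
        if y == 1:
            G.add_edge(s, i, capacity=big)                  # class-1 seeds attach to the source
        else:
            G.add_edge(i, t, capacity=big)                  # class-0 seeds attach to the sink
    _, (source_side, _) = nx.minimum_cut(G, s, t)           # max-flow value equals the min-cut by duality
    return np.array([1 if i in source_side else 0 for i in range(len(X))])

# Toy usage: two Gaussian blobs with a single labeled sample per class.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, size=(50, 2)), rng.normal(2.0, 0.3, size=(50, 2))])
pred = graph_mincut_classify(X, labeled_idx=[0, 50], labels=[1, 0], k=8, sigma=0.5)
print(pred[:5], pred[-5:])  # expect mostly 1s for the first blob and 0s for the second
```

For a binary indicator vector u on the nodes, the value of such a cut equals, up to a constant factor, the weighted graph total variation Σ_ij w_ij |u_i − u_j|; this is the max-flow/total-variation connection, noted in the abstract, that the duality-based second algorithm exploits on the similarity graph.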


Data availability statement

The links to all the data sets analyzed in this paper are included via citations, and the data is also available at the repository https://github.com/kmerkurev/Data.


Funding

This work is supported in part by NSF grant DMS-2052983.

Author information


Corresponding author

Correspondence to Ekaterina Merkurjev.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Merkurjev, E. Similarity graph-based max-flow and duality approaches for semi-supervised data classification and image segmentation. Int. J. Mach. Learn. & Cyber. 14, 4285–4310 (2023). https://doi.org/10.1007/s13042-023-01894-7

