Robust Multi-view Classification with Sample Constraints

Abstract

This paper proposes a new multi-view classification method that takes three sample constraints into account to automatically assign large weights to important samples and small weights to unimportant ones. To do this, we first demonstrate that different samples contribute differently to the classification model, and then propose to consider a sample weight, a class weight, and a view weight to overcome different levels of noise. Specifically, the sample weight of every data point is obtained by penalizing its estimation error with an \(\ell _{2,1}\)-norm loss to reduce the influence of sample-level noise; the class weight of each class is obtained by taking both the misclassification cost and class imbalance into account to overcome class-level noise; and the view weight of each view is obtained by applying a square-root penalty to the estimation error of that view to reduce view-level noise. In particular, the proposed sample constraints can easily be embedded in previous multi-view learning models. Experimental results on simulated and real data sets show that the proposed method outperforms state-of-the-art classification methods in terms of the classification performance of cost-sensitive learning.
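
The abstract only sketches the three weighting mechanisms, so the following Python snippet is a minimal illustrative sketch, not the paper's actual update rules. It assumes the standard half-quadratic / iteratively re-weighted least squares treatment of an \(\ell _{2,1}\)-norm loss and of a square-root view penalty, and an inverse-frequency-times-cost rule for the class weight; all function names, the cost_matrix argument, and the numerical example are hypothetical.

    # Hypothetical sketch of the three weights described in the abstract.
    # Assumes half-quadratic / IRLS surrogates; the paper's exact objective
    # and updates may differ.
    import numpy as np

    def sample_weights(residuals, eps=1e-8):
        # l2,1-norm loss: sample i contributes ||r_i||_2, so the
        # half-quadratic surrogate gives weight w_i = 1 / (2 * ||r_i||_2).
        row_norms = np.linalg.norm(residuals, axis=1)
        return 1.0 / (2.0 * (row_norms + eps))

    def class_weights(labels, cost_matrix=None):
        # Combine class imbalance (inverse frequency) with a per-class
        # misclassification cost (row sums of a user-supplied cost matrix).
        classes, counts = np.unique(labels, return_counts=True)
        inv_freq = counts.sum() / (len(classes) * counts)
        cost = np.ones_like(inv_freq) if cost_matrix is None else cost_matrix.sum(axis=1)
        w = inv_freq * cost
        return dict(zip(classes, w / w.sum()))

    def view_weights(view_errors, eps=1e-8):
        # Square-root penalty sqrt(E_v) on the error E_v of view v; its
        # half-quadratic surrogate yields alpha_v = 1 / (2 * sqrt(E_v)).
        e = np.asarray(view_errors, dtype=float)
        return 1.0 / (2.0 * np.sqrt(e + eps))

    # Toy example: 5 samples x 3 classes of residuals, binary labels, 2 views.
    rng = np.random.default_rng(0)
    R = rng.normal(size=(5, 3))
    print(sample_weights(R))
    print(class_weights(np.array([0, 0, 0, 1, 1])))
    print(view_weights([0.8, 2.5]))

In each case the weight decreases as the corresponding residual or error grows, which is what makes noisy samples, rare or costly classes, and noisy views contribute less to the model, in line with the behaviour the abstract describes.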

Notes

  1. http://archive.ics.uci.edu/ml/.

  2. http://www.escience.cn/people/fpnie/papers.html.

  3. http://mlg.ucd.ie/datasets/bbc.html.

Acknowledgements

This work was partially supported by the Natural Science Foundation of China (Grant No. 61876046); the Natural Science Project of Guangxi Universities; the China Scholarship Council program; the Guangxi Collaborative Innovation Center of Multi-Source Information Integration and Intelligent Processing; and the Research Fund of the Guangxi Key Lab of Multi-source Information Mining & Security (18-A-01-01).

Author information

Corresponding author

Correspondence to Jian Wei.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Zhu, Y., Tan, M. & Wei, J. Robust Multi-view Classification with Sample Constraints. Neural Process Lett 54, 2589–2612 (2022). https://doi.org/10.1007/s11063-021-10483-0
