
Leverage triple relational structures via low-rank feature reduction for multi-output regression

Multimedia Tools and Applications

Abstract

Multi-output regression learns a mapping from feature variables to multiple output variables. When learning such a mapping from high-dimensional data, it is important to exploit the various inherent relational structures among observations. In this paper, we propose a new multi-output regression method that simultaneously combines a low-rank constraint, sample selection, and feature selection in a unified framework. First, we use the low-rank constraint to capture the correlations among the output variables, and impose an ℓ2,p-norm regularization on the coefficient matrix to capture the correlations between features and outputs. Second, an ℓ2,p-norm on the loss function is designed to discover the correlations between samples, so that informative samples are selected for learning the model and predictive capacity is improved. Third, orthogonal subspace learning is exploited to ensure that the multiple output variables share the same low-rank structure of the data, by rotating the result of feature selection. In addition, we propose an effective iterative optimization algorithm to obtain the optimal solution of the objective function. Finally, experiments on real datasets show that the proposed method outperforms state-of-the-art methods in terms of aCC and aRMSE.
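The ingredients described above can be illustrated with a small numerical sketch. The paper's exact objective and solver are not reproduced here; the code below is a generic IRLS-style (iteratively reweighted least squares) approximation of minimizing an ℓ2,p-norm loss on the residual matrix XW − Y (whose row weights act as sample selection) plus an ℓ2,p-norm regularizer on the coefficient matrix W (whose row weights act as feature selection), with the low-rank structure enforced by truncated SVD at each iteration. All function names and parameters (`fit`, `lam`, `rank`, `eps`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def l2p_norm(M, p):
    # sum over rows i of ||m_i||_2^p
    return np.sum(np.linalg.norm(M, axis=1) ** p)

def fit(X, Y, lam=1.0, rank=2, p=1.0, n_iter=100, eps=1e-8):
    """IRLS-style sketch: l2,p loss + l2,p regularizer + low-rank W."""
    n, d = X.shape
    W = np.zeros((d, Y.shape[1]))
    for _ in range(n_iter):
        # Row weights for the l2,p loss: small-residual samples get
        # large weight, outliers get small weight (sample selection).
        r = np.linalg.norm(X @ W - Y, axis=1)
        D = (p / 2.0) * (r ** 2 + eps) ** (p / 2 - 1)
        # Row weights for the l2,p regularizer: rows of W with small
        # norm are pushed toward zero (feature selection).
        w = np.linalg.norm(W, axis=1)
        Dw = (p / 2.0) * (w ** 2 + eps) ** (p / 2 - 1)
        # Weighted least-squares step for the current weights.
        A = (X.T * D) @ X + lam * np.diag(Dw)
        W = np.linalg.solve(A, (X.T * D) @ Y)
        # Enforce the low-rank constraint via truncated SVD.
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        s[rank:] = 0.0
        W = U @ np.diag(s) @ Vt
    return W
```

With noiseless data generated from a rank-2 coefficient matrix, this sketch recovers a low-rank W that fits the outputs closely, which is the qualitative behavior the abstract describes.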



Acknowledgments

This work was supported in part by the China “1000-Plan” National Distinguished Professorship; the National Natural Science Foundation of China (Grant Nos: 61450001, 61263035, 61573270 and 61672177); the China 973 Program (Grant No: 2013CB329404); the China Key Research Program (Grant No: 2016YFB1000905); the Guangxi Natural Science Foundation (Grant Nos: 2012GXNSFGA060004 and 2015GXNSFCB139011); the Innovation Project of Guangxi Graduate Education (Grant Nos: YCSZ2016046 and YCSZ2016045); the Guangxi Higher Institutions’ Program of Introducing 100 High-Level Overseas Talents; the Guangxi Collaborative Innovation Center of Multi-Source Information Integration and Intelligent Processing; and the Guangxi Bagui Scholar Teams for Innovation and Research Project.

Author information


Corresponding author

Correspondence to Shichao Zhang.


About this article


Cite this article

Zhang, S., Yang, L., Deng, Z. et al. Leverage triple relational structures via low-rank feature reduction for multi-output regression. Multimed Tools Appl 76, 17461–17477 (2017). https://doi.org/10.1007/s11042-016-3980-3
