Abstract
Real-world data and information can often be obtained in different ways. In computer vision, for example, an object can be described by different modalities, such as text, video, and images, and even from a variety of viewing angles. These different descriptions of the same object are usually called multi-view data. Dimensionality reduction methods, which mainly include feature selection and subspace learning, respectively offer better interpretability and more stable performance, and have become prevalent tools for high-dimensional data. However, such methods usually ignore the relationships among class indicators, so the performance of the resulting regression model is often unsatisfactory. In this paper, we integrate feature selection, low-rank constraints, and subspace learning into a unified framework. Specifically, within a linear regression model, we first apply a low-rank constraint to feature selection, which exploits two kinds of information inherent in the data: the low-rank constraint accounts for the correlations among response variables, while an embedded ℓ2,p-norm regularizer captures the correlations among the class indicators as well as between the feature vectors and their corresponding response variables. Meanwhile, we incorporate an LDA algorithm, a form of subspace learning, to further refine the feature selection results. Finally, we conduct experiments on several real multi-view image data sets, and the experimental results show that the proposed method outperforms all comparison algorithms.
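As a rough illustration of the kind of objective the abstract describes, the sketch below is a minimal, assumption-laden example rather than the authors' actual algorithm: it fits a rank-constrained regression matrix with a row-wise ℓ2,p regularizer by alternating an iteratively reweighted least-squares update with a truncated-SVD projection. The LDA-based subspace adjustment and the paper's exact optimization are omitted, and the function name low_rank_l2p_regression is hypothetical.

```python
import numpy as np

def low_rank_l2p_regression(X, Y, rank, lam, p=1.0, n_iter=100):
    """Sketch (not the paper's algorithm): approximately minimize
        ||Y - X W||_F^2 + lam * sum_i ||row_i(W)||_2^p   s.t. rank(W) <= rank
    by alternating an IRLS ridge-style update with a rank projection."""
    n, d = X.shape
    c = Y.shape[1]
    rng = np.random.default_rng(0)
    W = rng.standard_normal((d, c)) * 0.01
    for _ in range(n_iter):
        # IRLS reweighting of the l2,p row regularizer:
        # lam * sum_i ||w_i||^p is locally replaced by lam * tr(W^T D W),
        # with D_ii = (p/2) * ||w_i||^(p-2).
        row_norms = np.linalg.norm(W, axis=1) + 1e-8
        D = np.diag(0.5 * p * row_norms ** (p - 2))
        # closed-form weighted ridge update: (X^T X + lam D) W = X^T Y
        W_full = np.linalg.solve(X.T @ X + lam * D, X.T @ Y)
        # heuristic low-rank step: project onto rank-r matrices via truncated SVD
        U, s, Vt = np.linalg.svd(W_full, full_matrices=False)
        W = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return W

# toy usage on random data (purely illustrative)
X = np.random.default_rng(1).standard_normal((50, 20))
Y = np.random.default_rng(2).standard_normal((50, 4))
W = low_rank_l2p_regression(X, Y, rank=2, lam=0.1, p=1.0)
```

Rows of W with near-zero ℓ2 norm correspond to features that the regularizer has effectively discarded, which is the feature-selection effect the abstract refers to.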


Acknowledgments
This work was supported in part by the China “1000-Plan” National Distinguished Professorship; the National Natural Science Foundation of China (Grant Nos. 61263035, 61363009, 61573270 and 61672177); the China 973 Program (Grant No. 2013CB329404); the China Key Research Program (Grant No. 2016YFB1000905); the Guangxi Natural Science Foundation (Grant Nos. 2012GXNSFGA060004 and 2015GXNSFCB139011); the China Postdoctoral Science Foundation (Grant No. 2015M570837); the Innovation Project of Guangxi Graduate Education under grant YCSZ2016046; the Guangxi High Institutions’ Program of Introducing 100 High-Level Overseas Talents; the Guangxi Collaborative Innovation Center of Multi-Source Information Integration and Intelligent Processing; and the Guangxi “Bagui” Teams for Innovation and Research.
Cite this article
Hu, R., Cheng, D., He, W. et al. Low-rank feature selection for multi-view regression. Multimed Tools Appl 76, 17479–17495 (2017). https://doi.org/10.1007/s11042-016-4119-2