Abstract
Semi-supervised image classification is widely applied in various pattern recognition tasks. Label propagation, a graph-based semi-supervised learning method, is popular for solving the semi-supervised image classification problem, and its most important step is graph construction. To improve the quality of the graph, we incorporate a nonnegative constraint and noise estimation into least-squares regression (LSR), yielding a novel graph construction method called nonnegative least-squares regression (NLSR). The nonnegative constraint eliminates subtractive combinations of coefficients and improves the sparsity of the graph, while modeling both small Gaussian noise and sparse corruption improves the robustness of NLSR. Experimental results show that the nonnegative constraint is a significant component of NLSR. A weighted version of NLSR (WNLSR) is proposed to further eliminate 'bridge' edges. Local and global consistency (LGC) is adopted as the semi-supervised classification method, with the label propagation error rate as the evaluation criterion. Experiments on image datasets show encouraging results for the proposed algorithm in comparison with state-of-the-art semi-supervised image classification algorithms, and in particular a significant improvement over the LSR method.
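The pipeline described above — building a nonnegative reconstruction graph and then propagating labels with LGC — can be sketched as follows. This is a minimal illustration, not the paper's full NLSR: it uses plain nonnegative least squares per sample (the actual method additionally models Gaussian noise and sparse corruption terms, and WNLSR reweights the graph), and the helper names `nlsr_graph` and `lgc_propagate` are our own.

```python
import numpy as np
from scipy.optimize import nnls


def nlsr_graph(X):
    """Build a nonnegative reconstruction graph (simplified sketch).

    X is a d x n data matrix, one sample per column. Each sample is
    reconstructed from all other samples under a nonnegativity
    constraint; the resulting coefficients become edge weights.
    """
    n = X.shape[1]
    W = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        coef, _ = nnls(X[:, others], X[:, i])  # min ||X_others c - x_i||, c >= 0
        W[others, i] = coef
    return (W + W.T) / 2  # symmetrize for use as an affinity matrix


def lgc_propagate(W, Y, alpha=0.99):
    """Local and global consistency (Zhou et al.) in closed form:
    F = (I - alpha * S)^(-1) Y with S = D^(-1/2) W D^(-1/2).

    Y is n x c, one-hot for labeled samples and zero rows otherwise.
    Returns the predicted class index for every sample.
    """
    d = W.sum(axis=1)
    d[d == 0] = 1e-12  # guard against isolated nodes
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    n = W.shape[0]
    F = np.linalg.solve(np.eye(n) - alpha * S, Y)
    return F.argmax(axis=1)
```

With two well-separated clusters and one labeled sample per class, the nonnegative constraint keeps reconstruction weights within each cluster, so LGC propagates the two labels to the correct groups.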
Ren, WY., Tang, M., Peng, Y. et al. Semi-supervised image classification via nonnegative least-squares regression. Multimedia Systems 23, 725–738 (2017). https://doi.org/10.1007/s00530-016-0521-x