Collaborative representation induced broad learning model for classification

Abstract

The broad learning system (BLS) is a novel flat neural network that is fast and effective in various pattern recognition and classification applications, and many researchers have investigated it due to its remarkable performance. However, the feature nodes in BLS are produced by mapping the input data with random weights, which is inefficient and can lead to inferior results, since the random mapping introduces redundant and unpredictable information into the feature nodes. To resolve this issue and improve BLS, this study presents a representation-induced method, the collaborative representation induced broad learning model (CRI_BLM), which replaces the random mapping used to produce the feature nodes. The proposed method introduces the collaborative representation technique to code each input training sample as a collaborative linear combination (coding coefficient) of all dictionary samples, and then generates the enhancement nodes under the broad learning framework for classification. Compared with the original feature nodes obtained by random mapping, this approach captures more effective features for pattern recognition and classification. Extensive experiments on several datasets, together with comparisons against various classifiers, confirm that the proposed CRI_BLM is effective (e.g., achieving the best result of 96.80% on the Fifteen Scene Categories database).
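To make the pipeline described above concrete, the sketch below illustrates the core idea in NumPy: each sample is coded over a dictionary of all training samples using the closed-form ridge solution of collaborative representation, these coding coefficients stand in for the randomly mapped feature nodes, and the enhancement nodes and output weights follow the usual broad-learning recipe (random nonlinear expansion plus ridge regression). This is a minimal illustration only; the function names, the tanh activation, and all hyperparameter values (lam, reg, n_enhance) are assumptions made for readability, not the authors' exact implementation.

    import numpy as np

    def collaborative_coding(D, X, lam=1e-3):
        """Code each column of X as a collaborative linear combination of the
        dictionary columns D, via the closed-form ridge (CRC) solution."""
        # P = (D^T D + lam*I)^{-1} D^T is computed once and reused for every query.
        P = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T)
        return P @ X  # coding coefficients: one column per input sample

    def cri_blm_train(X_train, Y_train, n_enhance=500, lam=1e-3, reg=1e-8, seed=0):
        """X_train: (n_samples, n_features); Y_train: one-hot labels (n_samples, n_classes).
        Coding coefficients act as feature nodes; enhancement nodes are a random
        nonlinear expansion; output weights come from ridge regression."""
        rng = np.random.default_rng(seed)
        D = X_train.T                                  # dictionary: all training samples as columns
        Z = collaborative_coding(D, X_train.T, lam).T  # feature nodes, shape (n_samples, n_samples)
        W_e = rng.standard_normal((Z.shape[1], n_enhance))
        H = np.tanh(Z @ W_e)                           # enhancement nodes
        A = np.hstack([Z, H])
        # Ridge-regularized pseudo-inverse for the output layer, as in standard BLS.
        W_out = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y_train)
        return D, W_e, W_out

    def cri_blm_predict(X_test, D, W_e, W_out, lam=1e-3):
        Z = collaborative_coding(D, X_test.T, lam).T
        A = np.hstack([Z, np.tanh(Z @ W_e)])
        return np.argmax(A @ W_out, axis=1)            # predicted class indices

In this sketch the dictionary is simply the full training set, so the feature-node dimension grows with the number of training samples; a practical implementation might work class-wise or on a subsampled dictionary, but that choice is outside what the abstract specifies.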

Data Availability

We did not obtain ethical and informed consent since all the data are publicly available. The datasets generated and/or analysed during the current study are available at the following locations:

1. The Extended Yale B dataset: http://vision.ucsd.edu/iskwak/ExtYaleDatabase/ExtYaleB.html
2. The AR database: http://www2.ece.ohio-state.edu/aleix/ARdatabase.html
3. The Fifteen Scene Categories database: https://www.kaggle.com/datasets/zaiyankhan/15scene-dataset
4. The Fashion-MNIST database: https://github.com/zalandoresearch/fashion-mnist
5. The Kuzushiji-MNIST database: https://github.com/rois-codh/kmnist

Acknowledgements

This work was supported by the University of Macau (File no. MYRG2019-00006-FST).

Author information

Contributions

Qi Zhang: Conceptualization, Methodology, Validation, Investigation, Writing - original draft, Formal analysis. Jianhang Zhou: Methodology, Investigation, Formal analysis. Yong Xu: Methodology, Writing - review and editing. Bob Zhang: Resources, Supervision, Writing - review and editing.

Corresponding author

Correspondence to Bob Zhang.

Ethics declarations

Conflicts of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Zhang, Q., Zhou, J., Xu, Y. et al. Collaborative representation induced broad learning model for classification. Appl Intell 53, 23442–23456 (2023). https://doi.org/10.1007/s10489-023-04709-y
