Multi-Level Cascade Sparse Representation Learning for Small Data Classification


Abstract:

Deep learning (DL) methods have recently captured much attention for image classification. However, such methods may lead to suboptimal solutions for small-scale data because of the lack of training samples. Sparse representation stands out for its efficiency and interpretability, but its precision is less competitive. We develop a Multi-Level Cascade Sparse Representation (ML-CSR) learning method to combine both advantages when processing small-scale data. ML-CSR uses a pyramid structure to expand the training data size and adopts two core modules, the Error-To-Feature (ETF) module and the Generate-Adaptive-Weight (GAW) module, to further improve precision. ML-CSR calculates inter-layer differences with the ETF module to increase sample diversity and obtains adaptive weights based on per-layer accuracy with the GAW module, which helps it learn more discriminative features. State-of-the-art results on benchmark face databases validate the effectiveness of the proposed ML-CSR. Ablation experiments demonstrate that the proposed pyramid structure and the ETF and GAW modules each improve the performance of ML-CSR. The code is available at https://github.com/Zhongwenyuan98/ML-CSR.
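
The abstract describes the cascade at a high level; the sketch below shows one hypothetical way such a scheme could be wired together in Python. It is not the authors' implementation (see the GitHub link above): the OMP solver, the residual-as-feature step standing in for ETF, the margin-based weights standing in for GAW's accuracy-based weighting, and the toy data are all illustrative assumptions, and the pyramid expansion of the training set is omitted.

```python
# Minimal, hypothetical sketch of a cascade sparse-representation classifier
# in the spirit of ML-CSR. Not the authors' code; module names and heuristics
# are assumptions for illustration only.
import numpy as np

def omp(D, y, k=10):
    """Orthogonal matching pursuit: greedy sparse code of y over dictionary D (columns = atoms)."""
    residual, idx, coef = y.copy(), [], np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j in idx:
            break
        idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def class_residuals(D, labels, feat, x):
    """Per-class reconstruction error ||feat - D_c x_c||, as in SRC-style decisions."""
    return np.array([np.linalg.norm(feat - D[:, labels == c] @ x[labels == c])
                     for c in np.unique(labels)])

def cascade_predict(D, labels, y, n_levels=2, k=10):
    """Cascade: level 0 codes the raw sample; each later level codes the
    reconstruction error of the previous one (an ETF-like error-to-feature step).
    Level scores are fused with weights from each level's decision margin
    (a crude stand-in for GAW's accuracy-based weights)."""
    classes = np.unique(labels)
    feat, scores, weights = y.copy(), [], []
    for _ in range(n_levels):
        x = omp(D, feat, k)
        r = class_residuals(D, labels, feat, x)
        s = np.sort(r)
        scores.append(r)
        weights.append((s[1] - s[0]) / (s[-1] + 1e-12))  # bigger margin -> more trust
        feat = feat - D @ x                              # ETF-like: forward the error
    w = np.array(weights) / (np.sum(weights) + 1e-12)
    return classes[int(np.argmin(w @ np.stack(scores)))]

if __name__ == "__main__":
    # Toy usage on synthetic data (purely illustrative).
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 40))
    D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms
    labels = np.repeat(np.arange(4), 10)             # 4 classes, 10 atoms each
    y = D[:, 3] + 0.05 * rng.standard_normal(64)     # noisy copy of a class-0 atom
    print(cascade_predict(D, labels, y))             # expected: 0
```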
Page(s): 2451 - 2464
Date of Publication: 14 November 2022
