Impact Statement:
Existing deep SAEs do not consider the relationships between neighboring samples, between similar samples, or between samples' deep features and their original features (we call these relationships hierarchical structural information). This limitation means that SAEs obtain deep features only for individual original samples (we call this "flat" deep learning). With an improvement of 2.5% to 34% over representative existing SAEs, the proposed NE_ESAE realizes deep learning on hierarchically structured samples, thereby obtaining deep features for different layers of structured samples that are more complementary to the original features. In addition, the proposed NE_ESAE realizes cooperative deep sample and feature transformation. More importantly, the proposed model can be regarded as a framework rather than a concrete model, in that the innovations achieved in this framework can be applied to other deep neural networks and transform their current working patterns.
Abstract:
A stacked autoencoder (SAE) is a widely used deep network. However, existing deep SAEs focus on original samples without considering the hierarchical structural information between samples, which limits the accuracy of the SAE. In recent years, state-of-the-art SAEs have introduced improvements in network structure, cost function, and parameter optimization, thereby enhancing accuracy; however, the problem mentioned above remains unsolved. Therefore, this article is concerned with how to design an SAE that can conduct deep learning on hierarchically structured samples. The proposed SAE, the neighboring envelope embedded stacked autoencoder (NE_ESAE), mainly consists of two parts. The first is the neighboring sample envelope learning mechanism (NSELM), which constructs sample pairs by combining neighboring samples. In addition, the NSELM constructs multilayer sample spaces by multilayer iterative mean clustering, which considers similar samples and generates layers of envelope samples.
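The multilayer iterative mean clustering step described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual algorithm: it assumes a standard k-means-style mean-update rule and treats each layer's cluster means as that layer's envelope samples; the function names, the per-layer cluster counts, and the iteration count are hypothetical.

```python
import numpy as np

def mean_cluster(X, k, iters=10, seed=0):
    """One layer of iterative mean clustering: assign each sample to the
    nearest of k centers, then update each center to the mean of its
    cluster (a standard k-means step; the paper's exact rule may differ)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # Pairwise distances between samples and current centers.
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def build_envelope_layers(X, layer_ks):
    """Build a multilayer sample space: each layer clusters the previous
    layer's envelope samples, so similar samples are merged level by level."""
    layers = [X]
    for k in layer_ks:
        centers, _ = mean_cluster(layers[-1], k)
        layers.append(centers)  # cluster means act as this layer's envelope samples
    return layers

# Toy data: 60 samples with 8 features, condensed into 20 and then 5 envelopes.
X = np.random.default_rng(1).normal(size=(60, 8))
layers = build_envelope_layers(X, layer_ks=[20, 5])
print([layer.shape for layer in layers])  # [(60, 8), (20, 8), (5, 8)]
```

Each successive layer is a coarser summary of the one below it, which matches the abstract's idea of sample spaces that group increasingly similar samples.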
Published in: IEEE Transactions on Artificial Intelligence (Volume 5, Issue 2, February 2024)