Abstract:
Generally, in order to learn sparse representations for raw inputs via an auto-encoder, the Kullback-Leibler (KL) divergence is introduced into the loss function as a sparsity regularizer that penalizes active code units. In fact, other sparsity regularizers exist besides the KL divergence. This paper introduces several classical sparsity regularizers into auto-encoders and gives an empirical survey of auto-encoders with different sparsity regularizers. Specifically, we analyze two other sparsity regularizers that are commonly used in sparse coding. In addition, we consider the effect of different activation functions and different sparsity regularizers on the learning performance of auto-encoders. Our experiments are conducted on the MNIST and COIL datasets.
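To make the setup concrete, below is a minimal sketch of a sparse auto-encoder loss in PyTorch: reconstruction error plus a weighted sparsity penalty on the code layer, with the KL-divergence regularizer described in the abstract and an L1 penalty as one example of a regularizer borrowed from sparse coding. The network sizes, the target sparsity rho, the weight beta, and the choice of L1 as the alternative regularizer are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    """Single-hidden-layer auto-encoder whose code layer is regularized for sparsity."""
    def __init__(self, n_input=784, n_hidden=196):  # sizes are assumptions (e.g. MNIST-like input)
        super().__init__()
        self.encoder = nn.Linear(n_input, n_hidden)
        self.decoder = nn.Linear(n_hidden, n_input)

    def forward(self, x):
        code = torch.sigmoid(self.encoder(x))       # code activations in (0, 1)
        recon = torch.sigmoid(self.decoder(code))
        return recon, code

def kl_sparsity(code, rho=0.05, eps=1e-8):
    """KL divergence between a target sparsity rho and the mean activation of each code unit."""
    rho_hat = code.mean(dim=0).clamp(eps, 1 - eps)  # average activation per hidden unit
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()

def l1_sparsity(code):
    """L1 penalty on code activations, a regularizer commonly used in sparse coding (assumed example)."""
    return code.abs().mean()

def loss_fn(x, recon, code, beta=1e-3, regularizer=kl_sparsity):
    """Reconstruction error plus a weighted sparsity penalty on the code layer."""
    return F.mse_loss(recon, x) + beta * regularizer(code)

# Usage sketch: one gradient step on a random mini-batch standing in for MNIST images.
model = SparseAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)
recon, code = model(x)
loss = loss_fn(x, recon, code)          # swap regularizer=l1_sparsity to compare penalties
opt.zero_grad(); loss.backward(); opt.step()
```

Swapping the `regularizer` argument is the only change needed to compare different sparsity penalties under the same auto-encoder, which mirrors the kind of comparison the abstract describes.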
Date of Conference: 12-17 July 2015
Date Added to IEEE Xplore: 01 October 2015