Abstract:
With the arrival of the big data era, data frequently arrive in large volumes and with high dimensionality, which poses a considerable challenge for data compression in network representation and analysis. Learning an effective low-dimensional representation has a dramatic influence on the performance of specific network learning tasks. In this paper, we propose an unsupervised embedding learning scheme based on deep Siamese neural networks, aiming to learn an efficient low-dimensional feature subspace. Unsupervised embedding learning is a difficult but interesting task because the search for a representation is performed without the guidance of class label information. A Siamese network is a neural network that can learn an efficient feature subspace in a supervised mode: it trains two networks with shared weights simultaneously, feeding them pairs sampled at random from the same dataset. As a result, the feature space is projected onto a low-dimensional subspace in which similar samples lie at distances close to zero, whereas dissimilar samples are separated by distances greater than a predefined margin. We further discuss the deep Siamese neural network in an unsupervised mode and its application to embedding learning. The proposed method can also be used to address semi-supervised feature representation problems. Finally, the learned unsupervised embedding is validated on eight publicly available databases comprising images, voices, and text documents. Extensive experiments demonstrate the superiority of the proposed method over existing state-of-the-art embedding approaches.
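The pull-together/push-apart behavior described above is typically realized by a margin-based pairwise loss. The following is a minimal sketch of the standard contrastive loss on an embedding distance, assuming the conventional formulation (the paper's exact objective and margin value may differ):

```python
import numpy as np

def contrastive_loss(d, similar, margin=1.0):
    """Contrastive loss over pairwise embedding distances d.

    similar == 1 marks a similar pair: its distance is driven toward zero.
    similar == 0 marks a dissimilar pair: it is penalized only while its
    distance stays below the predefined margin.
    (Assumed standard form; not necessarily the paper's exact loss.)
    """
    d = np.asarray(d, dtype=float)
    similar = np.asarray(similar, dtype=float)
    return similar * d**2 + (1.0 - similar) * np.maximum(0.0, margin - d)**2

# One similar pair at distance 0.1 and one dissimilar pair at distance 0.4:
losses = contrastive_loss(d=[0.1, 0.4], similar=[1, 0], margin=1.0)
```

Note that a dissimilar pair already farther apart than the margin contributes zero loss, which is what allows the subspace to stop pushing pairs that are sufficiently separated.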
Published in: IEEE Transactions on Network Science and Engineering (Volume 7, Issue 1, Jan.-March 2020)