Abstract:
Multi-view feature learning has garnered much attention recently, since many real-world datasets are composed of different representations or views. How to explore the consensus structure across views while eliminating inconsistent noise remains a challenging problem in multi-view feature learning. In this paper, we propose a multi-way deep autoencoder for multi-view feature learning that explores the deep consensus structure while maintaining the efficiency of the encoding process. Through a multi-way encoding process, we embed the original feature views into hierarchically structured, nonnegative representations at multiple levels. Following the structure of these embedded representations, the decoding process recovers the diverse and important information layer by layer. Experiments on two image datasets show the superior performance of our method.
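A minimal PyTorch sketch of the kind of architecture the abstract describes is given below. The class name, layer sizes, and the use of ReLU to keep the codes nonnegative are illustrative assumptions; the abstract does not specify the authors' exact model, so this should be read as an interpretation, not the proposed method.

```python
import torch
import torch.nn as nn

class MultiWayDeepAutoencoder(nn.Module):
    """Illustrative multi-view autoencoder: each view is encoded into a
    hierarchy of nonnegative codes (ReLU keeps them >= 0), and the decoder
    reconstructs each view layer by layer along that hierarchy."""

    def __init__(self, view_dims, hidden_dims=(256, 128, 64)):
        super().__init__()
        # One encoder branch per view; every branch maps into the same level sizes.
        self.encoders = nn.ModuleList([
            nn.ModuleList([
                nn.Sequential(
                    nn.Linear(in_dim if level == 0 else hidden_dims[level - 1],
                              hidden_dims[level]),
                    nn.ReLU())  # nonnegativity of the embedded representation
                for level in range(len(hidden_dims))
            ])
            for in_dim in view_dims
        ])
        # Decoder mirrors the hierarchy and maps back to each view's dimension.
        self.decoders = nn.ModuleList([
            nn.ModuleList([
                nn.Sequential(
                    nn.Linear(hidden_dims[level],
                              hidden_dims[level - 1] if level > 0 else out_dim),
                    nn.ReLU() if level > 0 else nn.Identity())
                for level in reversed(range(len(hidden_dims)))
            ])
            for out_dim in view_dims
        ])

    def forward(self, views):
        recons, codes = [], []
        for v, x in enumerate(views):
            h, level_codes = x, []
            for enc in self.encoders[v]:
                h = enc(h)                # nonnegative code at this level
                level_codes.append(h)
            codes.append(level_codes)
            for dec in self.decoders[v]:
                h = dec(h)                # recover information layer by layer
            recons.append(h)
        return recons, codes


# Usage: two toy views of the same 8 samples, reconstructed jointly.
if __name__ == "__main__":
    views = [torch.randn(8, 100), torch.randn(8, 50)]
    model = MultiWayDeepAutoencoder(view_dims=[100, 50])
    recons, codes = model(views)
    loss = sum(nn.functional.mse_loss(r, x) for r, x in zip(recons, views))
    print(loss.item())
```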
Published in: ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Date of Conference: 04-08 May 2020
Date Added to IEEE Xplore: 09 April 2020