Abstract:
Sentiment classification on spoken language transcriptions has received comparatively little attention. A practical system employing the spoken language modality must rely on transcriptions from an Automatic Speech Recognition (ASR) engine, which are inherently prone to errors. The main interest of this paper lies in improving sentiment classification on erroneous ASR transcriptions. Our aim is to improve the representation of the ASR transcripts using the manual transcripts and other modalities, such as audio and video, that are available during training but not necessarily at test time. We adopt an approach based on Deep Canonical Correlation Analysis (DCCA) and propose two new extensions of DCCA to enhance the ASR view using multiple modalities. We present a detailed evaluation of the performance of our approach on datasets of opinion videos (CMU-MOSI and CMU-MOSEI) collected from YouTube.
Published in: 2018 IEEE Spoken Language Technology Workshop (SLT)
Date of Conference: 18-21 December 2018
Date Added to IEEE Xplore: 14 February 2019