Analysis of Sentiment on Movie Reviews Using Word Embedding Self-Attentive LSTM

Soubraylu Sivakumar, Ratnavel Rajalakshmi
Copyright © 2021 | Volume 12 | Issue 2 | Pages: 33-52
ISSN: 1941-6237 | EISSN: 1941-6245 | EISBN13: 9781799860273 | DOI: 10.4018/IJACI.2021040103
Cite Article

MLA

Sivakumar, Soubraylu, and Ratnavel Rajalakshmi. "Analysis of Sentiment on Movie Reviews Using Word Embedding Self-Attentive LSTM." IJACI, vol. 12, no. 2, 2021, pp. 33-52. http://doi.org/10.4018/IJACI.2021040103

APA

Sivakumar, S. & Rajalakshmi, R. (2021). Analysis of Sentiment on Movie Reviews Using Word Embedding Self-Attentive LSTM. International Journal of Ambient Computing and Intelligence (IJACI), 12(2), 33-52. http://doi.org/10.4018/IJACI.2021040103

Chicago

Sivakumar, Soubraylu, and Ratnavel Rajalakshmi. "Analysis of Sentiment on Movie Reviews Using Word Embedding Self-Attentive LSTM." International Journal of Ambient Computing and Intelligence (IJACI) 12, no. 2 (2021): 33-52. http://doi.org/10.4018/IJACI.2021040103

Abstract

In the contemporary world, people share their thoughts rapidly on social media. Mining and extracting knowledge from this information to perform sentiment analysis is a complex task. Even though automated machine learning algorithms and techniques are available, extracting semantic and relevant key terms from a sparse representation of a review remains difficult. Word embedding improves text classification by addressing both the sparse-matrix problem and word semantics. In this paper, a novel architecture is proposed that combines long short-term memory (LSTM) with word embedding to capture the semantic relationship between neighboring words, and a weighted self-attention mechanism is applied to extract the key terms from the reviews. Based on experimental analysis on the IMDB dataset, the authors show that the proposed word-embedding self-attentive LSTM architecture achieved an F1 score of 88.67%, while LSTM and word-embedding LSTM-based models achieved F1 scores of 84.42% and 85.69%, respectively.
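The paper does not publish its implementation here, but the weighted self-attention it describes is commonly realised as additive attention pooling over the LSTM's hidden states: each state is scored, the scores are softmax-normalised, and the states are combined into one weighted context vector for classification. A minimal NumPy sketch of that mechanism, assuming this additive form (all names and dimensions below are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: T time steps, d hidden units.
T, d = 6, 8
H = rng.normal(size=(T, d))   # stand-in for LSTM hidden states h_1..h_T
W = rng.normal(size=(d, d))   # attention projection matrix (assumed form)
v = rng.normal(size=(d,))     # scoring vector (assumed form)

def softmax(x):
    z = np.exp(x - x.max())   # subtract max for numerical stability
    return z / z.sum()

# Additive self-attention: score each hidden state, normalise, pool.
scores = np.tanh(H @ W) @ v   # e_t = v . tanh(W h_t), shape (T,)
alpha = softmax(scores)       # attention weights over time steps, sum to 1
context = alpha @ H           # weighted sum of hidden states, shape (d,)

print(context.shape)          # (8,)
```

In a full model, `context` would feed a dense sigmoid/softmax layer for the positive/negative decision, and `W` and `v` would be learned jointly with the embedding and LSTM weights; the attention weights `alpha` also indicate which review terms the model treats as key.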
