skip to main content
survey

A Survey of Cutting-edge Multimodal Sentiment Analysis

Published: 25 April 2024 Publication History

Abstract

The rapid growth of the internet has reached the fourth generation, i.e., web 4.0, which supports Sentiment Analysis (SA) in many applications such as social media, marketing, risk management, healthcare, businesses, websites, data mining, e-learning, psychology, and many more. Sentiment analysis is a powerful tool for governments, businesses, and researchers to analyse users’ emotions and mental states in order to generate opinions and reviews about products, services, and daily activities. In the past years, several SA techniques based on Machine Learning (ML), Deep Learning (DL), and other soft computing approaches were proposed. However, growing data size, subjectivity, and diversity pose a significant challenge to enhancing the efficiency of existing techniques and incorporating current development trends, such as Multimodal Sentiment Analysis (MSA) and fusion techniques. With the aim of assisting the enthusiastic researcher to navigating the current trend, this article presents a comprehensive study of various literature to handle different aspects of SA, including current trends and techniques across multiple domains. In order to clarify the future prospects of MSA, this article also highlights open issues and research directions that lead to a number of unresolved challenges.

References

[1]
Sarah A. Abdu, Ahmed H. Yousef, and Ashraf Salem. 2021. Multimodal video sentiment analysis using deep learning approaches, a survey. Information Fusion 76 (2021), 204–226.
[2]
Samar Al-Saqqa, Heba Abdel-Nabi, and Arafat Awajan. 2018. A survey of textual emotion detection. In 2018 8th International Conference on Computer Science and Information Technology (CSIT). IEEE, 136–142.
[3]
Firoj Alam and Giuseppe Riccardi. 2014. Predicting personality traits using multimodal information. In Proceedings of the 2014 ACM Multi Media on Workshop on Computational Personality Recognition. 15–18.
[4]
Salma Alhagry, Aly Aly Fahmy, and Reda A. El-Khoribi. 2017. Emotion recognition based on EEG using LSTM recurrent neural network. Emotion 8, 10 (2017), 355–358.
[5]
Nourah Alswaidan and Mohamed El Bachir Menai. 2020. A survey of state-of-the-art approaches for emotion recognition in text. Knowledge and Information Systems (2020), 1–51.
[6]
Deepali Aneja, Bindita Chaudhuri, Alex Colburn, Gary Faigin, Linda Shapiro, and Barbara Mones. 2018. Learning to generate 3D stylized character expressions from humans. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 160–169.
[7]
Vassilis Athitsos, Carol Neidle, Stan Sclaroff, Joan Nash, Alexandra Stefan, Quan Yuan, and Ashwin Thangali. 2008. The American Sign Language lexicon video dataset. In 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. IEEE, 1–8.
[8]
Pradeep K. Atrey, M. Anwar Hossain, Abdulmotaleb El Saddik, and Mohan S. Kankanhalli. 2010. Multimodal fusion for multimedia analysis: A survey. Multimedia Systems 16, 6 (2010), 345–379.
[9]
Yusuf Aytar, Carl Vondrick, and Antonio Torralba. 2016. SoundNet: Learning sound representations from unlabeled video. In Advances in Neural Information Processing Systems. 892–900.
[10]
Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. 2010. SentiWordNet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In LREC, Vol. 10. 2200–2204.
[11]
Alexandra Balahur, Rada Mihalcea, and Andrés Montoyo. 2014. Computational approaches to subjectivity and sentiment analysis: Present and envisaged methods and applications. Computer Speech & Language 28, 1 (2014), 1–6.
[12]
Tadas Baltrušaitis, Peter Robinson, and Louis-Philippe Morency. 2016. OpenFace: An open source facial behavior analysis toolkit. In 2016 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 1–10.
[13]
Anil Bandhakavi, Nirmalie Wiratunga, Stewart Massie, and Deepak Padmanabhan. 2017. Lexicon generation for emotion detection from text. IEEE Intelligent Systems 32, 1 (2017), 102–108.
[14]
Raoul Biagioni. 2015. The SenticNet Sentiment Lexicon: Exploring Semantic Richness in Multi-Word Concepts. Ph. D. Dissertation. Springer.
[15]
Marouane Birjali, Mohammed Kasri, and Abderrahim Beni-Hssane. 2021. A comprehensive survey on sentiment analysis: Approaches, challenges and trends. Knowledge-Based Systems 226 (2021), 107134.
[16]
Javad Birjandtalab, Diana Cogan, Maziyar Baran Pouyan, and Mehrdad Nourani. 2016. A non-EEG biosignals dataset for assessment and visualization of neurological status. In 2016 IEEE International Workshop on Signal Processing Systems (SiPS). IEEE, 110–114.
[17]
Victoria Bloom, Dimitrios Makris, and Vasileios Argyriou. 2012. G3D: A gaming action dataset and real time action recognition evaluation framework. In 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. IEEE, 7–12.
[18]
Paul Boersma and Vincent Van Heuven. 2001. Speak and unSpeak with PRAAT. Glot International 5, 9/10 (2001), 341–347.
[19]
Erik Boiy and Marie-Francine Moens. 2009. A machine learning approach to sentiment analysis in multilingual Web texts. Information Retrieval 12, 5 (2009), 526–558.
[20]
Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: Commonsense transformers for knowledge graph construction. In Association for Computational Linguistics (ACL) 4762–4779.
[21]
Lachezar Bozhkov, Petia Koprinkova-Hristova, and Petia Georgieva. 2016. Learning to decode human emotions with Echo State Networks. Neural Networks 78 (2016), 112–119.
[22]
Felipe Bravo-Marquez, Marcelo Mendoza, and Barbara Poblete. 2014. Meta-level sentiment models for big social data analysis. Knowledge-Based Systems 69 (2014), 86–99.
[23]
Felix Burkhardt, Astrid Paeschke, Miriam Rolfes, Walter F. Sendlmeier, and Benjamin Weiss. 2005. A database of German emotional speech. In Ninth European Conference on Speech Communication and Technology.
[24]
Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette N. Chang, Sungbok Lee, and Shrikanth S. Narayanan. 2008. IEMOCAP: Interactive emotional dyadic motion capture database. Language Resources and Evaluation 42, 4 (2008), 335.
[25]
Carlos Busso, Angeliki Metallinou, and Shrikanth S. Narayanan. 2011. Iterative feature normalization for emotional speech detection. In 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 5692–5695.
[26]
Erik Cambria, Dipankar Das, Sivaji Bandyopadhyay, and Antonio Feraco. 2017. A Practical Guide to Sentiment Analysis. Springer.
[27]
Erik Cambria, Newton Howard, Jane Hsu, and Amir Hussain. 2013. Sentic blending: Scalable multimodal fusion for the continuous interpretation of semantics and sentics. In 2013 IEEE Symposium on Computational Intelligence for Human-Like Intelligence (CIHLI). IEEE, 108–117.
[28]
Erik Cambria, Daniel Olsher, and Dheeraj Rajagopal. 2014. SenticNet 3: A common and common-sense knowledge base for cognition-driven sentiment analysis. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence. 1515–1521.
[29]
George Caridakis, Konstantinos Moutselos, and Ilias Maglogiannis. 2013. Natural interaction expressivity modeling and analysis. In Proceedings of the 6th International Conference on PErvasive Technologies Related to Assistive Environments. 1–6.
[30]
Ankush Chatterjee, Kedhar Nath Narahari, Meghana Joshi, and Puneet Agrawal. 2019. SemEval-2019 Task 3: Emocontext contextual emotion detection in text. In Proceedings of the 13th International Workshop on Semantic Evaluation. 39–48.
[31]
Arindam Chatterjere, Vineeth Guptha, Parul Chopra, and Amitava Das. 2020. Minority positive sampling for switching points-an anecdote for the code-mixing language modeling. In Proceedings of the 12th Language Resources and Evaluation Conference. 6228–6236.
[32]
Despoina Chatzakou, Athena Vakali, and Konstantinos Kafetsios. 2017. Detecting variation of emotions in online activities. Expert Systems with Applications 89 (2017), 318–332.
[33]
Lawrence S. Chen, Hai Tao, Thomas S. Huang, Tsutomu Miyasato, and Ryohei Nakatsu. 1998. Emotion recognition from audiovisual information. In 1998 IEEE Second Workshop on Multimedia Signal Processing (Cat. No. 98EX175). IEEE, 83–88.
[34]
Mingyu Chen, Ghassan AlRegib, and Biing-Hwang Juang. 2012. A new 6D motion gesture database and the benchmark results of feature-based statistical recognition. In 2012 IEEE International Conference on Emerging Signal Processing Applications. IEEE, 131–134.
[35]
Minghai Chen, Sen Wang, Paul Pu Liang, Tadas Baltrušaitis, Amir Zadeh, and Louis-Philippe Morency. 2017. Multimodal sentiment analysis with word-level fusion and reinforcement learning. In Proceedings of the 19th ACM International Conference on Multimodal Interaction. 163–171.
[36]
Tao Chen, Damian Borth, Trevor Darrell, and Shih-Fu Chang. 2014. DeepSentiBank: Visual sentiment concept classification with deep convolutional neural networks. arXiv preprint arXiv:1410.8586 (2014).
[37]
Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. QuAC: Question answering in context. arXiv preprint arXiv:1808.07036 (2018).
[38]
Yoonjung Choi, Janyce Wiebe, and Rada Mihalcea. 2017. Coarse-grained+/-effect word sense disambiguation for implicit sentiment analysis. IEEE Transactions on Affective Computing 8, 4 (2017), 471–479.
[39]
Christopher Conly, Paul Doliotis, Pat Jangyodsuk, Rommel Alonzo, and Vassilis Athitsos. 2013. Toward a 3D body part detection video dataset and hand tracking benchmark. In Proceedings of the 6th International Conference on PErvasive Technologies Related to Assistive Environments. 1–6.
[40]
Timothy F. Cootes, Gareth J. Edwards, and Christopher J. Taylor. 2001. Active appearance models. IEEE Transactions on Pattern Analysis and Machine Intelligence 23, 6 (2001), 681–685.
[41]
Alfredo Cuzzocrea and Enzo Mumolo. 2021. Dempster-Shafer-based fusion of multi-modal biometrics for supporting identity verification effectively and efficiently. In 2021 IEEE 2nd International Conference on Human-Machine Systems (ICHMS). IEEE, 1–8.
[42]
Nadia F. F. Da Silva, Eduardo R. Hruschka, and Estevam R. Hruschka Jr. 2014. Tweet sentiment analysis with classifier ensembles. Decision Support Systems 66 (2014), 170–179.
[43]
Ringki Das and Thoudam Doren Singh. 2023. Multimodal sentiment analysis: A survey of methods, trends and challenges. Comput. Surveys (2023).
[44]
D. Datcu and L. Rothkrantz. 2009. Multimodal recognition of emotions in car environments. DCI&I 2009 (2009).
[45]
Dragos Datcu and Leon J. M. Rothkrantz. 2014. Semantic audio-visual data fusion for automatic emotion recognition. Emotion Recognition: A Pattern Analysis Approach (2014), 411–435.
[46]
Alex Marino Goncalves de Almeida, Sylvio Barbon, and Emerson Cabrera Paraiso. 2016. Multi-class emotions classification by sentic levels as features in sentiment analysis. In 2016 5th Brazilian Conference on Intelligent Systems (BRACIS). IEEE, 486–491.
[47]
Lingjia Deng. 2017. Entity/Event-Level Sentiment Detection and Inference. Ph. D. Dissertation. University of Pittsburgh.
[48]
Lingjia Deng and Janyce Wiebe. 2014. Sentiment propagation via implicature constraints. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics. 377–385.
[49]
Zhi-Hong Deng, Kun-Hu Luo, and Hong-Liang Yu. 2014. A study of supervised term weighting scheme for sentiment analysis. Expert Systems with Applications 41, 7 (2014), 3506–3513.
[50]
Paolo Detti, Giampaolo Vatti, and Garazi Zabalo Manrique de Lara. 2020. EEG synchronization analysis for seizure prediction: A study on data of noninvasive recordings. Processes 8, 7 (2020), 846.
[51]
David DeVault, Ron Artstein, Grace Benn, Teresa Dey, Ed Fast, Alesia Gainer, Kallirroi Georgila, Jon Gratch, Arno Hartholt, Margaux Lhommet, Gale Lucas, Stacy Marsella, Fabrizio Morbini, Angela Nazarian, Stefan Scherer, Giota Stratou, Apar Suri, David Traum, Rachel Wood, Yuyu Xu, Albert Rizzo, and Louis-Philippe Morency. 2014. SimSensei Kiosk: A virtual human interviewer for healthcare decision support. In Proceedings of the 2014 International Conference on Autonomous Agents and Multi-Agent Systems. 1061–1068.
[52]
Luca Dini and André Bittar. 2016. Emotion analysis on Twitter: The hidden challenge. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16). 3953–3958.
[53]
Sidney K. D’Mello and Jacqueline Kory. 2015. A review and meta-analysis of multimodal affect detection systems. ACM Computing Surveys (CSUR) 47, 3 (2015), 1–36.
[54]
Peter Sheridan Dodds and Christopher M. Danforth. 2010. Measuring the happiness of large-scale written expression: Songs, blogs, and presidents. Journal of Happiness Studies 11, 4 (2010), 441–456.
[55]
Yasmina Douiji and Hajar Mousanif. 2015. I-CARE: Intelligent context aware system for recognizing emotions from text. In 2015 10th International Conference on Intelligent Systems: Theories and Applications (SITA). IEEE, 1–5.
[56]
Olivier Duchenne, Ivan Laptev, Josef Sivic, Francis Bach, and Jean Ponce. 2009. Automatic annotation of human actions in video. In 2009 IEEE 12th International Conference on Computer Vision. IEEE, 1491–1498.
[57]
Moataz El Ayadi, Mohamed S. Kamel, and Fakhri Karray. 2011. Survey on speech emotion recognition: Features, classification schemes, and databases. Pattern Recognition 44, 3 (2011), 572–587.
[58]
Joseph G. Ellis, Brendan Jou, and Shih-Fu Chang. 2014. Why we watch the news: A dataset for exploring sentiment in broadcast video news. In Proceedings of the 16th International Conference on Multimodal Interaction. 104–111.
[59]
Andrea Esuli and Fabrizio Sebastiani. 2006. SentiWordNet: A publicly available lexical resource for opinion mining. In LREC, Vol. 6. Citeseer, 417–422.
[60]
Florian Eyben, Martin Wöllmer, and Björn Schuller. 2009. OpenEAR–introducing the munich open-source emotion and affect recognition toolkit. In 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops. IEEE, 1–6.
[61]
Golnoosh Farnadi, Geetha Sitaraman, Mehrdad Rohani, Michal Kosinski, David Stillwell, Marie-Francine Moens, Sergio Davalos, and Martine De Cock. 2014. How are you doing?: Emotions and personality in Facebook. In 2nd Workshop on Emotions and Personality in Personalized Services (EMPIRE 2014); Workshop at the 22nd Conference on User Modelling, Adaptation and Personalization (UMAP 2014). 45–56.
[62]
Christiane Fellbaum (Ed.). 1998. WordNet: An Electronic Lexical Database. MIT Press.
[63]
Antonio Fernández-Caballero, Arturo Martínez-Rodrigo, José Manuel Pastor, José Carlos Castillo, Elena Lozano-Monasor, María T. López, Roberto Zangróniz, José Miguel Latorre, and Alicia Fernández-Sotos. 2016. Smart environment architecture for emotion detection and regulation. Journal of Biomedical Informatics 64 (2016), 55–73.
[64]
Simon Fothergill, Helena Mentis, Pushmeet Kohli, and Sebastian Nowozin. 2012. Instructing people for training gestural interactive systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 1737–1746.
[65]
Steve Fox, Kuldeep Karnawat, Mark Mydland, Susan Dumais, and Thomas White. 2005. Evaluating implicit measures to improve web search. ACM Trans. Inf. Syst. 23, 2 (April2005), 147–168. DOI:
[66]
E. Friesen and Paul Ekman. 1978. Facial action coding system: A technique for the measurement of facial movement. Palo Alto 3 (1978).
[67]
Sujit Fulse, Rekha Sugandhi, and Anjali Mahajan. 2014. A survey on multimodal sentiment analysis. International Journal of Engineering Research and Technology 3, 11 (2014), 1233–1238.
[68]
Adrien Gaidon, Zaid Harchaoui, and Cordelia Schmid. 2011. Actom sequence models for efficient action detection. In CVPR 2011. IEEE, 3201–3208.
[69]
Björn Gambäck and Amitava Das. 2016. Comparing the level of code-switching in corpora. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16). 1850–1855.
[70]
Ankita Gandhi, Kinjal Adhvaryu, Soujanya Poria, Erik Cambria, and Amir Hussain. 2023. Multimodal sentiment analysis: A systematic review of history, datasets, multimodal fusion methods, applications, challenges and future directions. Information Fusion 91 (2023), 424–444.
[71]
Miguel A. García-González, Ariadna Argelagós-Palau, Mireya Fernández-Chimeno, and Juan Ramos-Castro. 2013. A comparison of heartbeat detectors for the seismocardiogram. In Computing in Cardiology 2013. IEEE, 461–464.
[72]
Aitor García-Pablos, Montse Cuadros, and German Rigau. 2018. W2VLDA: Almost unsupervised system for aspect based sentiment analysis. Expert Systems with Applications 91 (2018), 127–137.
[73]
Jort F. Gemmeke, Daniel P. W. Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R. Channing Moore, Manoj Plakal, and Marvin Ritter. 2017. Audio set: An ontology and human-labeled dataset for audio events. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 776–780.
[74]
Manoochehr Ghiassi, James Skinner, and David Zimbra. 2013. Twitter brand sentiment analysis: A hybrid system using n-gram analysis and dynamic artificial neural network. Expert Systems with Applications 40, 16 (2013), 6266–6282.
[75]
Deepak Ghimire and Joonwhoan Lee. 2013. Geometric feature-based facial expression recognition in image sequences using multi-class AdaBoost and support vector machines. Sensors 13, 6 (2013), 7714–7734.
[76]
Deepak Ghimire, Joonwhoan Lee, Ze-Nian Li, Sunghwan Jeong, Sang Hyun Park, and H. Sub Choi. 2015. Recognition of facial expressions based on tracking and selection of discriminative geometric features. International Journal of Multimedia and Ubiquitous Engineering 10, 3 (2015), 35–44.
[77]
Che Gilbert and Erric Hutto. 2014. Vader: A parsimonious rule-based model for sentiment analysis of social media text. In Eighth International Conference on Weblogs and Social Media (ICWSM-14), Vol. 81. 82.
[78]
Nikolaos Gkalelis, Hansung Kim, Adrian Hilton, Nikos Nikolaidis, and Ioannis Pitas. 2009. The i3DPost multi-view and 3D human action/interaction database. In 2009 Conference for Visual Media Production. IEEE, 159–168.
[79]
Milan Gnjatovic and Dietmar Rosner. 2010. Inducing genuine emotions in simulated speech-based human-machine interaction: The NIMITEK corpus. IEEE Transactions on Affective Computing 1, 2 (2010), 132–144.
[80]
Lena Gorelick, Moshe Blank, Eli Shechtman, Michal Irani, and Ronen Basri. 2007. Actions as space-time shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence 29, 12 (2007), 2247–2253.
[81]
Gutemberg Guerra-Filho and Arnab Biswas. 2012. The human motion database: A cognitive and parametric sampling of human motion. Image and Vision Computing 30, 3 (2012), 251–261.
[82]
Hatice Gunes and Massimo Piccardi. 2005. Fusing face and body display for bi-modal emotion recognition: Single frame analysis and multi-frame post integration. In International Conference on Affective Computing and Intelligent Interaction. Springer, 102–111.
[83]
Shikha Gupta, Jafreezal Jaafar, and Wan Fatimah Wan Ahmad. 2012. Static hand gesture recognition using local Gabor filter. Procedia Engineering 41 (2012), 827–832.
[84]
Ivan Habernal, Tomáš Ptáček, and Josef Steinberger. 2014. Supervised sentiment analysis in Czech social media. Information Processing & Management 50, 5 (2014), 693–707.
[85]
Olivier Habimana, Yuhua Li, Ruixuan Li, Xiwu Gu, and Ge Yu. 2020. Sentiment analysis using deep learning approaches: An overview. Science China Information Sciences 63, 1 (2020), 1–36.
[86]
Douiji yasmina, Mousannif Hajar, and Al Moatassime Hassan. 2016. Using YouTube comments for text-based emotion recognition. Procedia Computer Science 83 (2016), 292–299.
[87]
Kun Han, Dong Yu, and Ivan Tashev. 2014. Speech emotion recognition using deep neural network and extreme learning machine. In Interspeech. ISCA, 223–227.
[88]
S. L. Happy, Priyadarshi Patnaik, Aurobinda Routray, and Rajlakshmi Guha. 2015. The Indian spontaneous expression database for emotion recognition. IEEE Transactions on Affective Computing 8, 1 (2015), 131–142.
[89]
Ann-Kathrin Hartmann, Marx E. Tommaso, D. Moussallem, G. Publio, A. Valdestilhas, D. Esteves, and C. B. Neto. 2018. Generating a large dataset for neural question answering over the DBpedia knowledge base. In Workshop on Linked Data Management, WEBBR.
[90]
Maryam Hasan, Emmanuel Agu, and Elke Rundensteiner. 2014. Using hashtags as labels for supervised learning of emotions in Twitter messages. In ACM SIGKDD Workshop on Health Informatics, New York, USA.
[91]
Maryam Hasan, Elke Rundensteiner, and Emmanuel Agu. 2014. EMOTEX: Detecting emotions in Twitter messages. (2014).
[92]
Catherine Havasi, Robert Speer, and James Pustejovsky. 2009. Automatically suggesting semantic structure for a generative Lexicon ontology. In Proceedings of Fifth International Conference on Generative Approaches to the Lexicon.
[93]
Javier Hernandez, Pablo Paredes, Asta Roseway, and Mary Czerwinski. 2014. Under pressure: Sensing stress of computer users. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 51–60.
[94]
Shawn Hershey, Sourish Chaudhuri, Daniel P. W. Ellis, Jort F. Gemmeke, Aren Jansen, R. Channing Moore, Manoj Plakal, Devin Platt, Rif A. Saurous, Bryan Seybold, Malcolm Slaney, Ron J. Weiss, and Kevin Wilson. 2017. CNN architectures for large-scale audio classification. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 131–135.
[95]
Alexander Hogenboom, Bas Heerschop, Flavius Frasincar, Uzay Kaymak, and Franciska de Jong. 2014. Multi-lingual support for lexicon-based sentiment analysis guided by semantics. Decision Support Systems 62 (2014), 43–53.
[96]
Yi Hong, Quannan Li, Jiayan Jiang, and Zhuowen Tu. 2011. Learning a mixture of sparse distance metrics for classification and dimensionality reduction. In 2011 International Conference on Computer Vision. IEEE, 906–913.
[97]
Hao Hu, Ming-Xing Xu, and Wei Wu. 2007. GMM supervector based SVM with spectral features for speech emotion recognition. In 2007 IEEE International Conference on Acoustics, Speech and Signal Processing-ICASSP’07, Vol. 4. IEEE, IV–413.
[98]
Faliang Huang, Shichao Zhang, Jilian Zhang, and Ge Yu. 2017. Multimodal learning for topic sentiment analysis in microblogging. Neurocomputing 253 (2017), 144–153.
[99]
Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. 2017. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 4700–4708.
[100]
Jie Huang, Wengang Zhou, Houqiang Li, and Weiping Li. 2015. Sign language recognition using 3D convolutional neural networks. In 2015 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 1–6.
[101]
Wegdan A. Hussien, Yahya M. Tashtoush, Mahmoud Al-Ayyoub, and Mohammed N. Al-Kabi. 2016. Are emoticons good enough to train emotion classifiers of Arabic tweets?. In 2016 7th International Conference on Computer Science and Information Technology (CSIT). IEEE, 1–6.
[102]
Gary Imai. 2005. Gestures: Body language and nonverbal communication. Retrieved Oct. 2020 (2005).
[103]
Maryam Imani and Gholam Ali Montazer. 2019. A survey of emotion recognition methods with emphasis on e-learning environments. Journal of Network and Computer Applications 147 (2019), 102423.
[104]
K. Indhuja and Raj P. C. Reghu. 2014. Fuzzy logic based sentiment analysis of product review documents. In 2014 First International Conference on Computational Systems and Communications (ICCSC). IEEE, 18–22.
[105]
M. Rizzo Irfan, M. Ali Fauzi, T. Tibyani, and N. Dyah Mentari. 2018. Twitter sentiment analysis on 2013 curriculum using ensemble features and k-nearest neighbor. International Journal of Electrical and Computer Engineering (IJECE) 8, 6 (2018), 5409–14.
[106]
Vinay Kumar Jain, Shishir Kumar, and Steven Lawrence Fernandes. 2017. Extraction of emotions from multilingual text using intelligent text processing and computational linguistics. Journal of Computational Science 21 (2017), 316–326.
[107]
Noppadon Jatupaiboon, Setha Pan-ngum, and Pasin Israsena. 2013. Real-time EEG-based happiness detection system. The Scientific World Journal 618649 (2013), 1–12.
[108]
S. Jerritta, M. Murugappan, R. Nagarajan, and Khairunizam Wan. 2011. Physiological signals based human emotion recognition: A review. In 2011 IEEE 7th International Colloquium on Signal Processing and its Applications. IEEE, 410–415.
[109]
Janusz Jezewski, Adam Matonia, Tomasz Kupka, Dawid Roj, and Robert Czabanski. 2012. Determination of fetal heart rate from abdominal signals: Evaluation of beat-to-beat accuracy in relation to the direct fetal electrocardiogram. Biomedical Engineering/BiomedizinisTechnik 57, 5 (2012), 383–394.
[110]
Alistair E. W. Johnson, Tom J. Pollard, Lu Shen, H. Lehman Li-Wei, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G. Mark. 2016. MIMIC-III, a freely accessible critical care database. Scientific Data 3, 1 (2016), 1–9.
[111]
Julia Jorgensen, George A. Miller, and Dan Sperber. 1984. Test of the mention theory of irony. Journal of Experimental Psychology: General 113, 1 (1984), 112.
[112]
Aditya Joshi, Pushpak Bhattacharyya, and Mark J. Carman. 2017. Automatic sarcasm detection: A survey. ACM Computing Surveys (CSUR) 50, 5 (2017), 1–22.
[113]
Aditya Joshi, Vinita Sharma, and Pushpak Bhattacharyya. 2015. Harnessing context incongruity for sarcasm detection. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). 757–762.
[114]
Takeo Kanade, Jeffrey F. Cohn, and Yingli Tian. 2000. Comprehensive database for facial expression analysis. In Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580). IEEE, 46–53.
[115]
Mangi Kang, Jaelim Ahn, and Kichun Lee. 2018. Opinion mining using ensemble text hidden Markov models for text classification. Expert Systems with Applications 94 (2018), 218–227.
[116]
Reshma Kar, Aruna Chakraborty, Amit Konar, and Ramadoss Janarthanan. 2013. Emotion recognition system by gesture analysis using fuzzy sets. In International Conference on Swarm, Evolutionary, and Memetic Computing. Springer, 354–363.
[117]
Manohar Karki, Qun Liu, Robert DiBiano, Saikat Basu, and Supratik Mukhopadhyay. 2018. Pixel-level reconstruction and classification for noisy handwritten Bangla characters. In 2018 16th International Conference on Frontiers in Handwriting Recognition (ICFHR). IEEE, 511–516.
[118]
Charalampos Karyotis, Faiyaz Doctor, Rahat Iqbal, Anne James, and Victor Chang. 2018. A fuzzy computational model of emotion for cloud based sentiment analysis. Information Sciences 433 (2018), 448–463.
[119]
Bob Kemp, Aeilko H. Zwinderman, Bert Tuk, Hilbert A. C. Kamphuisen, and Josefien J. L. Oberye. 2000. Analysis of a sleep-dependent neuronal feedback loop: The slow-wave microcontinuity of the EEG. IEEE Transactions on Biomedical Engineering 47, 9 (2000), 1185–1194.
[120]
Jonghwa Kim and Elisabeth André. 2008. Emotion recognition based on physiological changes in music listening. IEEE Transactions on Pattern Analysis and Machine Intelligence 30, 12 (2008), 2067–2083.
[121]
Nam Kim, Alex Krasner, Colin Kosinski, Michael Wininger, Maria Qadri, Zachary Kappus, Shabbar Danish, and William Craelius. 2016. Trending autoregulatory indices during treatment for traumatic brain injury. Journal of Clinical Monitoring and Computing 30, 6 (2016), 821–831.
[122]
Sesong Kim, Dong-Wook Kim, and Seung-Won Jung. 2017. Comparison and analysis of facial expression database. In Proceedings of the Korea Information Processing Society Conference. Korea Information Processing Society, 686–687.
[123]
Soo-Min Kim and Eduard Hovy. 2006. Extracting opinions, opinion holders, and topics expressed in online news media text. In Proceedings of the Workshop on Sentiment and Subjectivity in Text. 1–8.
[124]
Tae-Kyun Kim and Roberto Cipolla. 2008. Canonical correlation analysis of video volume tensors for action categorization and detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 31, 8 (2008), 1415–1428.
[125]
Svetlana Kiritchenko, Xiaodan Zhu, and Saif M. Mohammad. 2014. Sentiment analysis of short informal texts. Journal of Artificial Intelligence Research 50 (2014), 723–762.
[126]
Sander Koelstra, Christian Muhl, Mohammad Soleymani, Jong-Seok Lee, Ashkan Yazdani, Touradj Ebrahimi, Thierry Pun, Anton Nijholt, and Ioannis Patras. 2011. DEAP: A database for emotion analysis; using physiological signals. IEEE Transactions on Affective Computing 3, 1 (2011), 18–31.
[127]
Dimitrios Kollias, Attila Schulc, Elnar Hajiyev, and Stefanos Zafeiriou. 2020. Analysing affective behavior in the first ABAW 2020 competition. arXiv preprint arXiv:2001.11409 (2020).
[128]
Dimitrios Kollias, Panagiotis Tzirakis, Mihalis A. Nicolaou, Athanasios Papaioannou, Guoying Zhao, Björn Schuller, Irene Kotsia, and Stefanos Zafeiriou. 2019. Deep affect prediction in-the-wild: Aff-Wild database and challenge, deep architectures, and beyond. International Journal of Computer Vision 127, 6-7 (2019), 907–929.
[129]
Dimitrios Kollias and Stefanos Zafeiriou. 2019. Expression, affect, action unit recognition: Aff-Wild2, multi-task learning and ArcFace. arXiv preprint arXiv:1910.04855 (2019).
[130]
Efstratios Kontopoulos, Christos Berberidis, Theologos Dergiades, and Nick Bassiliades. 2013. Ontology-based sentiment analysis of Twitter posts. Expert Systems with Applications 40, 10 (2013), 4065–4074.
[131]
Efthymios Kouloumpis, Theresa Wilson, and Johanna Moore. 2011. Twitter sentiment analysis: The good the bad and the OMG!. In Fifth International AAAI Conference on Weblogs and Social Media. Citeseer.
[132]
Janez Kranjc, Jasmina Smailović, Vid Podpečan, Miha Grčar, Martin Žnidaršič, and Nada Lavrač. 2015. Active learning for sentiment analysis on data streams: Methodology and workflow implementation in the ClowdFlows platform. Information Processing & Management 51, 2 (2015), 187–203.
[133]
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems. 1097–1105.
[134]
Akshi Kumar and Arunima Jaiswal. 2020. Systematic literature review of sentiment analysis on Twitter using soft computing techniques. Concurrency and Computation: Practice and Experience 32, 1 (2020), e5107.
[135]
Jyotish Kumar and Jyoti kumar. 2016. Affective modelling of users in HCI using EEG. Procedia Computer Science 84 (2016), 107–114. DOI:Proceeding of the Seventh International Conference onIntelligent Human Computer Interaction (IHCI 2015).
[136]
Nitin Kumar, Kaushikee Khaund, and Shyamanta M. Hazarika. 2016. Bispectral analysis of EEG for emotion recognition. Procedia Computer Science 84 (2016), 31–35.
[137]
Ludmila I. Kuncheva. 2002. A theoretical study on six classifier fusion strategies. IEEE Transactions on Pattern Analysis and Machine Intelligence 24, 2 (2002), 281–286.
[138]
Alexey Kurakin, Zhengyou Zhang, and Zicheng Liu. 2012. A real time system for dynamic hand gesture recognition with a depth sensor. In 2012 Proceedings of the 20th European Signal Processing Conference (EUSIPCO). IEEE, 1975–1979.
[139]
Oh-Wook Kwon, Kwokleung Chan, Jiucang Hao, and Te-Won Lee. 2003. Emotion recognition by speech signals. In Eighth European Conference on Speech Communication and Technology.
[140]
Prashant Lahane and Arun Kumar Sangaiah. 2015. An approach to EEG based emotion recognition and classification using kernel density estimation. Procedia Computer Science 48 (2015), 574–581.
[141]
Yash Kumar Lal, Vaibhav Kumar, Mrinal Dhar, Manish Shrivastava, and Philipp Koehn. 2019. De-mixing sentiment from code-mixed text. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop. 371–377.
[142]
S. Lalitha, Sahruday Patnaik, T. H. Arvind, Vivek Madhusudhan, and Shikha Tripathi. 2014. Emotion recognition through speech signal for human-computer interaction. In 2014 Fifth International Symposium on Electronic System Design. IEEE, 217–218.
[143]
Oliver Langner, Ron Dotsch, Gijsbert Bijlstra, Daniel H. J. Wigboldus, Skyler T. Hawk, and Ad van Knippenberg. 2010. Presentation and Validation of the Radboud Faces Database. Cognition and Emotion 24, 8 (2010), 1377–1388.
[144]
Raymond Y. K. Lau, Chunping Li, and Stephen S. Y. Liao. 2014. Social analytics: Learning fuzzy product ontologies for aspect-oriented sentiment analysis. Decision Support Systems 65 (2014), 80–94.
[145]
Chul Min Lee and Shrikanth S. Narayanan. 2005. Toward detecting emotions in spoken dialogs. IEEE Transactions on Speech and Audio Processing 13, 2 (2005), 293–303.
[146]
Sophia Yat Mei Lee, Ying Chen, and Chu-Ren Huang. 2010. A text-driven rule-based system for emotion cause detection. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text. 45–53.
[147]
Sophia Yat Mei Lee and Zhongqing Wang. 2015. Multi-view learning for emotion detection in code-switching texts. In 2015 International Conference on Asian Language Processing (IALP). IEEE, 90–93.
[148]
Jingsheng Lei, Yanghui Rao, Qing Li, Xiaojun Quan, and Liu Wenyin. 2014. Towards building a social emotion detection system for online news. Future Generation Computer Systems 37 (2014), 438–448.
[149]
Clement Levallois. 2013. Umigon: Sentiment analysis for tweets based on lexicons and heuristics. In Proceedings of the International Workshop on Semantic Evaluation, SemEval, Vol. 13, 1–4.
[150]
Hao Li and Wei Lu. 2017. Learning latent sentiment scopes for entity-level sentiment analysis. In AAAI. 3482–3489.
[151]
Jiwei Li and Eduard Hovy. 2017. Reflections on sentiment/opinion analysis. In A Practical Guide to Sentiment Analysis. Springer, 41–59.
[152]
Li-Jia Li, Hao Su, Li Fei-Fei, and Eric P. Xing. 2010. Object bank: A high-level image representation for scene classification & semantic feature sparsification. In Advances in Neural Information Processing Systems. 1378–1386.
[153]
Xiaodong Li, Haoran Xie, Li Chen, Jianping Wang, and Xiaotie Deng. 2014. News impact on stock price return via sentiment analysis. Knowledge-Based Systems 69 (2014), 14–23.
[154]
Zheng Lian, Ya Li, Jianhua Tao, and Jian Huang. 2018. Investigation of multimodal features, classifiers and fusion methods for emotion recognition. arXiv preprint arXiv:1809.06225 (2018).
[155]
Jiguang Liang, Ping Liu, Jianlong Tan, and Shuo Bai. 2014. Sentiment classification based on AS-LDA model. In ITQM. 511–516.
[156]
Jing Liao, Yaxin Bi, and Chris Nugent. 2010. Using the Dempster–Shafer theory of evidence with a revised lattice structure for activity recognition. IEEE Transactions on Information Technology in Biomedicine 15, 1 (2010), 74–82.
[157]
Yankai Lin, Haozhe Ji, Zhiyuan Liu, and Maosong Sun. 2018. Denoising distantly supervised open-domain question answering. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 1736–1745.
[158]
Yi-Lin Lin and Gang Wei. 2005. Speech emotion recognition based on HMM and SVM. In 2005 International Conference on Machine Learning and Cybernetics, Vol. 8. IEEE, 4898–4901.
[159]
Zhe Lin, Zhuolin Jiang, and Larry S. Davis. 2009. Recognizing actions by shape-motion prototype trees. In 2009 IEEE 12th International Conference on Computer Vision. IEEE, 444–451.
[160]
Bing Liu. 2010. Sentiment analysis and subjectivity. Handbook of Natural Language Processing 2, 2010 (2010), 627–666.
[161]
Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis Lectures on Human Language Technologies 5, 1 (2012), 1–167.
[162]
Bing Liu and Lei Zhang. 2012. A survey of opinion mining and sentiment analysis. In Mining Text Data. Springer, 415–463.
[163]
Cheng-Lin Liu, Fei Yin, Da-Han Wang, and Qiu-Feng Wang. 2013. Online and offline handwritten Chinese character recognition: Benchmarking on new databases. Pattern Recognition 46, 1 (2013), 155–162.
[164]
Jiamin Liu, Yuanqi Su, and Yuehu Liu. 2017. Multi-modal emotion recognition with temporal-band attention based on LSTM-RNN. In Pacific Rim Conference on Multimedia. Springer, 194–204.
[165]
Li Liu and Ling Shao. 2013. Learning discriminative representations from RGB-D video data. In Twenty-Third International Joint Conference on Artificial Intelligence.
[166]
Nianjun Liu and Brian C. Lovell. 2003. Gesture classification using hidden Markov models and Viterbi path counting. In VIIth Digital Image Computing: Techniques and Applications. 273–282.
[167]
Qun Liu, Edward Collier, and Supratik Mukhopadhyay. 2019. PCGAN-CHAR: Progressively trained classifier generative adversarial networks for classification of noisy handwritten Bangla characters. In International Conference on Asian Digital Libraries. Springer, 3–15.
[168]
Qian Liu, Zhiqiang Gao, Bing Liu, and Yuanlin Zhang. 2015. Automated rule selection for aspect extraction in opinion mining. In Twenty-Fourth International Joint Conference on Artificial Intelligence.
[169]
Shuhua Monica Liu and Jiun-Hung Chen. 2015. A multi-label classification based approach for sentiment classification. Expert Systems with Applications 42, 3 (2015), 1083–1093.
[170]
Steven R. Livingstone and Frank A. Russo. 2018. The Ryerson audio-visual database of emotional speech and song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PloS One 13, 5 (2018), e0196391.
[171]
David Llorens, Federico Prat, Andrés Marzal, Juan Miguel Vilar, María José Castro, Juan-Carlos Amengual, Sergio Barrachina, Antonio Castellanos, Salvador Espana Boquera, Jon Ander Gómez, J. Gorbe, A. Gordo, V. Palazón, G. Peris, R. Ramos-Garijo, and F. Zamora. 2008. The UJIpenchars database: A pen-based database of isolated handwritten characters. In LREC.
[172]
Elena Lloret, Alexandra Balahur, Manuel Palomar, and Andrés Montoyo. 2009. Towards building a competitive opinion summarization system: Challenges and keys. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Student Research Workshop and Doctoral Consortium. 72–77.
[173]
Claudio Loconsole, Catarina Runa Miranda, Gustavo Augusto, Antonio Frisoli, and Verónica Orvalho. 2014. Real-time emotion recognition novel method for geometrical facial features extraction. In 2014 International Conference on Computer Vision Theory and Applications (VISAPP), Vol. 1. IEEE, 378–385.
[174]
Irene Lopatovska. 2009. Emotional Aspects of the Online Information Retrieval Process. Ph. D. Dissertation. Rutgers University-Graduate School-New Brunswick.
[175]
Irene Lopatovska and Ioannis Arapakis. 2011. Theories, methods and current research on emotions in library and information science, information retrieval and human–computer interaction. Information Processing & Management 47, 4 (2011), 575–592.
[176]
Patrick Lucey, Jeffrey F. Cohn, Takeo Kanade, Jason Saragih, Zara Ambadar, and Iain Matthews. 2010. The extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition-Workshops. IEEE, 94–101.
[177]
Chunling Ma, Helmut Prendinger, and Mitsuru Ishizuka. 2005. Emotion estimation and reasoning based on affective textual interaction. In International Conference on Affective Computing and Intelligent Interaction. Springer, 622–628.
[178]
Noor Alhusna Madzlan, JingGuang Han, Francesca Bonin, and Nick Campbell. 2014. Automatic recognition of attitudes in video blogs–prosodic and visual feature analysis. In Fifteenth Annual Conference of the International Speech Communication Association. 1826–1830.
[179]
Upal Mahbub, Hafiz Imtiaz, Tonmoy Roy, Md. Shafiur Rahman, and Md. Atiqur Rahman Ahad. 2013. A template matching approach of one-shot-learning gesture recognition. Pattern Recognition Letters 34, 15 (2013), 1780–1788.
[180]
Fawaz H. H. Mahyoub, Muazzam A. Siddiqui, and Mohamed Y. Dahab. 2014. Building an Arabic sentiment lexicon using semi-supervised learning. Journal of King Saud University-Computer and Information Sciences 26, 4 (2014), 417–424.
[181]
Anima Majumder, Laxmidhar Behera, and Venkatesh K. Subramanian. 2014. Emotion recognition from geometric facial features using self-organizing map. Pattern Recognition 47, 3 (2014), 1282–1293.
[182]
Lori Malatesta, Stylianos Asteriadis, George Caridakis, Asimina Vasalou, and Kostas Karpouzis. 2016. Associating gesture expressivity with affective representations. Engineering Applications of Artificial Intelligence 51 (2016), 124–135.
[183]
Muharram Mansoorizadeh and Nasrollah Moghaddam Charkari. 2010. Multimodal information fusion application to human emotion recognition from face and speech. Multimedia Tools and Applications 49, 2 (2010), 277–297.
[184]
Hubert Mara, Jan Hering, and Susanne Kromker. 2009. GPU based optical character transcription for ancient inscription recognition. In 2009 15th International Conference on Virtual Systems and Multimedia. IEEE, 154–159.
[185]
Giulio Marin, Fabio Dominio, and Pietro Zanuttigh. 2014. Hand gesture recognition with leap motion and Kinect devices. In 2014 IEEE International Conference on Image Processing (ICIP). IEEE, 1565–1569.
[186]
Giulio Marin, Fabio Dominio, and Pietro Zanuttigh. 2016. Hand gesture recognition with jointly calibrated leap motion and depth sensor. Multimedia Tools and Applications 75, 22 (2016), 14991–15015.
[187]
Ali Marstawi, Nurfadhlina Mohd Sharef, Teh Noranis Mohd Aris, and Aida Mustapha. 2017. Ontology-based aspect extraction for an improved sentiment analysis in summarization of product reviews. In Proceedings of the 8th International Conference on Computer Modeling and Simulation. 100–104.
[188]
Marcin Marszalek, Ivan Laptev, and Cordelia Schmid. 2009. Actions in context. In 2009 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2929–2936.
[189]
Olivier Martin, Irene Kotsia, Benoit Macq, and Ioannis Pitas. 2006. The eNTERFACE’05 audio-visual emotion database. In 22nd International Conference on Data Engineering Workshops (ICDEW’06). IEEE, 8–8.
[190]
Alexander Mathews, Lexing Xie, and Xuming He. 2015. SentiCap: Generating image descriptions with sentiments. arXiv preprint arXiv:1510.01431 (2015).
[191]
Ana Matran-Fernandez and Riccardo Poli. 2017. Towards the automated localisation of targets in rapid image-sifting by collaborative brain-computer interfaces. PLoS One 12, 5 (2017), e0178498.
[192]
S. Mohammad Mavadati, Mohammad H. Mahoor, Kevin Bartlett, Philip Trinh, and Jeffrey F. Cohn. 2013. DISFA: A spontaneous facial action intensity database. IEEE Transactions on Affective Computing 4, 2 (2013), 151–160.
[193]
Diana G. Maynard and Mark A. Greenwood. 2014. Who cares about sarcastic tweets? Investigating the impact of sarcasm on sentiment analysis. In LREC 2014 Proceedings. ELRA, 4238–4243.
[194]
Daniel McDuff, Rana El Kaliouby, Jeffrey F. Cohn, and Rosalind W. Picard. 2014. Predicting ad liking and purchase intent: Large-scale analysis of facial responses to ads. IEEE Transactions on Affective Computing 6, 3 (2014), 223–235.
[195]
Qiaozhu Mei, Xu Ling, Matthew Wondra, Hang Su, and ChengXiang Zhai. 2007. Topic sentiment mixture: Modeling facets and opinions in Weblogs. In Proceedings of the 16th International Conference on World Wide Web. 171–180.
[196]
Franziska Meier, Evangelos Theodorou, Freek Stulp, and Stefan Schaal. 2011. Movement segmentation using a primitive library. In 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 3407–3412.
[197]
Alvise Memo and Pietro Zanuttigh. 2018. Head-mounted gesture controlled interface for human-computer interaction. Multimedia Tools and Applications 77, 1 (2018), 27–53.
[198]
Jesús P. Mena-Chalco, Luiz Velho, and R. M. Cesar Junior. 2011. 3D human face reconstruction using principal components spaces. In Proceedings of WTD SIBGRAPI Conference on Graphics, Patterns and Images, Vol. 6.
[199]
Ross Messing, Chris Pal, and Henry Kautz. 2009. Activity recognition using the velocity histories of tracked keypoints. In 2009 IEEE 12th International Conference on Computer Vision. IEEE, 104–111.
[200]
Y. Mikio. 1996. Interface system based on hand gestures and verbal expressions for 3-D shape generation, Terebijon Gakkaishi. Journal of the Institute of Television Engineers of Japan. v50 i10 (1996), 1482–1488.
[201]
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013).
[202]
Memo Alvise, Ludovico Minto, and Pietro Zanuttigh. 2015. Exploiting silhouette descriptors and synthetic data for hand gesture recognition. In Smart Tools and Apps for Graphics-Eurographics Italian Chapter Conference. Eurographics, 15–23.
[203]
Saif Mohammad. 2012. # Emotional tweets. In * SEM 2012: The First Joint Conference on Lexical and Computational Semantics–Volume 1: Proceedings of the Main Conference and the Shared Task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012). 246–255.
[204]
Saif Mohammad. 2018. Obtaining reliable human ratings of valence, arousal, and dominance for 20,000 English words. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 174–184.
[205]
Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. SemEval-2018 Task 1: Affect in tweets. In Proceedings of the 12th International Workshop on Semantic Evaluation. 1–17.
[206]
Saif Mohammad and Peter Turney. 2010. Emotions evoked by common words and phrases: Using Mechanical Turk to create an emotion lexicon. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text. 26–34.
[207]
Saif M. Mohammad. 2016. Sentiment analysis: Detecting valence, emotions, and other affectual states from text. In Emotion Measurement. Elsevier, 201–237.
[208]
Saif M. Mohammad and Felipe Bravo-Marquez. 2017. Emotion intensities in tweets. arXiv preprint arXiv:1708.03696 (2017).
[209]
Saif M. Mohammad and Peter D. Turney. 2013. Crowdsourcing a word–emotion association lexicon. Computational Intelligence 29, 3 (2013), 436–465.
[210]
Ali Mollahosseini, Behzad Hasani, and Mohammad H. Mahoor. 2017. AffectNet: A database for facial expression, valence, and arousal computing in the wild. IEEE Transactions on Affective Computing 10, 1 (2017), 18–31.
[211]
Arturo Montejo-Raez, Manuel Carlos Díaz-Galiano, Fernando Martinez-Santiago, and L. A. Ureña-López. 2014. Crowd explicit sentiment analysis. Knowledge-Based Systems 69 (2014), 134–139.
[212]
Arturo Montejo-Ráez, Eugenio Martínez-Cámara, M. Teresa Martín-Valdivia, and L. Alfonso Ureña-López. 2014. Ranked WordNet graph for sentiment polarity classification in Twitter. Computer Speech & Language 28, 1 (2014), 93–107.
[213]
G. B. Moody. 2008. The physionet/computers in cardiology challenge 2008: T-wave alternans. In 2008 Computers in Cardiology. IEEE, 505–508.
[214]
Rodrigo Moraes, João. Francisco Valiati, and Wilson P Gavião Neto. 2013. Document-level sentiment classification: An empirical comparison between SVM and ANN. Expert Systems with Applications 40, 2 (2013), 621–633.
[215]
Louis-Philippe Morency, Rada Mihalcea, and Payal Doshi. 2011. Towards multimodal sentiment analysis: Harvesting opinions from the web. In Proceedings of the 13th International Conference on Multimodal Interfaces. 169–176.
[216]
S. Mu-Chun. 2003. A neural network-based approach to recognizing 3D arm movement. Biomedical Engineering Application. Basis and Communication 15, 1 (2003), 17–26.
[217]
Mahmoud Nabil, Mohamed Aly, and Amir Atiya. 2015. ASTD: Arabic sentiment tweets dataset. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. 2515–2519.
[218]
Ramesh Nagarajan, Zhigang Fan, and Shivang Patel. 2008. Method for automated image indexing and retrieval. US Patent 7,324,711.
[219]
Fatma Nasoz, Kaye Alvarez, Christine L. Lisetti, and Neal Finkelstein. 2004. Emotion recognition from physiological signals using wireless sensors for presence technologies. Cognition, Technology & Work 6, 1 (2004), 4–14.
[220]
Ishna Neamatullah, Margaret M. Douglass, H. Lehman Li-wei, Andrew Reisner, Mauricio Villarroel, William J. Long, Peter Szolovits, George B. Moody, Roger G. Mark, and Gari D. Clifford. 2008. Automated de-identification of free-text medical records. BMC Medical Informatics and Decision Making 8, 1 (2008), 32.
[221]
Daniel Neiberg, Kjell Elenius, and Kornel Laskowski. 2006. Emotion recognition in spontaneous speech using GMMs. In Ninth International Conference on Spoken Language Processing. 809–812.
[222]
Shahla Nemati and Ahmad Reza Naghsh-Nilchi. 2017. Exploiting evidential theory in the fusion of textual, audio, and visual modalities for affective music video retrieval. In 2017 3rd International Conference on Pattern Recognition and Image Analysis (IPRIA). IEEE, 222–228.
[223]
Alena Neviarouskaya, Helmut Prendinger, and Mitsuru Ishizuka. 2009. Compositionality principle in recognition of fine-grained emotions from text. In Third International AAAI Conference on Weblogs and Social Media.
[224]
Alena Neviarouskaya, Helmut Prendinger, and Mitsuru Ishizuka. 2010. @AM: Textual attitude analysis model. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text. 80–88.
[225]
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang. 2018. MS MARCO: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268v3 (2018).
[226]
Mihalis A. Nicolaou, Yannis Panagakis, Stefanos Zafeiriou, and Maja Pantic. 2014. Robust canonical correlation analysis: Audio-visual fusion for learning continuous interest. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 1522–1526.
[227]
Vangelis P. Oikonomou, Georgios Liaros, Kostantinos Georgiadis, Elisavet Chatzilari, Katerina Adam, Spiros Nikolopoulos, and Ioannis Kompatsiaris. 2016. Comparative evaluation of state-of-the-art algorithms for SSVEP-based BCIs. arXiv preprint arXiv:1602.00904 (2016).
[228]
Alvaro Ortigosa, José M. Martín, and Rosa M. Carro. 2014. Sentiment analysis in Facebook and its application to e-learning. Computers in Human Behavior 31 (2014), 527–541.
[229]
Olutobi Owoputi, Brendan O’Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A. Smith. 2013. Improved part-of-speech tagging for online conversational text with word clusters. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 380–390.
[230]
Marco Paleari and Benoit Huet. 2008. Toward emotion indexing of multimedia excerpts. In 2008 International Workshop on Content-Based Multimedia Indexing. IEEE, 425–432.
[231]
Yixiong Pan, Peipei Shen, and Liping Shen. 2012. Speech emotion recognition using support vector machine. International Journal of Smart Home 6, 2 (2012), 101–108.
[232]
Pavitra Patel, Anand Chaudhari, Ruchita Kale, and M. Pund. 2017. Emotion recognition from speech with Gaussian mixture models & via boosted GMM. International Journal of Research In Science & Engineering 3 (2017), 47–53.
[233]
Reinhard Pekrun, Thomas Goetz, Anne C. Frenzel, Petra Barchfeld, and Raymond P. Perry. 2011. Measuring emotions in students’ learning and performance: The achievement emotions questionnaire (AEQ). Contemporary Educational Psychology 36, 1 (2011), 36–48.
[234]
Luis-Alberto Perez-Gaspar, Santiago-Omar Caballero-Morales, and Felipe Trujillo-Romero. 2016. Multimodal emotion recognition with evolutionary computation for human-robot interaction. Expert Systems with Applications 66 (2016), 42–61.
[235]
Verónica Pérez-Rosas, Rada Mihalcea, and Louis-Philippe Morency. 2013. Utterance-level multimodal sentiment analysis. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 973–982.
[236]
Isidoros Perikos and Ioannis Hatzilygeroudis. 2013. Recognizing emotion presence in natural language sentences. In International Conference on Engineering Applications of Neural Networks. Springer, 30–39.
[237]
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365 (2018).
[238]
Lionel Pigou, Sander Dieleman, Pieter-Jan Kindermans, and Benjamin Schrauwen. 2014. Sign language recognition using convolutional neural networks. In European Conference on Computer Vision. Springer, 572–578.
[239]
Soujanya Poria, Erik Cambria, Rajiv Bajpai, and Amir Hussain. 2017. A review of affective computing: From unimodal analysis to multimodal fusion. Information Fusion 37 (2017), 98–125.
[240]
Soujanya Poria, Erik Cambria, Newton Howard, Guang-Bin Huang, and Amir Hussain. 2016. Fusing audio, visual and textual clues for sentiment analysis from multimodal content. Neurocomputing 174 (2016), 50–59.
[241]
Soujanya Poria, Erik Cambria, Amir Hussain, and Guang-Bin Huang. 2015. Towards an intelligent framework for multimodal affective data analysis. Neural Networks 63 (2015), 104–116.
[242]
Soujanya Poria, Iti Chaturvedi, Erik Cambria, and Amir Hussain. 2016. Convolutional MKL based multimodal emotion recognition and sentiment analysis. In 2016 IEEE 16th International Conference on Data Mining (ICDM). IEEE, 439–448.
[243]
Soujanya Poria, Alexander Gelbukh, Amir Hussain, Newton Howard, Dipankar Das, and Sivaji Bandyopadhyay. 2013. Enhanced SenticNet with affective labels for concept-based opinion mining. IEEE Intelligent Systems 28, 2 (2013), 31–38.
[244]
Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, and Rada Mihalcea. 2020. Beneath the Tip of the Iceberg: Current challenges and new directions in sentiment analysis research. arXiv preprint arXiv:2005.00357 (2020).
[245]
Soujanya Poria, Haiyun Peng, Amir Hussain, Newton Howard, and Erik Cambria. 2017. Ensemble application of convolutional neural networks and multiple kernel learning for multimodal sentiment analysis. Neurocomputing 261 (2017), 217–230.
[246]
Daniel Preoţiuc-Pietro, H. Andrew Schwartz, Gregory Park, Johannes Eichstaedt, Margaret Kern, Lyle Ungar, and Elisabeth Shulman. 2016. Modelling valence and arousal in Facebook posts. In Proceedings of the 7th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis. 9–15.
[247]
Michal Ptaszynski, Rafal Rzepka, Kenji Araki, and Yoshio Momouchi. 2014. Automatically annotating a five-billion-word corpus of Japanese blogs for sentiment and affect analysis. Computer Speech & Language 28, 1 (2014), 38–55.
[248]
Rui Qiao, Chunmei Qing, Tong Zhang, Xiaofen Xing, and Xiangmin Xu. 2017. A novel deep-learning based framework for multi-subject emotion recognition. In 2017 4th International Conference on Information, Cybernetics and Computational Social Systems (ICCSS). IEEE, 181–185.
[249]
Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2011. Opinion word expansion and target extraction through double propagation. Computational Linguistics 37, 1 (2011), 9–27. DOI:. arXiv:https://doi.org/10.1162/coli_a_00034
[250]
Jiezhong Qiu, Jian Tang, Hao Ma, Yuxiao Dong, Kuansan Wang, and Jie Tang. 2018. DeepInf: Social influence prediction with deep learning. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2110–2119.
[251]
Xiaojun Quan, Qifan Wang, Ying Zhang, Luo Si, and Liu Wenyin. 2015. Latent discriminative models for social emotion detection with emotional dependency. ACM Transactions on Information Systems (TOIS) 34, 1 (2015), 1–19.
[252]
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250 (2016).
[253]
Ranjeeta Rana and Vaishali Kolhe. 2015. Analysis of students emotion for Twitter data using Naïve Bayes and non linear support vector machine approachs. International Journal on Recent and Innovation Trends in Computing and Communication. ISSN (2015), 2321–8169.
[254]
Pramila Rani, Changchun Liu, Nilanjan Sarkar, and Eric Vanman. 2006. An empirical study of machine learning techniques for affect recognition in human–robot interaction. Pattern Analysis and Applications 9, 1 (2006), 58–69.
[255]
K. Sreenivasa Rao, V. K. Saroj, Sudhamay Maity, and Shashidhar G. Koolagudi. 2011. Recognition of emotions from video using neural network models. Expert Systems with Applications 38, 10 (2011), 13181–13185.
[256]
Yanghui Rao, Haoran Xie, Jun Li, Fengmei Jin, Fu Lee Wang, and Qing Li. 2016. Social emotion classification of short text via topic-level maximum entropy model. Information & Management 53, 8 (2016), 978–986.
[257]
Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. Transactions of the Association for Computational Linguistics 7 (2019), 249–266.
[258]
Zhou Ren, Jingjing Meng, Junsong Yuan, and Zhengyou Zhang. 2011. Robust hand gesture recognition with Kinect sensor. In Proceedings of the 19th ACM International Conference on Multimedia. 759–760.
[259]
Douglas A. Reynolds and Richard C. Rose. 1995. Robust text-independent speaker identification using Gaussian mixture speaker models. IEEE Transactions on Speech and Audio Processing 3, 1 (1995), 72–83.
[260]
Sven Rill, Dirk Reinel, Jörg Scheidt, and Roberto V. Zicari. 2014. PoliTwi: Early detection of emerging political topics on Twitter and the impact on concept-level sentiment analysis. Knowledge-Based Systems 69 (2014), 24–33.
[261]
Mikel D. Rodriguez, Javed Ahmed, and Mubarak Shah. 2008. Action MACH a spatio-temporal maximum average correlation height filter for action recognition. In 2008 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 1–8.
[262]
Wenge Rong, Yifan Nie, Yuanxin Ouyang, Baolin Peng, and Zhang Xiong. 2014. Auto-encoder based bagging architecture for sentiment analysis. Journal of Visual Languages & Computing 25, 6 (2014), 840–849.
[263]
Verónica Pérez Rosas, Rada Mihalcea, and Louis-Philippe Morency. 2013. Multimodal sentiment analysis of Spanish online videos. IEEE Intelligent Systems 28, 3 (2013), 38–45.
[264]
Tanmoy Roy, Tshilidzi Marwala, and Snehashish Chakraverty. 2020. A survey of classification techniques in speech emotion recognition. Mathematical Methods in Interdisciplinary Sciences (2020), 33–48.
[265]
Simon Ruffieux, Denis Lalanne, and Elena Mugellini. 2013. ChAirGest: A challenge for multimodal mid-air gesture recognition for close HCI. In Proceedings of the 15th ACM on International Conference on Multimodal Interaction. 483–488.
[266]
Samir Rustamov, Elshan Mustafayev, and Mark A. Clements. 2013. Sentiment analysis using neuro-fuzzy and hidden Markov models of text. In 2013 Proceedings of IEEE Southeastcon. IEEE, 1–6.
[267]
Anwar Saeed, Ayoub Al-Hamadi, Robert Niese, and Moftah Elzobi. 2014. Frame-based facial expression recognition using geometrical features. Advances in Human-Computer Interaction (2014).
[268]
Kashfia Sailunaz, Manmeet Dhaliwal, Jon Rokne, and Reda Alhajj. 2018. Emotion detection from text and speech: A survey. Social Network Analysis and Mining 8, 1 (2018), 28.
[269]
David Sander, Didier Grandjean, and Klaus R. Scherer. 2005. A systems approach to appraisal mechanisms in emotion. Neural Networks 18, 4 (2005), 317–352.
[270]
Evangelos Sariyanidi, Hatice Gunes, and Andrea Cavallaro. 2014. Automatic analysis of facial affect: A survey of registration, representation, and recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 37, 6 (2014), 1113–1133.
[271]
Chandrima Sarkar, Sumit Bhatia, Arvind Agarwal, and Juan Li. 2014. Feature analysis for computational personality recognition using YouTube personality data set. In Proceedings of the 2014 ACM Multi Media on Workshop on Computational Personality Recognition. 11–14.
[272]
Kristina Schaaff and Tanja Schultz. 2009. Towards emotion recognition from electroencephalographic signals. In 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops. IEEE, 1–6.
[273]
Gerwin Schalk, Dennis J. McFarland, Thilo Hinterberger, Niels Birbaumer, and Jonathan R. Wolpaw. 2004. BCI2000: A general-purpose brain-computer interface (BCI) system. IEEE Transactions on Biomedical Engineering 51, 6 (2004), 1034–1043.
[274]
Jocelyn Scheirer, Raul Fernandez, Jonathan Klein, and Rosalind W. Picard. 2002. Frustrating the user on purpose: A step toward building an affective computer. Interacting with Computers 14, 2 (2002), 93–118.
[275]
Kim Schouten and Flavius Frasincar. 2015. Survey on aspect-level sentiment analysis. IEEE Transactions on Knowledge and Data Engineering 28, 3 (2015), 813–830.
[276]
Björn Schuller, Gerhard Rigoll, and Manfred Lang. 2004. Speech emotion recognition combining acoustic features and linguistic information in a hybrid support vector machine-belief network architecture. In 2004 IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol. 1. IEEE, I–577.
[277]
Nicu Sebe, Ira Cohen, Theo Gevers, and Thomas S. Huang. 2005. Multimodal approaches for emotion recognition: A survey. In Internet Imaging VI, Vol. 5670. International Society for Optics and Photonics, 56–67.
[278]
SeetaFace. 2020. SeetaFaceEngine. https://github.com/seetaface/SeetaFaceEngineAccessed: 2024-03-08.
[279]
Caifeng Shan, Shaogang Gong, and Peter W. McOwan. 2007. Beyond facial expressions: Learning human emotion from body gestures. In BMVC. 1–10.
[280]
Karan Sharma, Claudio Castellini, Egon L. van den Broek, Alin Albu-Schaeffer, and Friedhelm Schwenker. 2019. A dataset of continuous affect annotations and physiological signals for emotion analysis. Scientific Data 6, 1 (2019), 1–13.
[281]
Rakhee Sharma, Ngoc Le Tan, and Fatiha Sadat. 2018. Multimodal sentiment analysis using deep learning. In 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA). IEEE, 1475–1478.
[282]
J. I. Sheeba and K. Vivekanandan. 2014. A fuzzy logic based on sentiment classification. International Journal of Data Mining & Knowledge Management Process 4, 4 (2014), 27.
[283]
Masaki Shimizu, Takeharu Yoshizuka, and Hiroyuki Miyamoto. 2007. A gesture recognition system using stereo vision and arm model fitting. In International Congress Series, Vol. 1301. Elsevier, 89–92.
[284]
Dongmin Shin, Dongil Shin, and Dongkyoo Shin. 2017. Development of emotion recognition interface using complex EEG/ECG bio-signal for interactive contents. Multimedia Tools and Applications 76, 9 (2017), 11449–11470.
[285]
Shiv Naresh Shivhare, Shakun Garg, and Anitesh Mishra. 2015. EmotionFinder: Detecting emotion from blogs and textual documents. In International Conference on Computing, Communication & Automation. IEEE, 52–57.
[286]
Ali Hossam Shoeb. 2009. Application of Machine Learning to Epileptic Seizure Onset Detection and Treatment. Ph. D. Dissertation. Massachusetts Institute of Technology.
[287]
Lin Shu, Jinyan Xie, Mingyue Yang, Ziyi Li, Zhenqi Li, Dan Liao, Xiangmin Xu, and Xinyi Yang. 2018. A review of emotion recognition using physiological signals. Sensors 18, 7 (2018), 2074.
[288]
Lei Shu, Hu Xu, and Bing Liu. 2017. Lifelong learning CRF for supervised aspect extraction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Association for Computational Linguistics. 148–154. DOI:
[289]
Behjat Siddiquie, Dave Chisholm, and Ajay Divakaran. 2015. Exploiting multimodal affect and semantics to identify politically persuasive web videos. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction. 203–210.
[290]
Leonid Sigal, Alexandru O. Balan, and Michael J. Black. 2010. HumanEva: Synchronized video and motion capture dataset and baseline algorithm for evaluation of articulated human motion. International Journal of Computer Vision 87, 1-2 (2010), 4.
[291]
Marjan Sikandar. 2014. A survey for multimodal sentiment analysis methods. International Journal Computer Technology & Applications 5, 4 (2014), 1470–1476.
[292]
Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014).
[293]
Nikhil Kumar Singh, Deepak Singh Tomar, and Arun Kumar Sangaiah. 2020. Sentiment analysis: A review and comparative analysis over social media. Journal of Ambient Intelligence and Humanized Computing 11, 1 (2020), 97–117.
[294]
Shivendra Singh and Shajulin Benedict. 2019. Indian semi-acted facial expression (iSAFE) dataset for human emotions recognition. In International Symposium on Signal Processing and Intelligent Recognition Systems. Springer, 150–162.
[295]
Ian Sneddon, Margaret McRorie, Gary McKeown, and Jennifer Hanratty. 2011. The Belfast induced natural emotion database. IEEE Transactions on Affective Computing 3, 1 (2011), 32–41.
[296]
Mohammad Soleymani, David Garcia, Brendan Jou, Björn Schuller, Shih-Fu Chang, and Maja Pantic. 2017. A survey of multimodal sentiment analysis. Image and Vision Computing 65 (2017), 3–14.
[297]
Mohammad Soleymani, Maja Pantic, and Thierry Pun. 2011. Multimodal emotion recognition in response to videos. IEEE Transactions on Affective Computing 3, 2 (2011), 211–223.
[298]
Mingli Song, Mingyu You, Na Li, and Chun Chen. 2008. A robust multimodal approach for emotion recognition. Neurocomputing 71, 10-12 (2008), 1913–1920.
[299]
Yale Song, David Demirdjian, and Randall Davis. 2011. Tracking body and hands for gesture recognition: NATOPS aircraft handling signals database. In Face and Gesture 2011. IEEE, 500–506.
[300]
Tommaso Soru, Edgard Marx, Diego Moussallem, Gustavo Publio, André Valdestilhas, Diego Esteves, and Ciro Baron Neto. 2017. SPARQL as a foreign language. arXiv preprint arXiv:1708.07624 (2017).
[301]
Robert Speer and Catherine Havasi. 2013. ConceptNet 5: A large semantic network for relational knowledge. In The People’s Web Meets NLP. Springer, 161–176.
[302]
Carlo Strapparava and Rada Mihalcea. 2008. Learning to identify emotions in text. In Proceedings of the 2008 ACM Symposium on Applied Computing. 1556–1560.
[303]
Carlo Strapparava and Alessandro Valitutti. 2004. WordNet affect: An affective extension of WordNet. In LREC, Vol. 4. Citeseer, 40.
[304]
Boyuan Sun, Qiang Ma, Shanfeng Zhang, Kebin Liu, and Yunhao Liu. 2017. iSelf: Towards cold-start emotion labeling using transfer learning with smartphones. ACM Transactions on Sensor Networks (TOSN) 13, 4 (2017), 1–22.
[305]
Kai Sun, Dian Yu, Dong Yu, and Claire Cardie. 2018. Improving machine reading comprehension with general reading strategies. arXiv preprint arXiv:1810.13441 (2018).
[306]
Simon Šuster and Walter Daelemans. 2018. CliCR: A dataset of clinical case reports for machine reading comprehension. arXiv preprint arXiv:1803.09720 (2018).
[307]
Kevin T. Sweeney, Hasan Ayaz, Tomás E. Ward, Meltem Izzetoglu, Seán F. McLoone, and Banu Onaral. 2012. A methodology for validating artifact removal techniques for physiological signals. IEEE Transactions on Information Technology in Biomedicine 16, 5 (2012), 918–926.
[308]
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2818–2826.
[309]
E. Ke Tang, Ponnuthurai N. Suganthan, Xin Yao, and A. Kai Qin. 2005. Linear dimensionality reduction using relevance weighted LDA. Pattern Recognition 38, 4 (2005), 485–493.
[310]
Yi Tay, Luu Anh Tuan, and Siu Cheung Hui. 2017. Dyadic memory networks for aspect-based sentiment analysis. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management. 107–116.
[311]
Graham W. Taylor, Rob Fergus, Yann LeCun, and Christoph Bregler. 2010. Convolutional learning of spatio-temporal features. In European Conference on Computer Vision. Springer, 140–153.
[312]
Moritz Tenorth, Jan Bandouch, and Michael Beetz. 2009. The TUM kitchen data set of everyday manipulation activities for motion tracking and action recognition. In 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops. IEEE, 1089–1096.
[313]
Pratik Thakor and Sreela Sasi. 2015. Ontology-based sentiment analysis process for social media content. In INNS Conference on Big Data. 199–207.
[314]
Mike Thelwall, Kevan Buckley, Georgios Paltoglou, Di Cai, and Arvid Kappas. 2010. Sentiment strength detection in short informal text. Journal of the American Society for Information Science and Technology 61, 12 (2010), 2544–2558.
[315]
Martin Thoma. 2017. The HASYv2 dataset. arXiv preprint arXiv:1701.08380 (2017).
[316]
Du Tran and Alexander Sorokin. 2008. Human activity recognition with metric learning. In European Conference on Computer Vision. Springer, 548–561.
[317]
Vaibhav Tripathi, Aditya Joshi, and Pushpak Bhattacharyya. 2016. Emotion analysis from text: A survey. Center for Indian Language Technology Surveys 11, 8 (2016), 66–69.
[318]
Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2016. NewsQA: A machine comprehension dataset. arXiv preprint arXiv:1611.09830 (2016).
[319]
Oren Tsur, Dmitry Davidov, and Ari Rappoport. 2010. ICWSM-A great catchy name: Semi-supervised recognition of sarcastic sentences in online product reviews. In ICWSM. Washington, DC, 162–169.
[320]
Orizu Udochukwu and Yulan He. 2015. A rule-based approach to implicit emotion detection in text. In International Conference on Applications of Natural Language to Information Systems. Springer, 197–203.
[321]
Michel Valstar and Maja Pantic. 2010. Induced disgust, happiness and surprise: An addition to the MMI facial expression database. In Proc. 3rd Intern. Workshop on EMOTION (Satellite of LREC): Corpora for Research on Emotion and Affect. 65.
[322]
Annamária R. Várkonyi-Kóczy and Balázs Tusor. 2011. Human–computer interaction for smart environment applications using fuzzy hand posture and gesture models. IEEE Transactions on Instrumentation and Measurement 60, 5 (2011), 1505–1514.
[323]
Ashok Veeraraghavan, Rama Chellappa, and Amit K. Roy-Chowdhury. 2006. The function space of an activity. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), Vol. 1. IEEE, 959–968.
[324]
Gyanendra K. Verma and Uma Shanker Tiwary. 2014. Multimodal fusion framework: A multiresolution approach for emotion classification and recognition from physiological signals. NeuroImage 102 (2014), 162–172.
[325]
Dimitrios Ververidis and Constantine Kotropoulos. 2006. Emotional speech recognition: Resources, features, and methods. Speech Communication 48, 9 (2006), 1162–1181.
[326]
Jose Vicente, Robbert Zusterzeel, Lars Johannesen, Roberto Ochoa-Jimenez, Jay W. Mason, Carlos Sanabria, Sarah Kemp, Philip T. Sager, Vikram Patel, Murali K. Matta, Jiang Liu, Jeffry Florian, Christine Garnett, Norman Stockbridge, and David G. Strauss. 2019. Assessment of multi-ion channel block in a phase i randomized study design: Results of the Ci PA phase i ECG biomarker validation study. Clinical Pharmacology & Therapeutics 105, 4 (2019), 943–953.
[327]
David Vilares, Carlos Gómez-Rodríguez, and Miguel A. Alonso. 2017. Universal, unsupervised (rule-based), uncovered sentiment analysis. Knowledge-Based Systems 118 (2017), 45–55.
[328]
Da-Han Wang, Cheng-Lin Liu, Jin-Lun Yu, and Xiang-Dong Zhou. 2009. CASIA-OLHWDB1: A database of online handwritten Chinese characters. In 2009 10th International Conference on Document Analysis and Recognition. IEEE, 1206–1210.
[329]
Gang Wang, Zhu Zhang, Jianshan Sun, Shanlin Yang, and Catherine A. Larson. 2015. POS-RS: A random subspace method for sentiment classification based on part-of-speech analysis. Information Processing & Management 51, 4 (2015), 458–479.
[330]
Haohan Wang, Aaksha Meghawat, Louis-Philippe Morency, and Eric P. Xing. 2017. Select-additive learning: Improving generalization in multimodal sentiment analysis. In 2017 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 949–954.
[331]
Hongwei Wang, Fuzheng Zhang, Min Hou, Xing Xie, Minyi Guo, and Qi Liu. 2018. SHINE: Signed heterogeneous information network embedding for sentiment link prediction. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining. 592–600.
[332]
Jeen-Shing Wang, Che-Wei Lin, and Ya-Ting C. Yang. 2013. A k-nearest-neighbor classifier with heart rate variability feature-based transformation algorithm for driving stress recognition. Neurocomputing 116 (2013), 136–143.
[333]
Yequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao. 2016. Attention-based LSTM for aspect-level sentiment classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. 606–615.
[334]
Yujing Wang and Jianlin Mo. 2013. Emotion feature selection from physiological signals using tabu search. In 2013 25th Chinese Control and Decision Conference (CCDC). IEEE, 3148–3150.
[335]
Mayur Wankhade, Annavarapu Chandra Sekhara Rao, and Chaitanya Kulkarni. 2022. A survey on sentiment analysis methods, applications, and challenges. Artificial Intelligence Review (2022), 1–50.
[336]
Amy Beth Warriner, Victor Kuperman, and Marc Brysbaert. 2013. Norms of valence, arousal, and dominance for 13,915 English lemmas. Behavior Research Methods 45, 4 (2013), 1191–1207.
[337]
Daniel Weinland, Remi Ronfard, and Edmond Boyer. 2006. Free viewpoint action recognition using motion history volumes. Computer Vision and Image Understanding 104, 2-3 (2006), 249–257.
[338]
Wanhui Wen, Guangyuan Liu, Nanpu Cheng, Jie Wei, Pengchao Shangguan, and Wenjin Huang. 2014. Emotion recognition based on multi-variant correlation of physiological signals. IEEE Transactions on Affective Computing 5, 2 (2014), 126–140.
[339]
Michael Wiegand and Josef Ruppenhofer. 2015. Opinion holder and target extraction based on the induction of verbal categories. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning. 215–225.
[340]
L. Wikarsa and S. N. Thahir. 2015. A Text Mining Application of Emotion Classifications of Twitter’s Users Using Naïve Bayes Method International Conference on Wireless & Telematics. In 1st International Conference on Wireless and Telematics (ICWT), IEEE, 1–6.
[341]
Michael Wojatzki, Eugen Ruppert, Sarah Holschneider, Torsten Zesch, and Chris Biemann. 2017. GermEval 2017: Shared task on aspect-based sentiment in social media customer feedback. Proceedings of the GermEval (2017), 1–12.
[342]
Martin Wöllmer, Felix Weninger, Tobias Knaup, Björn Schuller, Congkai Sun, Kenji Sagae, and Louis-Philippe Morency. 2013. YouTube movie reviews: Sentiment analysis in an audio-visual context. IEEE Intelligent Systems 28, 3 (2013), 46–53.
[343]
Chi-En Wu and Richard Tzong-Han Tsai. 2014. Using relation selection to improve value propagation in a ConceptNet-based sentiment dictionary. Knowledge-Based Systems 69 (2014), 100–107.
[344]
Haoran Xie, Xiaodong Li, Tao Wang, Raymond Y. K. Lau, Tak-Lam Wong, Li Chen, Fu Lee Wang, and Qing Li. 2016. Incorporating sentiment into tag-based user profiles and resource profiles for personalized search in folksonomy. Information Processing & Management 52, 1 (2016), 61–72.
[345]
Lei Xu and L. Xu. 1992. A. Krzyzak, and C. Y. Suen. Methods of combining multiple classifiers and their applications to handwriting recognition. IEEE Transactions on Systems, Man, and Cybernetics 22, 3 (1992), 418–435.
[346]
Toshihiko Yamasaki, Yusuke Fukushima, Ryosuke Furuta, Litian Sun, Kiyoharu Aizawa, and Danushka Bollegala. 2015. Prediction of user ratings of oral presentations using label relations. In Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia. 33–38.
[347]
Gongjun Yan, Wu He, Jiancheng Shen, and Chuanyi Tang. 2014. A bilingual approach for conducting Chinese and English social media sentiment analysis. Computer Networks 75 (2014), 491–503.
[348]
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R. Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems. 5753–5763.
[349]
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600 (2018).
[350]
Hong Yu and Vasileios Hatzivassiloglou. 2003. Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing. 129–136.
[351]
Amir Zadeh. 2015. Micro-opinion sentiment intensity analysis and summarization in online videos. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction. 587–591.
[352]
Amir Zadeh, Minghai Chen, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2017. Tensor fusion network for multimodal sentiment analysis. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Copenhagen, Denmark, 1103–1114. DOI:
[353]
Amir Zadeh, Paul Pu Liang, Navonil Mazumder, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2018. Memory fusion network for multi-view sequential learning. arXiv preprint arXiv:1802.00927 (2018).
[354]
Amir Zadeh, Paul Pu Liang, Soujanya Poria, Prateek Vij, Erik Cambria, and Louis-Philippe Morency. 2018. Multi-attention recurrent network for human communication comprehension. Proceedings of the AAAI Conference on Artificial Intelligence, 32 1 (2018).
[355]
Amir Zadeh, Rowan Zellers, Eli Pincus, and Louis-Philippe Morency. 2016. MOSI: Multimodal corpus of sentiment intensity and subjectivity analysis in online opinion videos. arXiv preprint arXiv:1606.06259 (2016).
[356]
Amir Zadeh, Rowan Zellers, Eli Pincus, and Louis-Philippe Morency. 2016. Multimodal sentiment intensity analysis in videos: Facial gestures and verbal messages. IEEE Intelligent Systems 31, 6 (2016), 82–88.
[357]
AmirAli Bagher Zadeh, Paul Pu Liang, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2018. Multimodal language analysis in the wild: CMU-MOSEI dataset and interpretable dynamic fusion graph. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2236–2246.
[358]
Stefanos Zafeiriou, Dimitrios Kollias, Mihalis A. Nicolaou, Athanasios Papaioannou, Guoying Zhao, and Irene Kotsia. 2017. Aff-Wild: Valence and arousal ’in-the-wild’ challenge. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 34–41.
[359]
Zhihong Zeng, Maja Pantic, Glenn I. Roisman, and Thomas S. Huang. 2008. A survey of affect recognition methods: Audio, visual, and spontaneous expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence 31, 1 (2008), 39–58.
[360]
Jianhai Zhang, Ming Chen, Sanqing Hu, Yu Cao, and Robert Kozma. 2016. PNN for EEG-based emotion recognition. In 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 002319–002323.
[361]
Min-Ling Zhang. 2010. A k-nearest neighbor based multi-instance multi-label learning algorithm. In 2010 22nd IEEE International Conference on Tools with Artificial Intelligence, Vol. 2. IEEE, 207–212.
[362]
Shunxiang Zhang, Zhongliang Wei, Yin Wang, and Tao Liao. 2018. Sentiment analysis of Chinese micro-blog text based on extended sentiment dictionary. Future Generation Computer Systems 81 (2018), 395–403.
[363]
Yifan Zhang, Congqi Cao, Jian Cheng, and Hanqing Lu. 2018. EgoGesture: A new dataset and benchmark for egocentric hand gesture recognition. IEEE Transactions on Multimedia 20, 5 (2018), 1038–1050.
[364]
Guoying Zhao and Matti Pietikainen. 2007. Dynamic texture recognition using local binary patterns with an application to facial expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence 29, 6 (2007), 915–928.
[365]
Deyu Zhou, Xuan Zhang, Yin Zhou, Quan Zhao, and Xin Geng. 2016. Emotion distribution learning from texts. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. 638–647.
[366]
Song Chun Zhu and Alan L. Yuille. 1996. FORMS: A flexible object recognition and modelling system. International Journal of Computer Vision 20, 3 (1996), 187–212.
[367]
Cong Zong and Mohamed Chetouani. 2009. Hilbert-Huang transform based physiological signals analysis for emotion recognition. In 2009 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT). IEEE, 334–339.
[368]
Igor Zyma, Sergii Tukaev, Ivan Seleznov, Ken Kiyono, Anton Popov, Mariia Chernykh, and Oleksii Shpenkov. 2019. Electroencephalograms during mental arithmetic task performance. Data 4, 1 (2019), 14.

Cited By

View all
  • (2025)Affective knowledge assisted bi-directional learning for Multi-modal Aspect-based Sentiment AnalysisComputer Speech & Language10.1016/j.csl.2024.10175591(101755)Online publication date: Apr-2025
  • (2025)Temporal text-guided feedback-based progressive fusion network for multimodal sentiment analysisAlexandria Engineering Journal10.1016/j.aej.2024.12.117116(699-709)Online publication date: Mar-2025
  • (2025)Multi-level language interaction transformer for multimodal sentiment analysisJournal of Intelligent Information Systems10.1007/s10844-025-00923-xOnline publication date: 5-Feb-2025
  • Show More Cited By

Recommendations

Comments

Information & Contributors

Information

Published In

cover image ACM Computing Surveys
ACM Computing Surveys  Volume 56, Issue 9
September 2024
980 pages
EISSN:1557-7341
DOI:10.1145/3613649
Issue’s Table of Contents

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 25 April 2024
Online AM: 11 March 2024
Accepted: 07 March 2024
Revised: 03 February 2024
Received: 21 April 2023
Published in CSUR Volume 56, Issue 9

Check for updates

Author Tags

  1. Multimodal sentiment analysis
  2. sentiment classifier
  3. machine learning
  4. emotion detection
  5. modelling techniques

Qualifiers

  • Survey

Contributors

Other Metrics

Bibliometrics & Citations

Bibliometrics

Article Metrics

  • Downloads (Last 12 months)2,265
  • Downloads (Last 6 weeks)204
Reflects downloads up to 05 Mar 2025

Other Metrics

Citations

Cited By

View all
  • (2025)Affective knowledge assisted bi-directional learning for Multi-modal Aspect-based Sentiment AnalysisComputer Speech & Language10.1016/j.csl.2024.10175591(101755)Online publication date: Apr-2025
  • (2025)Temporal text-guided feedback-based progressive fusion network for multimodal sentiment analysisAlexandria Engineering Journal10.1016/j.aej.2024.12.117116(699-709)Online publication date: Mar-2025
  • (2025)Multi-level language interaction transformer for multimodal sentiment analysisJournal of Intelligent Information Systems10.1007/s10844-025-00923-xOnline publication date: 5-Feb-2025
  • (2024)Empowering Retail Dual Transformer-Based Profound Product Recommendation Using Multi-Model ReviewJournal of Organizational and End User Computing10.4018/JOEUC.35800236:1(1-23)Online publication date: 7-Nov-2024
  • (2024)AMTN: Attention-Enhanced Multimodal Temporal Network for Humor DetectionProceedings of the 5th on Multimodal Sentiment Analysis Challenge and Workshop: Social Perception and Humor10.1145/3689062.3689375(65-69)Online publication date: 28-Oct-2024
  • (2024)A Multimodal Unsupervised Clustering Model for Semantic Analysis Based on View Augmentation and Dynamic High-Quality Sample Selection2024 12th International Conference on Information Systems and Computing Technology (ISCTech)10.1109/ISCTech63666.2024.10845393(1-5)Online publication date: 8-Nov-2024
  • (2024)Research on Emotion Analysis of Multimodal Learning Supported by Deep Learning2024 IEEE 7th International Conference on Automation, Electronics and Electrical Engineering (AUTEEE)10.1109/AUTEEE62881.2024.10869743(476-480)Online publication date: 27-Dec-2024
  • (2024)Cross-Domain Sentiment Classification with Mere Contrastive Learning and Improved Method2024 3rd International Conference on Artificial Intelligence and Computer Information Technology (AICIT)10.1109/AICIT62434.2024.10730527(1-10)Online publication date: 20-Sep-2024
  • (2024)The Construction of a Digital Dissemination Platform for the Intangible Cultural Heritage Using Convolutional Neural Network ModelsHeliyon10.1016/j.heliyon.2024.e40986(e40986)Online publication date: Dec-2024
  • (2024)UEFN: Efficient uncertainty estimation fusion network for reliable multimodal sentiment analysisApplied Intelligence10.1007/s10489-024-06113-655:3Online publication date: 16-Dec-2024
  • Show More Cited By

View Options

Login options

Full Access

View options

PDF

View or Download as a PDF file.

PDF

eReader

View online with eReader.

eReader

Full Text

View this article in Full Text.

Full Text

Figures

Tables

Media

Share

Share

Share this Publication link

Share on social media