Abstract:
Social platforms allow individuals to express their opinions and viewpoints through multiple information modalities. Effectively fusing these modalities can improve the accuracy of predicting users' emotional tendencies. However, existing multimodal sentiment analysis methods do not fully account for the emoticon information contained in text or for the semantic irrelevance between text and images, which degrades sentiment analysis performance. To address this problem, we propose an image-text multimodal emotion analysis model (ITMEA-FE) that fuses emoji features with text features to improve feature utilization. The model identifies the correlation between image and text information, reducing the adverse influence of emoji information and image-text semantic irrelevance on sentiment analysis. Finally, sentiment is classified through a multi-head attention network. Experimental results show that the proposed method achieves an accuracy of 75.32% and a Macro-F1 score of 75.11%, outperforming the benchmark models.
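The paper's code is not included here; the following is a minimal PyTorch sketch, based only on the abstract's description, of a fusion stage that concatenates emoji and text features and then correlates them with image features via multi-head attention. All names and dimensions (ITMEAFESketch, dim=768, num_heads=8, num_classes=3) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ITMEAFESketch(nn.Module):
    """Hypothetical sketch of the fusion described in the abstract:
    emoji features are merged with text features, image-text correlation
    is captured by cross-modal multi-head attention, and the attended
    representation is classified into sentiment classes."""

    def __init__(self, dim: int = 768, num_heads: int = 8, num_classes: int = 3):
        super().__init__()
        # Project the concatenated [text; emoji] features back to the shared dim.
        self.text_emoji_proj = nn.Linear(dim * 2, dim)
        # Cross-modal attention: text+emoji tokens query image region features.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, text_feats, emoji_feats, image_feats):
        # text_feats, emoji_feats: (batch, seq_len, dim)
        # image_feats: (batch, num_regions, dim)
        fused_text = self.text_emoji_proj(torch.cat([text_feats, emoji_feats], dim=-1))
        # Attention weights model image-text correlation, down-weighting
        # image regions that are semantically irrelevant to the text.
        attended, _ = self.cross_attn(fused_text, image_feats, image_feats)
        # Mean-pool over the sequence and predict the sentiment class.
        return self.classifier(attended.mean(dim=1))
```

As a usage sketch, `ITMEAFESketch()(torch.randn(2, 16, 768), torch.randn(2, 16, 768), torch.randn(2, 49, 768))` returns a `(2, 3)` tensor of sentiment logits; in practice the text, emoji, and image features would come from pretrained encoders, which the abstract does not specify.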
Published in: 2024 27th International Conference on Computer Supported Cooperative Work in Design (CSCWD)
Date of Conference: 08-10 May 2024
Date Added to IEEE Xplore: 10 July 2024