| Authors | Israa Khalaf Salman Al-Tameemi, Mohammad-Reza Feizi-Derakhshi, Saeid Pashazadeh, Mohammad Asadpour |
|---|---|
| Journal | Journal of Computational Social Science |
| Presented by | University of Tabriz |
| Volume number | 2024 |
| Paper Type | Full Paper |
| Published At | 2024/09/08 |
| Journal Grade | ISI (WOS) |
| Journal Type | Typographic |
| Journal Country | Singapore |
Abstract
Social media networks have become a significant aspect of people’s lives, serving as a platform for their ideas, opinions, and emotions. Consequently, automated sentiment analysis (SA) is critical for recognising people’s feelings in ways other information sources cannot. The analysis of these feelings has revealed various applications, including brand evaluations, YouTube film reviews, and healthcare applications. As social media continues to develop, people publish vast quantities of information in various formats, such as text, pictures, audio, and video. Thus, traditional SA algorithms have become limited, as they do not consider the expressiveness of modalities beyond text. By incorporating such characteristics from multiple content sources, these multimodal data streams provide new opportunities for improving results beyond text-based SA. Our study focuses on the forefront field of multimodal SA, which examines visual and textual data posted on social media networks, as many people prefer to express themselves through such content on these platforms. To serve as a resource for academics in this rapidly growing field, we introduce a comprehensive overview of textual and visual SA, including data pre-processing, feature extraction techniques, sentiment benchmark datasets, and the efficacy of multiple classification methodologies suited to each field. We also provide a brief introduction to the most frequently used data fusion strategies and a summary of existing research on visual–textual SA. Finally, we highlight the most significant challenges and investigate several important sentiment applications.
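
As a concrete illustration of the decision-level ("late") fusion strategies the abstract refers to, the sketch below combines per-class sentiment scores from a text classifier and an image classifier by a weighted average. It is a minimal, hypothetical example: the label set, weights, and classifier outputs are assumptions for illustration, not the method or results of the paper.

```python
# Minimal sketch of decision-level (late) fusion for visual-textual sentiment
# analysis. The labels, weights, and probabilities below are illustrative
# assumptions, not taken from the surveyed paper.
from dataclasses import dataclass
from typing import Dict

LABELS = ("negative", "neutral", "positive")


@dataclass
class ModalityPrediction:
    """Per-class probabilities produced by one unimodal classifier."""
    probs: Dict[str, float]


def late_fusion(text_pred: ModalityPrediction,
                image_pred: ModalityPrediction,
                text_weight: float = 0.6) -> str:
    """Fuse unimodal predictions via a weighted average of class scores."""
    image_weight = 1.0 - text_weight
    fused = {
        label: text_weight * text_pred.probs[label]
               + image_weight * image_pred.probs[label]
        for label in LABELS
    }
    # Return the label with the highest fused score.
    return max(fused, key=fused.get)


if __name__ == "__main__":
    # Hypothetical outputs of a text model and an image model for one post.
    text_pred = ModalityPrediction({"negative": 0.1, "neutral": 0.2, "positive": 0.7})
    image_pred = ModalityPrediction({"negative": 0.5, "neutral": 0.3, "positive": 0.2})
    print(late_fusion(text_pred, image_pred))  # -> "positive"
```

Feature-level (early) fusion, which concatenates textual and visual feature vectors before classification, is the other family of strategies commonly contrasted with the decision-level approach sketched here.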