Multi-Model Fusion Framework Using Deep Learning for Visual-Textual Sentiment Classification

Authors: Israa K. Salman Al-Tameemi, Mohammad-Reza Feizi-Derakhshi, Saeid Pashazadeh, Mohammad Asadpour
Journal: Computers, Materials and Continua
Affiliation: University of Tabriz
Pages: 2145-2177
Issue: 2
Volume: 76
Article type: Full Paper
Publication date: 2024-07-17
Journal ranking: ISI
Journal format: Print
Country of publication: United States of America

Abstract

Multimodal Sentiment Analysis (SA) is gaining popularity due to its broad application potential. Existing studies have focused on the SA of single modalities, such as texts or photos, posing challenges in effectively handling social media data with multiple modalities. Moreover, most multimodal research has concentrated on merely combining the two modalities rather than exploring their complex correlations, leading to unsatisfactory sentiment classification results. Motivated by this, we propose a new visual-textual sentiment classification model named Multi-Model Fusion (MMF), which uses a mixed fusion framework for SA to effectively capture the essential information and the intrinsic relationship between the visual and textual content. The proposed model comprises three deep neural networks. Two different neural networks are proposed to extract the most emotionally relevant aspects of image and text data, so that more discriminative features are gathered for accurate sentiment classification. Then, a multichannel joint fusion model with a self-attention technique is proposed to exploit the intrinsic correlation between visual and textual characteristics and obtain emotionally rich information for joint sentiment classification. Finally, the results of the three classifiers are integrated using a decision fusion scheme to improve the robustness and generalizability of the proposed model. An interpretable visual-textual sentiment classification model is further developed using Local Interpretable Model-agnostic Explanations (LIME) to ensure the model's explainability and resilience. The proposed MMF model has been tested on four real-world sentiment datasets, achieving 99.78% accuracy on Binary_Getty (BG), 99.12% on Binary_iStock (BIS), 95.70% on Twitter, and 79.06% on the Multi-View Sentiment Analysis (MVSA) dataset.
These results demonstrate the superior performance of our MMF model compared to single-model approaches and current state-of-the-art techniques based on model evaluation criteria.
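To illustrate the decision-fusion step described above, the following sketch combines the class-probability outputs of the three classifiers (image, text, and joint) by weighted averaging. The function name, the example probabilities, and the weighted-averaging rule are assumptions for illustration; the abstract does not specify the exact fusion scheme used in MMF.

```python
import numpy as np

def decision_fusion(prob_image, prob_text, prob_joint, weights=(1.0, 1.0, 1.0)):
    """Fuse per-classifier class probabilities by weighted averaging.

    Weighted averaging is one common decision-fusion rule; the paper's
    exact scheme may differ. Each input is a softmax probability vector
    over the sentiment classes.
    """
    probs = np.stack([prob_image, prob_text, prob_joint])   # shape (3, n_classes)
    w = np.asarray(weights, dtype=float)[:, None]           # shape (3, 1)
    fused = (w * probs).sum(axis=0) / w.sum()               # normalized average
    return int(np.argmax(fused)), fused

# Hypothetical softmax outputs for classes (negative, positive)
image_p = np.array([0.30, 0.70])
text_p  = np.array([0.40, 0.60])
joint_p = np.array([0.10, 0.90])

label, fused = decision_fusion(image_p, text_p, joint_p)
```

Averaging class probabilities (rather than hard votes) preserves each classifier's confidence, which is one reason decision fusion can improve robustness when an individual modality is ambiguous.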


tags: Sentiment analysis, multimodal classification, deep learning, joint fusion, decision fusion, interpretability