Context-Aware Attentive Deep Learning for Enhanced Sentiment Analysis in Multimodal Social Media Data

Authors

  • Akash Verma, Agra College, Agra, India

DOI:

https://doi.org/10.64758/zaenjg47

Keywords:

Sentiment Analysis, Multimodal Data, Deep Learning, Attention Mechanisms, Context Awareness, Social Media, Natural Language Processing, Feature Fusion, Emotion Recognition

Abstract

Sentiment analysis, the task of recognizing and classifying opinions expressed in text, has advanced substantially with the application of deep learning. Its performance, however, is often limited by a reliance on text alone, ignoring the wealth of information carried by other modalities, such as images and videos, that are ubiquitous on social media. Moreover, existing methods frequently fail to capture the contextual subtleties present in multimodal data. This work presents a new Context-Aware Attentive Deep Learning (CAADL) framework for improved sentiment analysis of multimodal social media content. CAADL uses deep learning models with attention mechanisms to extract informative features from both the text and visual modalities. In addition, it exploits contextual information through a hierarchical attention network that models both inter-modal and intra-modal interactions. The proposed framework is trained and evaluated on a large-scale multimodal sentiment analysis dataset. Experimental results show that CAADL substantially outperforms state-of-the-art baselines in precision, F1-score, and accuracy, demonstrating the value of attention mechanisms and context awareness in multimodal sentiment analysis. The framework thus offers an effective and robust solution for interpreting sentiment in the complex and dynamic environment of social media.
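The intra- and inter-modal attention fusion described in the abstract can be sketched roughly as follows. This is a minimal illustrative example in PyTorch, not the paper's actual architecture: the class name, layer sizes, number of sentiment classes, and mean-pooling choice are all assumptions made for demonstration.

```python
import torch
import torch.nn as nn


class CrossModalAttentionFusion(nn.Module):
    """Illustrative sketch: intra-modal self-attention per modality,
    then inter-modal cross-attention (text queries attend to visual
    regions), followed by fusion and classification. All dimensions
    are assumed, not taken from the CAADL paper."""

    def __init__(self, dim: int = 64, heads: int = 4, num_classes: int = 3):
        super().__init__()
        # Intra-modal self-attention, one module per modality
        self.text_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.img_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Inter-modal cross-attention: text tokens query image regions
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, text_feats: torch.Tensor, img_feats: torch.Tensor):
        # text_feats: (batch, tokens, dim); img_feats: (batch, regions, dim)
        t, _ = self.text_attn(text_feats, text_feats, text_feats)
        v, _ = self.img_attn(img_feats, img_feats, img_feats)
        # Text representations attend over attended visual regions
        c, _ = self.cross_attn(t, v, v)
        # Pool over the sequence dimension and concatenate the two views
        fused = torch.cat([t.mean(dim=1), c.mean(dim=1)], dim=-1)
        return self.classifier(fused)  # (batch, num_classes) sentiment logits


model = CrossModalAttentionFusion()
text = torch.randn(2, 10, 64)   # batch of 2 posts, 10 text tokens each
image = torch.randn(2, 49, 64)  # 7x7 = 49 visual region features
logits = model(text, image)
```

A hierarchical variant, as the abstract suggests, would stack a second attention layer over the per-modality outputs so the model can weigh whole modalities against each other, not just individual tokens and regions.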

Published

2025-04-01