Multimodal Text-Emoji Fusion for Enhanced Emotion Detection in Online Communication

Authors

  • Vishwash Singh, NIET, NIMS University, Jaipur, India
  • Leszek Ziora, CUT, Poland

DOI:

https://doi.org/10.64758/ng9bgg63

Keywords:

Emotion detection, multimodal fusion, emojis, sentiment analysis, sarcasm detection, hybrid deep learning, text classification, machine learning, online communication, computational linguistics

Abstract

This paper explores the integration of emoji analysis into text-based emotion detection, emphasizing the significance of multimodal fusion in online communication. With the increasing use of emojis as emotional cues, understanding their impact on sentiment classification is crucial. The study investigates five key areas: the effect of emoji usage on emotion detection accuracy, the role of emojis in differentiating supportive and contrastive sentiments, the impact of emoji context on sarcasm interpretation, the integration of emojis in hybrid deep learning frameworks, and the effectiveness of multimodal fusion techniques in enhancing emotion classification. Using a quantitative research approach, this study leverages the GoEmotions dataset to analyze the relationship between emoji usage and emotion detection performance. Findings demonstrate that incorporating emojis significantly improves classification accuracy, sentiment differentiation, and sarcasm interpretation. Additionally, hybrid frameworks integrating emojis enhance emotion detection capabilities, and multimodal fusion techniques improve classification performance. The research contributes to the growing field of emotion detection by highlighting the essential role of emojis in enriching sentiment analysis models. Future work should address dataset diversity and cultural factors to refine emotion detection frameworks further.
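As a minimal illustration of the multimodal fusion idea the abstract describes, the sketch below separates a message into word tokens and emoji tokens and merges both into a single feature representation, a simple stand-in for early fusion of text and emoji channels. This is not the paper's actual pipeline: the function name, the emoji Unicode ranges (an approximation of common emoji blocks), and the bag-of-features encoding are all assumptions made for illustration.

```python
import re

# Approximate character ranges for common emoji blocks (illustrative only;
# real emoji detection would use a fuller Unicode emoji property list).
EMOJI_PATTERN = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def fuse_features(message):
    """Split a message into word tokens and emoji tokens, then merge them
    into one feature dict -- a toy example of early multimodal fusion."""
    emojis = EMOJI_PATTERN.findall(message)          # emoji channel
    text = EMOJI_PATTERN.sub(" ", message)           # text channel
    words = re.findall(r"[a-zA-Z']+", text.lower())
    features = {}
    for w in words:
        features["word:" + w] = features.get("word:" + w, 0) + 1
    for e in emojis:
        features["emoji:" + e] = features.get("emoji:" + e, 0) + 1
    return features

# The fused vector keeps both channels, so a downstream classifier can
# learn, e.g., that "great" plus a laughing emoji signals joy rather than sarcasm.
print(fuse_features("Great job 😂😂"))
```

In a real system these fused features would feed a classifier trained on a labeled corpus such as GoEmotions; the point of the sketch is only that emoji tokens are preserved as distinct features rather than stripped during preprocessing.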

Published

2025-01-25