International Journal of Innovative Research in Computer and Communication Engineering

ISSN Approved Journal | Impact factor: 8.771 | ESTD: 2013 | Follows UGC CARE Journal Norms and Guidelines



TITLE Multi-Modal AI Frameworks: The Next Frontier in Combating Social Media Misinformation
ABSTRACT The rapid growth of social media has transformed how information is created and consumed, but it has also enabled the widespread dissemination of fake news. This misinformation poses a significant threat to democratic systems, public safety, and societal trust. Traditional fake news detection systems rely primarily on textual analysis using Natural Language Processing (NLP), which is no longer sufficient in the era of multi-modal misinformation involving images, videos, and contextual manipulation. This research proposes a multi-modal AI framework that integrates textual, visual, and social-context analysis. The model combines RoBERTa embeddings for text understanding, CLIP-based visual analysis for image-text alignment, and Graph Neural Networks (GNNs) for analyzing information propagation. Experimental results demonstrate that multi-modal fusion significantly outperforms unimodal approaches, achieving accuracy in the 95–99% range.
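The abstract's architecture can be sketched as a late-fusion pipeline: a text branch (RoBERTa-style embedding), a visual branch (CLIP-style image-caption alignment score), and a social-context branch (one GCN-style propagation step over a sharing graph), concatenated and scored by a linear head. This is a minimal illustrative sketch, not the paper's implementation: all vectors are random placeholders standing in for real model outputs, and the dimensions (768 for text, 512 for CLIP) are assumptions matching common base-model sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Text branch: placeholder for a RoBERTa [CLS] embedding ---
text_emb = rng.normal(size=768)

# --- Visual branch: placeholder CLIP embeddings; cosine similarity
#     measures image-caption alignment (low similarity can signal
#     out-of-context image reuse) ---
img_emb = rng.normal(size=512)
cap_emb = rng.normal(size=512)
clip_score = img_emb @ cap_emb / (np.linalg.norm(img_emb) * np.linalg.norm(cap_emb))

# --- Social-context branch: one GCN-style propagation step,
#     row-normalized adjacency over a toy graph of 4 sharing users ---
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
A_hat = A / A.sum(axis=1, keepdims=True)        # row-normalize adjacency
X = rng.normal(size=(4, 16))                    # per-user feature vectors
graph_emb = (A_hat @ X).mean(axis=0)            # propagate, then mean-pool

# --- Late fusion: concatenate branches, score with a linear head ---
fused = np.concatenate([text_emb, [clip_score], graph_emb])  # 768 + 1 + 16
w = rng.normal(size=fused.shape[0]) / np.sqrt(fused.shape[0])
p_fake = 1.0 / (1.0 + np.exp(-(fused @ w)))     # sigmoid -> probability

print(fused.shape, float(p_fake))
```

In a real system, each branch would be a trained encoder and the fusion head would be learned end-to-end; the point here is only the shape of the fusion, where heterogeneous signals are projected into one feature vector before classification.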
AUTHOR NAYAN VIJAY PATIL Dr. D.Y. Patil Arts, Commerce and Science College, Akurdi, Pune, Maharashtra, India
VOLUME 183
DOI 10.15680/IJIRCCE.2026.1404071
PDF pdf/71_Multi-Modal AI Frameworks The Next Frontier in Combating Social Media Misinformation.pdf
KEYWORDS
References [1] K. Shu, A. Sliva, S. Wang, J. Tang, and H. Liu, “Fake News Detection on Social Media: A Data Mining Perspective,” ACM SIGKDD Explorations Newsletter, vol. 19, no. 1, pp. 22–36, 2017.
[2] X. Zhou and R. Zafarani, “A Survey of Fake News: Fundamental Theories, Detection Methods, and Opportunities,” ACM Computing Surveys, vol. 53, no. 5, pp. 1–40, 2020.
[3] J. Devlin, M. W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” in Proc. NAACL-HLT, 2019.
[4] Y. Liu, M. Ott, N. Goyal, et al., “RoBERTa: A Robustly Optimized BERT Pretraining Approach,” arXiv preprint arXiv:1907.11692, 2019.
[5] A. Radford, J. W. Kim, C. Hallacy, et al., “Learning Transferable Visual Models From Natural Language Supervision,” in Proc. ICML, 2021.
[6] T. N. Kipf and M. Welling, “Semi-Supervised Classification with Graph Convolutional Networks,” in Proc. ICLR, 2017.
[7] W. L. Hamilton, Z. Ying, and J. Leskovec, “Inductive Representation Learning on Large Graphs,” in Advances in Neural Information Processing Systems (NeurIPS), 2017.
[8] M. Tan and Q. Le, “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks,” in Proc. ICML, 2019.
[9] C. Boididou, K. Andreadou, S. Papadopoulos, et al., “Verifying Multimedia Use at MediaEval 2015,” in MediaEval Benchmarking Initiative, 2015.
[10] K. Shu, D. Mahudeswaran, S. Wang, D. Lee, and H. Liu, “FakeNewsNet: A Data Repository with News Content, Social Context and Dynamic Information for Studying Fake News on Social Media,” Big Data, vol. 8, no. 3, pp. 171–188, 2020.
[11] S. Vosoughi, D. Roy, and S. Aral, “The Spread of True and False News Online,” Science, vol. 359, no. 6380, pp. 1146–1151, 2018.
[12] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. Cambridge, MA: MIT Press, 2016.
[13] M. T. Ribeiro, S. Singh, and C. Guestrin, “Why Should I Trust You?: Explaining the Predictions of Any Classifier,” in Proc. KDD, 2016.
[14] Y. Zhang and A. A. Ghorbani, “An Overview of Online Fake News: Characterization, Detection, and Discussion,” Information Processing & Management, vol. 57, no. 2, 2020.
Copyright © IJIRCCE 2020. All rights reserved.