Multi-task detection of harmful content in code-mixed meme captions using large language models with zero-shot, few-shot, and fine-tuning approaches

IF 5.0 | JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | CAS Tier 3, Computer Science
A.K. Indira Kumar, Gayathri Sthanusubramoniani, Deepa Gupta, Aarathi Rajagopalan Nair, Yousef Ajami Alotaibi, Mohammed Zakariah
{"title":"Multi-task detection of harmful content in code-mixed meme captions using large language models with zero-shot, few-shot, and fine-tuning approaches","authors":"A.K. Indira Kumar ,&nbsp;Gayathri Sthanusubramoniani ,&nbsp;Deepa Gupta ,&nbsp;Aarathi Rajagopalan Nair ,&nbsp;Yousef Ajami Alotaibi ,&nbsp;Mohammed Zakariah","doi":"10.1016/j.eij.2025.100683","DOIUrl":null,"url":null,"abstract":"<div><div>In today’s digital world, memes have become a common form of communication, shaping online conversations and reflecting social events. However, some memes can negatively impact people’s emotions, especially when they involve sensitive topics or mock certain groups or individuals. To address this issue, it is important to create a system that can identify and remove harmful memes before they cause further harm. Using Large Language Models for text classification in this system offers a promising approach, as these models are skilled at understanding complex language structures and recognizing patterns, including those in code-mixed language. This research focuses on evaluating how well different Large Language Models perform in identifying memes that promote cyberbullying. It covers tasks like cyberbullying detection, sentiment analysis, emotion recognition, sarcasm detection, and harmfulness evaluation. The results show significant improvements, with a 7.94% increase in accuracy for cyberbullying detection, a 2.68% improvement in harmfulness evaluation, and a 1.7% boost in sarcasm detection compared to previous top models. There is also a 1.07% improvement in emotion detection. These findings highlight the ability of Large Language Models to help tackle cyberbullying and create safer online spaces.</div></div>","PeriodicalId":56010,"journal":{"name":"Egyptian Informatics Journal","volume":"30 ","pages":"Article 100683"},"PeriodicalIF":5.0000,"publicationDate":"2025-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Egyptian Informatics Journal","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1110866525000763","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

In today’s digital world, memes have become a common form of communication, shaping online conversations and reflecting social events. However, some memes can negatively affect people’s emotions, especially when they touch on sensitive topics or mock particular groups or individuals. Addressing this requires a system that can identify and remove harmful memes before they cause further harm. Large Language Models are a promising basis for the text-classification component of such a system, as they are adept at understanding complex language structures and recognizing patterns, including those in code-mixed language. This research evaluates how well different Large Language Models identify memes that promote cyberbullying, covering tasks such as cyberbullying detection, sentiment analysis, emotion recognition, sarcasm detection, and harmfulness evaluation. The results show significant gains over prior state-of-the-art models: a 7.94% increase in accuracy for cyberbullying detection, a 2.68% improvement in harmfulness evaluation, a 1.7% boost in sarcasm detection, and a 1.07% improvement in emotion detection. These findings highlight the ability of Large Language Models to help tackle cyberbullying and create safer online spaces.
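The abstract describes zero-shot, few-shot, and fine-tuned LLM setups for five classification tasks over code-mixed meme captions. As a rough illustration only (not the authors' actual prompts, label sets, or models), the sketch below builds a single zero-shot, multi-task prompt for one caption; the task names, label choices, and the sample caption are assumptions. A few-shot variant would simply prepend labelled example captions to the same prompt.

```python
# Minimal sketch of a zero-shot multi-task prompt for a code-mixed meme
# caption, in the spirit of the paper. The label sets, prompt wording, and
# the sample caption are illustrative assumptions, not the authors' setup.

TASKS = {
    "cyberbullying": ["bully", "not bully"],
    "sentiment": ["positive", "neutral", "negative"],
    "emotion": ["anger", "joy", "sadness", "fear", "disgust", "surprise"],
    "sarcasm": ["sarcastic", "not sarcastic"],
    "harmfulness": ["harmful", "not harmful"],
}

def build_zero_shot_prompt(caption: str) -> str:
    """Compose one prompt asking an instruction-tuned LLM to label the
    caption for all five tasks at once (zero-shot: no in-context examples)."""
    task_lines = "\n".join(
        f"- {task}: choose one of {labels}" for task, labels in TASKS.items()
    )
    return (
        "You are a content-moderation assistant. The following meme caption "
        "may mix languages (code-mixed text).\n"
        f'Caption: "{caption}"\n'
        "For each task below, answer with exactly one label, "
        "one task per line as `task: label`.\n"
        f"{task_lines}"
    )

if __name__ == "__main__":
    # Hypothetical code-mixed (Tamil-English) caption, for illustration.
    print(build_zero_shot_prompt("Enna da, this fellow thinks he is too smart!"))
```

The resulting prompt string would then be sent to whichever LLM is under evaluation; parsing the `task: label` lines from the response yields one prediction per task.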
Source Journal

Egyptian Informatics Journal (Decision Sciences: Management Science and Operations Research)
CiteScore: 11.10
Self-citation rate: 1.90%
Articles published: 59
Review time: 110 days
Journal description: The Egyptian Informatics Journal is published by the Faculty of Computers and Artificial Intelligence, Cairo University. The Journal provides a forum for state-of-the-art research and development in the fields of computing, including computer sciences, information technologies, information systems, operations research and decision support. Submissions of innovative, previously unpublished work in subjects covered by the Journal are encouraged, whether from academic, research or commercial sources.