AI can be cyberbullying perpetrators: Investigating individuals’ perceptions and attitudes towards AI-generated cyberbullying

IF: 12.5 | CAS Tier 1 (Sociology) | Q1 (Social Issues)
Weiping Pei, Fangzhou Wang, Yi Ting Chua
{"title":"AI can be cyberbullying perpetrators: Investigating individuals’ perceptions and attitudes towards AI-generated cyberbullying","authors":"Weiping Pei ,&nbsp;Fangzhou Wang ,&nbsp;Yi Ting Chua","doi":"10.1016/j.techsoc.2025.103089","DOIUrl":null,"url":null,"abstract":"<div><div>Cyberbullying is a critical social problem that can cause significant psychological harm, particularly to vulnerable individuals. While Artificial Intelligence (AI) is increasingly leveraged to combat cyberbullying, its misuse to generate harmful content raises new concerns. This study examines human perception of AI-generated cyberbullying messages and their potential psychological impact. Using large language models (LLMs), we generated cyberbullying messages across three categories (sexism, racism, and abuse) and conducted a user study (n = 363), where participants engaged with hypothetical social media scenarios. Findings reveal that AI-generated messages can be just as or even more harmful than human-written ones in terms of participants’ comfort levels, perceived harm, and severity. Additionally, AI-generated messages were almost indistinguishable from human-written ones, with many participants misidentifying AI-generated messages as human-written. Furthermore, participants with prior experience using AI tools consistently demonstrated higher accuracy in identification, while their attitudes towards online harm significantly influenced their comfort levels. This study emphasizes the urgent need for robust mitigation strategies to counter AI-generated harmful content, ensuring that AI technologies are deployed responsibly and do not exacerbate online harm.</div></div>","PeriodicalId":47979,"journal":{"name":"Technology in Society","volume":"84 ","pages":"Article 103089"},"PeriodicalIF":12.5000,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Technology in Society","FirstCategoryId":"90","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0160791X25002799","RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"SOCIAL ISSUES","Score":null,"Total":0}
Citations: 0

Abstract

Cyberbullying is a critical social problem that can cause significant psychological harm, particularly to vulnerable individuals. While Artificial Intelligence (AI) is increasingly leveraged to combat cyberbullying, its misuse to generate harmful content raises new concerns. This study examines human perception of AI-generated cyberbullying messages and their potential psychological impact. Using large language models (LLMs), we generated cyberbullying messages across three categories (sexism, racism, and abuse) and conducted a user study (n = 363) in which participants engaged with hypothetical social media scenarios. Findings reveal that AI-generated messages can be just as harmful as, or even more harmful than, human-written ones in terms of participants' comfort levels, perceived harm, and severity. Additionally, AI-generated messages were almost indistinguishable from human-written ones, with many participants misidentifying AI-generated messages as human-written. Furthermore, participants with prior experience using AI tools consistently demonstrated higher accuracy in identification, while their attitudes towards online harm significantly influenced their comfort levels. This study emphasizes the urgent need for robust mitigation strategies to counter AI-generated harmful content, ensuring that AI technologies are deployed responsibly and do not exacerbate online harm.


Source journal
CiteScore: 17.90
Self-citation rate: 14.10%
Publication volume: 316
Review time: 60 days
Journal description: Technology in Society is a global journal dedicated to fostering discourse at the crossroads of technological change and the social, economic, business, and philosophical transformation of our world. The journal aims to provide scholarly contributions that empower decision-makers to thoughtfully and intentionally navigate the decisions shaping this dynamic landscape. A common thread across these fields is the role of technology in society, influencing economic, political, and cultural dynamics. Scholarly work in Technology in Society delves into the social forces shaping technological decisions and the societal choices regarding technology use. This encompasses scholarly and theoretical approaches (history and philosophy of science and technology, technology forecasting, economic growth and policy, ethics), applied approaches (business innovation, technology management, legal and engineering), and developmental perspectives (technology transfer, technology assessment, and economic development). Detailed information about the journal's aims and scope on specific topics can be found in Technology in Society Briefings, accessible via our Special Issues and Article Collections.