{"title":"AI can be cyberbullying perpetrators: Investigating individuals’ perceptions and attitudes towards AI-generated cyberbullying","authors":"Weiping Pei , Fangzhou Wang , Yi Ting Chua","doi":"10.1016/j.techsoc.2025.103089","DOIUrl":null,"url":null,"abstract":"<div><div>Cyberbullying is a critical social problem that can cause significant psychological harm, particularly to vulnerable individuals. While Artificial Intelligence (AI) is increasingly leveraged to combat cyberbullying, its misuse to generate harmful content raises new concerns. This study examines human perception of AI-generated cyberbullying messages and their potential psychological impact. Using large language models (LLMs), we generated cyberbullying messages across three categories (sexism, racism, and abuse) and conducted a user study (n = 363), where participants engaged with hypothetical social media scenarios. Findings reveal that AI-generated messages can be just as or even more harmful than human-written ones in terms of participants’ comfort levels, perceived harm, and severity. Additionally, AI-generated messages were almost indistinguishable from human-written ones, with many participants misidentifying AI-generated messages as human-written. Furthermore, participants with prior experience using AI tools consistently demonstrated higher accuracy in identification, while their attitudes towards online harm significantly influenced their comfort levels. This study emphasizes the urgent need for robust mitigation strategies to counter AI-generated harmful content, ensuring that AI technologies are deployed responsibly and do not exacerbate online harm.</div></div>","PeriodicalId":47979,"journal":{"name":"Technology in Society","volume":"84 ","pages":"Article 103089"},"PeriodicalIF":12.5000,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Technology in Society","FirstCategoryId":"90","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0160791X25002799","RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"SOCIAL ISSUES","Score":null,"Total":0}
Citations: 0
Abstract
Cyberbullying is a critical social problem that can cause significant psychological harm, particularly to vulnerable individuals. While Artificial Intelligence (AI) is increasingly leveraged to combat cyberbullying, its misuse to generate harmful content raises new concerns. This study examines human perception of AI-generated cyberbullying messages and their potential psychological impact. Using large language models (LLMs), we generated cyberbullying messages across three categories (sexism, racism, and abuse) and conducted a user study (n = 363) in which participants engaged with hypothetical social media scenarios. Findings reveal that AI-generated messages can be just as harmful as, or even more harmful than, human-written ones in terms of participants’ comfort levels, perceived harm, and severity. Additionally, AI-generated messages were almost indistinguishable from human-written ones, with many participants misidentifying AI-generated messages as human-written. Furthermore, participants with prior experience using AI tools consistently demonstrated higher accuracy in identification, while their attitudes towards online harm significantly influenced their comfort levels. This study emphasizes the urgent need for robust mitigation strategies to counter AI-generated harmful content, ensuring that AI technologies are deployed responsibly and do not exacerbate online harm.
Journal description:
Technology in Society is a global journal dedicated to fostering discourse at the crossroads of technological change and the social, economic, business, and philosophical transformation of our world. The journal aims to provide scholarly contributions that empower decision-makers to thoughtfully and intentionally navigate the decisions shaping this dynamic landscape. A common thread across these fields is the role of technology in society, influencing economic, political, and cultural dynamics. Scholarly work in Technology in Society delves into the social forces shaping technological decisions and the societal choices regarding technology use. This encompasses scholarly and theoretical approaches (history and philosophy of science and technology, technology forecasting, economic growth and policy, ethics), applied approaches (business innovation, technology management, legal and engineering), and developmental perspectives (technology transfer, technology assessment, and economic development). Detailed information about the journal's aims and scope on specific topics can be found in Technology in Society Briefings, accessible via our Special Issues and Article Collections.