BotSSCL: Social Bot Detection with Self-Supervised Contrastive Learning
Mohammad Majid Akhtar, Navid Shadman Bhuiyan, Rahat Masood, Muhammad Ikram, Salil S. Kanhere
Online Social Networks and Media, Volume 48, Article 100318 (2025). DOI: 10.1016/j.osnem.2025.100318
Abstract
The detection of automated accounts, also known as “social bots”, has been an important concern for online social networks (OSNs). While several methods have been proposed for detecting social bots, significant research gaps remain. First, current models exhibit limitations in detecting sophisticated bots that aim to mimic genuine OSN users. Second, these methods often rely on simplistic profile features, which are susceptible to adversarial manipulation. In addition, these models lack generalizability, resulting in subpar performance when trained on one dataset and tested on another.
To address these challenges, we propose a framework for social Bot detection with Self-Supervised Contrastive Learning (BotSSCL). Our framework leverages contrastive learning to distinguish between social bots and humans in the embedding space, improving linear separability. The high-level representations derived by BotSSCL enhance its resilience to variations in data distribution and ensure generalizability. We also evaluate BotSSCL's robustness against adversarial attempts to manipulate bot accounts to evade detection. Experiments on two datasets featuring sophisticated bots demonstrate that BotSSCL outperforms supervised, unsupervised, and self-supervised baseline methods, achieving ≈6% and ≈8% higher F1 than SOTA on the two datasets. In addition, BotSSCL achieves 67% F1 when trained on one dataset and tested on another, demonstrating its generalizability under cross-botnet evaluation. Lastly, under an adversarial evasion attack, BotSSCL increases the attack complexity and limits the adversary to a 4% success rate in evading detection. The code is available at https://github.com/code4thispaper/BotSSCL.
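To make the core idea concrete, below is a minimal sketch of self-supervised contrastive learning on tabular account features. It is not the authors' implementation: the SimCLR-style NT-Xent loss, the random feature-masking augmentation (`feature_corruption`), and the encoder architecture are all assumptions chosen for illustration; BotSSCL's actual objective, augmentations, and features are described in the paper and repository.

```python
# Illustrative sketch only (assumed design, not the BotSSCL codebase):
# learn account embeddings by pulling two corrupted "views" of the same
# account together and pushing other accounts apart (NT-Xent loss).
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Maps raw account feature vectors to unit-norm embeddings."""

    def __init__(self, in_dim: int, emb_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, emb_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=-1)


def feature_corruption(x: torch.Tensor, p: float = 0.3) -> torch.Tensor:
    """Create a stochastic view by randomly zeroing a fraction of features
    (a common tabular augmentation; assumed here for illustration)."""
    mask = (torch.rand_like(x) > p).float()
    return x * mask


def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Normalized temperature-scaled cross-entropy over positive pairs."""
    n = z1.size(0)
    z = torch.cat([z1, z2], dim=0)                    # (2n, d), already unit-norm
    sim = (z @ z.t()) / tau                           # cosine similarities / temperature
    diag = torch.eye(2 * n, dtype=torch.bool)
    sim = sim.masked_fill(diag, float("-inf"))        # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)              # positive pair = the other view


if __name__ == "__main__":
    # Toy training step on random "account feature" vectors.
    x = torch.randn(64, 32)                           # 64 accounts, 32 engineered features
    enc = Encoder(in_dim=32)
    opt = torch.optim.Adam(enc.parameters(), lr=1e-3)

    opt.zero_grad()
    z1, z2 = enc(feature_corruption(x)), enc(feature_corruption(x))
    loss = nt_xent_loss(z1, z2)
    loss.backward()
    opt.step()
    print(f"contrastive loss: {loss.item():.4f}")
```

After such pretraining, the frozen embeddings would typically be fed to a simple linear classifier to separate bots from humans, which is the "linear separability" property the abstract refers to; the exact evaluation protocol follows the paper, not this sketch.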