BotBuster: Multi-Platform Bot Detection Using a Mixture of Experts
Lynnette Hui Xian Ng, Kathleen M. Carley
Proceedings of the International AAAI Conference on Web and Social Media
Published: 2023-06-02
DOI: 10.1609/icwsm.v17i1.22179
Citations: 1
Abstract
Despite rapid development, current bot detection models still face challenges in dealing with incomplete data and cross-platform applications. In this paper, we propose BotBuster, a social bot detector built on the mixture-of-experts approach. Each expert is trained to analyze one portion of account information, e.g. the username, and the experts' outputs are combined to estimate the probability that the account is a bot. Experiments on 10 Twitter datasets show that BotBuster outperforms popular bot-detection baselines (avg F1=73.54 vs avg F1=45.12). This is accompanied by F1=60.04 on a Reddit dataset and F1=60.92 on an external evaluation set. Further analysis shows that only 36 posts are required for a stable bot classification. Investigation shows that bot post features have changed across the years and can be difficult to differentiate from human features, making bot detection a difficult and ongoing problem.
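The mixture-of-experts idea described above can be illustrated with a minimal sketch: independent experts each score one slice of account data, and their probabilities are combined into a single bot estimate. All names, heuristics, and the simple averaging combiner below are hypothetical stand-ins for illustration, not the paper's actual features or trained model.

```python
# Illustrative mixture-of-experts sketch for bot scoring.
# Each expert sees only one slice of the account; a learned
# combiner in a real detector is replaced here by a plain mean.
from dataclasses import dataclass, field

@dataclass
class Account:
    username: str
    description: str
    posts: list = field(default_factory=list)

def username_expert(account: Account) -> float:
    """Toy heuristic: long digit runs in a handle look bot-like."""
    digits = sum(ch.isdigit() for ch in account.username)
    return min(1.0, digits / 8)

def description_expert(account: Account) -> float:
    """Toy heuristic: an empty profile description is weakly bot-like."""
    return 0.7 if not account.description.strip() else 0.3

def post_expert(account: Account) -> float:
    """Toy heuristic: many identical posts suggest automation."""
    if not account.posts:
        return 0.5  # no evidence; stay neutral
    unique_ratio = len(set(account.posts)) / len(account.posts)
    return 1.0 - unique_ratio

EXPERTS = [username_expert, description_expert, post_expert]

def bot_probability(account: Account) -> float:
    """Average the expert scores into one bot probability."""
    scores = [expert(account) for expert in EXPERTS]
    return sum(scores) / len(scores)

likely_bot = Account("user83749201", "", ["buy now!"] * 20)
print(round(bot_probability(likely_bot), 2))  # → 0.88
```

Because each expert consumes only one field, the detector degrades gracefully when data is incomplete: missing experts can simply be dropped from the average, which mirrors the paper's motivation for handling partial account information across platforms.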