R. S. Linhares, José Luís da Silva Rosa, C. H. G. Ferreira, Fabricio Murai, G. Nobre, J. Almeida
{"title":"Uncovering Coordinated Communities on Twitter During the 2020 U.S. Election","authors":"R. S. Linhares, José Luís da Silva Rosa, C. H. G. Ferreira, Fabricio Murai, G. Nobre, J. Almeida","doi":"10.1109/ASONAM55673.2022.10068628","DOIUrl":null,"url":null,"abstract":"A large volume of content related to claims of election fraud, often associated with hate speech and extremism, was reported on Twitter during the 2020 US election, with evidence that coordinated efforts took place to promote such content on the platform. In response, Twitter announced the suspension of thousands of user accounts allegedly involved in such actions. Motivated by these events, we here propose a novel network-based approach to uncover evidence of coordination in a set of user interactions. Our approach is designed to address the challenges incurred by the often sheer volume of noisy edges in the network (i.e., edges that are unrelated to coordination) and the effects of data sampling. To that end, it exploits the joint use of two network backbone extraction techniques, namely Disparity Filter and Neighborhood Overlap, to reveal strongly tied groups of users (here referred to as communities) exhibiting repeatedly common behavior, consistent with coordination. We employ our strategy to a large dataset of tweets related to the aforementioned fraud claims, in which users were labeled as suspended, deleted or active, according to their accounts status after the election. Our findings reveal well-structured communities, with strong evidence of coordination to promote (i.e., retweet) the aforementioned fraud claims. Moreover, many of those communities are formed not only by suspended and deleted users, but also by users who, despite exhibiting very similar sharing patterns, remained active in the platform. This observation suggests that a significant number of users who were potentially involved in the coordination efforts went unnoticed by the platform, and possibly remained actively spreading this content on the system.","PeriodicalId":423113,"journal":{"name":"2022 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ASONAM55673.2022.10068628","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
A large volume of content related to claims of election fraud, often associated with hate speech and extremism, was reported on Twitter during the 2020 US election, with evidence that coordinated efforts took place to promote such content on the platform. In response, Twitter announced the suspension of thousands of user accounts allegedly involved in such actions. Motivated by these events, we propose a novel network-based approach to uncover evidence of coordination in a set of user interactions. Our approach is designed to address the challenges posed by the sheer volume of noisy edges often present in the network (i.e., edges that are unrelated to coordination) and by the effects of data sampling. To that end, it exploits the joint use of two network backbone extraction techniques, namely the Disparity Filter and Neighborhood Overlap, to reveal strongly tied groups of users (here referred to as communities) that repeatedly exhibit common behavior consistent with coordination. We apply our strategy to a large dataset of tweets related to the aforementioned fraud claims, in which users were labeled as suspended, deleted, or active, according to their account status after the election. Our findings reveal well-structured communities, with strong evidence of coordination to promote (i.e., retweet) the aforementioned fraud claims. Moreover, many of those communities are formed not only by suspended and deleted users, but also by users who, despite exhibiting very similar sharing patterns, remained active on the platform. This observation suggests that a significant number of users who were potentially involved in the coordination efforts went unnoticed by the platform and possibly continued to actively spread this content on the system.
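To make the two backbone extraction techniques named in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of the Disparity Filter (Serrano et al., 2009) and a Neighborhood Overlap filter applied to a weighted, undirected user-user interaction graph built with networkx. The graph construction, the significance level alpha, the overlap threshold, and the toy edge list are illustrative assumptions, not values from the paper.

```python
# Sketch of the two backbone extraction techniques mentioned in the abstract.
# Assumes a weighted, undirected co-retweet graph; thresholds are illustrative.
import networkx as nx


def disparity_filter(G, alpha=0.05):
    """Keep edges that are significant under the disparity filter
    (Serrano et al., 2009) for at least one endpoint."""
    backbone = nx.Graph()
    for u, v, w in G.edges(data="weight", default=1.0):
        keep = False
        for node in (u, v):
            k = G.degree(node)
            if k > 1:
                s = G.degree(node, weight="weight")   # node strength
                p = w / s                             # normalized edge weight
                if (1.0 - p) ** (k - 1) < alpha:      # significance test
                    keep = True
        if keep:
            backbone.add_edge(u, v, weight=w)
    return backbone


def neighborhood_overlap_filter(G, threshold=0.2):
    """Keep edges whose endpoints share a large fraction of neighbors
    (Jaccard overlap of neighborhoods, excluding the endpoints)."""
    backbone = nx.Graph()
    for u, v, w in G.edges(data="weight", default=1.0):
        nu = set(G.neighbors(u)) - {v}
        nv = set(G.neighbors(v)) - {u}
        union = nu | nv
        overlap = len(nu & nv) / len(union) if union else 0.0
        if overlap >= threshold:
            backbone.add_edge(u, v, weight=w, overlap=overlap)
    return backbone


if __name__ == "__main__":
    # Toy example: users connected by how often they retweet the same tweets.
    G = nx.Graph()
    G.add_weighted_edges_from([
        ("u1", "u2", 5), ("u1", "u3", 4), ("u2", "u3", 6),
        ("u1", "u4", 1), ("u4", "u5", 1),
    ])
    df = disparity_filter(G, alpha=0.3)
    no = neighborhood_overlap_filter(G, threshold=0.3)
    # Joint backbone: keep only edges surviving both filters.
    joint = nx.Graph()
    joint.add_edges_from(
        (u, v, d) for u, v, d in df.edges(data=True) if no.has_edge(u, v)
    )
    print(sorted(joint.edges()))
```

In this sketch, the intersection of the two backbones retains only strongly weighted edges between users who also share many common neighbors, which is the kind of strongly tied, repeatedly co-acting group the paper refers to as a community; community detection on the resulting backbone is left out here.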