Keeping it Authentic: The Social Footprint of the Trolls Network
Ori Swed, Sachith Dassanayaka, Dimitri Volchenkov
arXiv:2409.07720 — arXiv - CS - Social and Information Networks, 2024-09-12
In 2016, a network of social media accounts operated by Russian operatives
attempted to divert political discourse within the American public around the
presidential elections. This was a coordinated effort, part of a complex,
Russian-led information operation. Exploiting the anonymity and reach of social
media platforms, Russian operatives created an online astroturf campaign in
direct contact with ordinary Americans, promoting Russia's agenda and goals.
The elusiveness of this adversarial approach rendered security agencies
helpless, underscoring the unique challenges this type of intervention
presents. Building on existing scholarship on the functions within influence
networks on social media, we suggest a new approach to mapping such operations.
We argue that pretending to be legitimate social actors obliges the network to
adhere to social expectations, leaving a social footprint. To test the
robustness of this social footprint, we train a machine learning model to
identify it and use it to make predictions. We train and test the model on
Twitter data identified as part of the Russian influence network. Our model
attains 88% prediction accuracy on the test set, and validating the prediction
against two additional models yields 90.7% and 90.5% accuracy. These predictive
and validation results suggest that a machine learning model built around
social functions within the Russian influence network can be used to map its
actors and functions.
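The classification setup described above can be sketched in miniature. This is a hypothetical illustration, not the authors' pipeline: the behavioral features (posting rate, retweet ratio, follower/following ratio, hashtag density) and the synthetic data are assumptions standing in for the paper's actual Twitter-derived "social footprint" features, and the random-forest classifier is one plausible model choice.

```python
# Hypothetical sketch of troll-vs-authentic account classification from
# behavioral "social footprint" features. All features and data below are
# illustrative assumptions, not the paper's dataset or feature set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Assumed features per account: posts/day, retweet ratio,
# follower/following ratio, hashtag density.
X_troll = rng.normal(loc=[8.0, 0.9, 0.2, 0.6], scale=0.3, size=(n // 2, 4))
X_auth = rng.normal(loc=[2.0, 0.3, 1.5, 0.2], scale=0.3, size=(n // 2, 4))
X = np.vstack([X_troll, X_auth])
y = np.array([1] * (n // 2) + [0] * (n // 2))  # 1 = troll, 0 = authentic

# Hold out a test set, fit the classifier, and score held-out accuracy.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"held-out accuracy: {acc:.3f}")
```

Because the synthetic classes are well separated, this toy model scores near-perfectly; the paper's reported 88% reflects the much harder real-data setting.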