{"title":"1月6日在Twitter上:通过不健康的在线对话和情绪分析来衡量社会媒体对国会大厦骚乱的态度","authors":"Kovacs Erik-Robert, Cotfas Liviu-Adrian, Delcea Camelia","doi":"10.1080/24751839.2023.2262067","DOIUrl":null,"url":null,"abstract":"While social media can serve as public discussion forums of great benefit to democratic debate, discourse propagated through them can also stoke political polarization and partisanship. A particularly dramatic example is the January 6, 2021 incident in Washington D.C., when a group of protesters besieged the US Capitol, resulting in several deaths. The public reacted by posting messages on social media, discussing the actions of the participants. Aiming to understand their perspectives under the broad concept of unhealthy online conversation (i.e. bad faith argumentation, overly hostile or destructive discourse, or other behaviours that discourage engagement), we sample 1,300,000 Twitter posts taken from the #Election2020 dataset dating from January 2021. Using a fine-tuned XLNet model trained on the Unhealthy Comment Corpus (UCC) dataset, we label these texts as healthy or unhealthy, furthermore using a taxonomy of 7 unhealthy attributes. Using the NRCLex sentiment analysis lexicon, we also detect the emotional patterns associated with each attribute. We observe that these conversations contain accusatory language aimed at the ‘other side’, limiting engagement by defining others in terms they do not themselves use or identify with. We find evidence of three attribute clusters, in addition to sarcasm, a divergent attribute that we argue should be researched separately. We find that emotions identified from the text do not correlate with the attributes, the two approaches revealing complementary characteristics of online discourse. Using latent Dirichlet allocation (LDA), we identify topics discussed within the attribute-sentiment pairs, linking them to each other using similarity measures. The results we present aim to help social media stakeholders, government regulators, and the general public better understand the contents and the emotional profile of the debates arising on social media platforms, especially as they relate to the political realm.","PeriodicalId":32180,"journal":{"name":"Journal of Information and Telecommunication","volume":"27 1","pages":"0"},"PeriodicalIF":2.7000,"publicationDate":"2023-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"January 6th on Twitter: measuring social media attitudes towards the Capitol riot through unhealthy online conversation and sentiment analysis\",\"authors\":\"Kovacs Erik-Robert, Cotfas Liviu-Adrian, Delcea Camelia\",\"doi\":\"10.1080/24751839.2023.2262067\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"While social media can serve as public discussion forums of great benefit to democratic debate, discourse propagated through them can also stoke political polarization and partisanship. A particularly dramatic example is the January 6, 2021 incident in Washington D.C., when a group of protesters besieged the US Capitol, resulting in several deaths. The public reacted by posting messages on social media, discussing the actions of the participants. Aiming to understand their perspectives under the broad concept of unhealthy online conversation (i.e. 
bad faith argumentation, overly hostile or destructive discourse, or other behaviours that discourage engagement), we sample 1,300,000 Twitter posts taken from the #Election2020 dataset dating from January 2021. Using a fine-tuned XLNet model trained on the Unhealthy Comment Corpus (UCC) dataset, we label these texts as healthy or unhealthy, furthermore using a taxonomy of 7 unhealthy attributes. Using the NRCLex sentiment analysis lexicon, we also detect the emotional patterns associated with each attribute. We observe that these conversations contain accusatory language aimed at the ‘other side’, limiting engagement by defining others in terms they do not themselves use or identify with. We find evidence of three attribute clusters, in addition to sarcasm, a divergent attribute that we argue should be researched separately. We find that emotions identified from the text do not correlate with the attributes, the two approaches revealing complementary characteristics of online discourse. Using latent Dirichlet allocation (LDA), we identify topics discussed within the attribute-sentiment pairs, linking them to each other using similarity measures. The results we present aim to help social media stakeholders, government regulators, and the general public better understand the contents and the emotional profile of the debates arising on social media platforms, especially as they relate to the political realm.\",\"PeriodicalId\":32180,\"journal\":{\"name\":\"Journal of Information and Telecommunication\",\"volume\":\"27 1\",\"pages\":\"0\"},\"PeriodicalIF\":2.7000,\"publicationDate\":\"2023-09-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Information and Telecommunication\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/24751839.2023.2262067\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Information and Telecommunication","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/24751839.2023.2262067","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
January 6th on Twitter: measuring social media attitudes towards the Capitol riot through unhealthy online conversation and sentiment analysis
While social media can serve as public discussion forums of great benefit to democratic debate, discourse propagated through them can also stoke political polarization and partisanship. A particularly dramatic example is the January 6, 2021 incident in Washington, D.C., when a group of protesters besieged the US Capitol, resulting in several deaths. The public reacted by posting messages on social media, discussing the actions of the participants. Aiming to understand their perspectives under the broad concept of unhealthy online conversation (i.e. bad-faith argumentation, overly hostile or destructive discourse, or other behaviours that discourage engagement), we sample 1,300,000 Twitter posts from the #Election2020 dataset dating from January 2021. Using a fine-tuned XLNet model trained on the Unhealthy Comment Corpus (UCC) dataset, we label these texts as healthy or unhealthy, further characterizing unhealthy texts according to a taxonomy of seven unhealthy attributes. Using the NRCLex sentiment analysis lexicon, we also detect the emotional patterns associated with each attribute. We observe that these conversations contain accusatory language aimed at the ‘other side’, limiting engagement by defining others in terms they do not themselves use or identify with. We find evidence of three attribute clusters, in addition to sarcasm, a divergent attribute that we argue should be researched separately. We find that the emotions identified in the text do not correlate with the attributes, with the two approaches revealing complementary characteristics of online discourse. Using latent Dirichlet allocation (LDA), we identify the topics discussed within each attribute-sentiment pair, linking them to each other using similarity measures. The results we present aim to help social media stakeholders, government regulators, and the general public better understand the content and the emotional profile of the debates arising on social media platforms, especially as they relate to the political realm.
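The article itself does not include code; the sketch below is a rough illustration only of how a fine-tuned XLNet multi-label classifier of the kind described in the abstract could be applied to individual tweets with the Hugging Face transformers library. The attribute names follow the public UCC dataset, and the checkpoint, threshold, and sequence length are assumptions, not the authors' actual configuration.

```python
# Illustrative sketch only (not the authors' code): scoring a tweet against
# UCC-style unhealthy attributes with an XLNet sequence classifier.
import torch
from transformers import XLNetTokenizerFast, XLNetForSequenceClassification

# Attribute names as they appear in the public UCC dataset (assumed here to
# correspond to the paper's seven-attribute taxonomy).
ATTRIBUTES = ["antagonize", "condescending", "dismissive", "generalisation",
              "generalisation_unfair", "hostile", "sarcastic"]

tokenizer = XLNetTokenizerFast.from_pretrained("xlnet-base-cased")
# The base checkpoint is loaded so the example runs end to end; in practice the
# model would first be fine-tuned on UCC so the classification head is meaningful.
model = XLNetForSequenceClassification.from_pretrained(
    "xlnet-base-cased",
    num_labels=len(ATTRIBUTES),
    problem_type="multi_label_classification",
)
model.eval()

def score_attributes(text: str, threshold: float = 0.5) -> dict:
    """Return a (probability, flag) pair per attribute for a single tweet."""
    inputs = tokenizer(text, truncation=True, max_length=128, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits.squeeze(0)
    probs = torch.sigmoid(logits)  # one independent sigmoid per attribute
    return {a: (round(float(p), 3), bool(p > threshold))
            for a, p in zip(ATTRIBUTES, probs)}

print(score_attributes("Oh sure, because YOUR side never gets anything wrong."))
```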
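Likewise, a minimal sketch of the emotion-extraction and topic-modelling steps, assuming NRCLex for NRC-lexicon emotion frequencies and scikit-learn's LatentDirichletAllocation as a stand-in for whatever LDA implementation the authors used; the example tweets and the number of topics are purely illustrative.

```python
# Illustrative sketch only: NRC-lexicon emotion frequencies per tweet (NRCLex)
# and a small LDA topic model over a bag-of-words matrix (scikit-learn).
from nrclex import NRCLex
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [  # made-up examples, not drawn from the #Election2020 dataset
    "They stormed the Capitol and nobody will be held accountable.",
    "Peaceful protest is a right, but this was something else entirely.",
    "Stop pretending the other side is innocent in all of this.",
]

# Emotion profile per tweet: fraction of lexicon hits per NRC category
# (fear, anger, trust, joy, sadness, disgust, surprise, anticipation, ...).
for t in tweets:
    print(NRCLex(t).affect_frequencies)

# Topic modelling; the number of topics (2) and the vectorizer settings are
# arbitrary choices for this example, not the paper's parameters.
vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {top_terms}")
```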