{"title":"Web3 vs Fediverse: A comparative analysis of DeSo and Mastodon as decentralised social media ecosystems","authors":"Terence Zhang , Aniket Mahanti , Ranesh Naha","doi":"10.1016/j.osnem.2025.100337","DOIUrl":"10.1016/j.osnem.2025.100337","url":null,"abstract":"<div><div>The rise of centralised social networks has consolidated power among a few major technology companies, raising critical concerns about privacy, censorship, and transparency. In response, decentralised alternatives, including Web3 platforms like Decentralised Social (DeSo) and Fediverse platforms such as Mastodon, have gained increasing attention. While prior research has explored individual aspects of decentralised networks, comparisons between Fediverse and Web3 platforms remain limited, and the unique dynamics of Web3 networks like DeSo are not well understood. This study provides the first in-depth study of DeSo, characterising user behaviour, discourse, and economic activities, and compares these with Mastodon and <span>memo.cash</span>. We collected over 3.1M posts from 13K users on DeSo and Mastodon, along with 11M DeSo on-chain transactions via public APIs. Our analysis reveals that while DeSo and Mastodon share similarities in passive content engagement, they differ in their use of URLs, hashtags, and community focus. DeSo is primarily oriented around Decentralised Finance (DeFi) topics, whereas Mastodon hosts diverse discussions with an emphasis on news and politics. Despite DeSo’s decentralised social graph, its transaction graph remains centralised, underscoring the need for further decentralisation in Web3 platforms. Additionally, while wealth inequality exists on DeSo, low transaction fees promote user participation irrespective of financial status. 
These findings provide new insights into the evolving landscape of decentralised social networks and highlight critical areas for future research and platform development.</div></div>","PeriodicalId":52228,"journal":{"name":"Online Social Networks and Media","volume":"50 ","pages":"Article 100337"},"PeriodicalIF":2.9,"publicationDate":"2025-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145268666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predicting, evaluating, and explaining top misinformation spreaders via archetypal user behavior","authors":"Enrico Verdolotti , Luca Luceri , Silvia Giordano","doi":"10.1016/j.osnem.2025.100336","DOIUrl":"10.1016/j.osnem.2025.100336","url":null,"abstract":"<div><div>The spread of misinformation on social networks poses a significant challenge to online communities and society at large. Not all users contribute equally to this phenomenon: a small number of highly effective individuals can exert outsized influence, amplifying false narratives and contributing to significant societal harm. This paper seeks to mitigate the spread of misinformation by enabling proactive interventions, identifying and ranking users according to key behavioral indicators associated with harmful content dissemination. We examine three user archetypes — <em>amplifiers</em>, <em>super-spreaders</em>, and <em>coordinated accounts</em> — each characterized by distinct behavioral patterns in the dissemination of misinformation. These are not mutually exclusive, and individual users may exhibit characteristics of multiple archetypes. We develop and evaluate several user ranking models, each aligned with a specific archetype, and find that <em>super-spreader</em> traits consistently dominate the top ranks among the most influential misinformation spreaders. As we move down the ranking, however, the interplay of multiple archetypes becomes more prominent. Additionally, we demonstrate the critical role of temporal dynamics in predictive performance, and introduce methods that reduce data requirements by minimizing the observation window needed for accurate forecasting. Finally, we demonstrate the utility and benefits of explainable AI (XAI) techniques, integrating multiple archetypal traits into a unified model to enhance interpretability and offer deeper insight into the key factors driving misinformation propagation. 
Our findings provide actionable tools for identifying potentially harmful users and guiding content moderation strategies, enabling platforms to monitor accounts of concern more effectively.</div></div>","PeriodicalId":52228,"journal":{"name":"Online Social Networks and Media","volume":"50 ","pages":"Article 100336"},"PeriodicalIF":2.9,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145222482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The perils of stealthy data poisoning attacks in misogynistic content moderation","authors":"Syrine Enneifer, Federica Baccini, Federico Siciliano, Irene Amerini, Fabrizio Silvestri","doi":"10.1016/j.osnem.2025.100334","DOIUrl":"10.1016/j.osnem.2025.100334","url":null,"abstract":"<div><div>Moderating harmful content, such as misogynistic language, is essential to ensure safety and well-being in online spaces. To this end, text classification models have been used to detect toxic content, especially in communities that are known to promote violence and radicalization in the real world, such as the <em>incel</em> movement. However, these models remain vulnerable to targeted data poisoning attacks. In this work, we present a realistic targeted poisoning strategy in which an adversary aims at misclassifying specific misogynistic comments in order to evade detection. While prior approaches craft poisoned samples with explicit trigger phrases, our method relies exclusively on existing training data. In particular, we repurpose the concept of <em>opponents</em>, training points that negatively influence the prediction of a target test point, to identify poisoned points to be added to the training set, either in their original form or in a paraphrased variant. The effectiveness of the attack is then measured on several aspects: success rate, number of poisoned samples required, and preservation of the overall model performance. Our results on two different datasets show that only a small fraction of malicious inputs are possibly sufficient to undermine classification of a target sample, while leaving the model performance on non-target points virtually unaffected, revealing the stealthy nature of the attack. Finally, we show that the attack can be transferred across different models, thus highlighting its practical relevance in real-world scenarios. 
Overall, our work raises awareness of the vulnerability of powerful machine learning models to data poisoning attacks, and may encourage the development of efficient defense and mitigation techniques to strengthen the security of automated moderation systems.</div></div>","PeriodicalId":52228,"journal":{"name":"Online Social Networks and Media","volume":"50 ","pages":"Article 100334"},"PeriodicalIF":2.9,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145109455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IMMENSE: Inductive Multi-perspective User Classification in Social Networks","authors":"Francesco Benedetti , Antonio Pellicani , Gianvito Pio , Michelangelo Ceci","doi":"10.1016/j.osnem.2025.100335","DOIUrl":"10.1016/j.osnem.2025.100335","url":null,"abstract":"<div><div>Online social networks increasingly expose people to users who propagate discriminatory, hateful, and violent content. Young users, in particular, are vulnerable to exposure to such content, which can have harmful psychological and social repercussions. Given the massive scale of today’s social networks, in terms of both published content and number of users, there is an urgent need for effective systems to aid Law Enforcement Agencies (LEAs) in identifying and addressing users that disseminate malicious content. In this work we introduce IMMENSE, a machine learning-based method for detecting malicious social network users. Our approach adopts a hybrid classification strategy that integrates three perspectives: the semantics of the users’ published content, their social relationships and their spatial information. Such contextual perspectives potentially enhance classification performance beyond text-only analysis. Importantly, IMMENSE employs an inductive learning approach, enabling it to classify previously unseen users or entire new networks without the need for costly and time-consuming model retraining procedures. 
Experiments carried out on a real-world Twitter/X dataset showed the superiority of IMMENSE against five state-of-the-art competitors, confirming the benefits of its hybrid approach for effective deployment in social network monitoring systems.</div></div>","PeriodicalId":52228,"journal":{"name":"Online Social Networks and Media","volume":"50 ","pages":"Article 100335"},"PeriodicalIF":2.9,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145050487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Intelligent nudging for truth: Mitigating rumor and misinformation in social networks with behavioral strategies","authors":"Indu V. , Sabu M. Thampi","doi":"10.1016/j.osnem.2025.100333","DOIUrl":"10.1016/j.osnem.2025.100333","url":null,"abstract":"<div><div>Social networks play a crucial role in disseminating information during emergencies and natural disasters, but they also facilitate the spread of rumors and misinformation, which can have adverse effects on society. Numerous false messages related to the COVID-19 pandemic circulated on social networks, causing unnecessary fear and anxiety, and leading to various mental health issues. Despite strict measures by social network providers and government authorities to curb fake news, many users continue to fall victim to misinformation. This highlights the need for novel approaches that incorporate user participation in mitigating rumors on social networks. Since users are the primary consumers and spreaders of information, their involvement is essential in maintaining information hygiene. We propose a novel approach based on nudging theory to motivate users to post or share only verified information on their social network profiles, thereby positively influencing their information-sharing behavior. Our approach utilizes three nudging strategies: Confront nudge, Reinforcement nudge, and Social Influence nudge. We have developed a Chrome browser plug-in for Twitter that prompts users to verify the authenticity of tweets and rate them before sharing. Additionally, user profiles receive a rating based on the average ratings of their posted tweets. The effectiveness of this mechanism was tested in a field study involving 125 Twitter users over one month. 
The results suggest that the proposed approach is a promising solution for limiting the propagation of rumors on social networks.</div></div>","PeriodicalId":52228,"journal":{"name":"Online Social Networks and Media","volume":"49 ","pages":"Article 100333"},"PeriodicalIF":2.9,"publicationDate":"2025-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144889996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"WhatsApp tiplines and multilingual claims in the 2021 Indian assembly elections","authors":"Gautam Kishore Shahi , Scott A. Hale","doi":"10.1016/j.osnem.2025.100323","DOIUrl":"10.1016/j.osnem.2025.100323","url":null,"abstract":"<div><div>WhatsApp tiplines, first launched in 2019 to combat misinformation, enable users to interact with fact-checkers to verify misleading content. This study analyzes 580 unique claims (tips) from 451 users, covering both high-resource languages (English, Hindi) and a low-resource language (Telugu) during the 2021 Indian assembly elections using a mixed-method approach. We categorize the claims into three categories, election, COVID-19, and others, and observe variations across languages. We compare content similarity through frequent word analysis and clustering of neural sentence embeddings. We also investigate user overlap across languages and fact-checking organizations. We measure the average time required to debunk claims and inform tipline users. Results reveal similarities in claims across languages, with some users submitting tips in multiple languages to the same fact-checkers. Fact-checkers generally require a couple of days to debunk a new claim and share the results with users. Notably, no user submits claims to multiple fact-checking organizations, indicating that each organization maintains a unique audience. 
We provide practical recommendations for using tiplines during elections with ethical consideration of user information.</div></div>","PeriodicalId":52228,"journal":{"name":"Online Social Networks and Media","volume":"49 ","pages":"Article 100323"},"PeriodicalIF":2.9,"publicationDate":"2025-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144852544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SPREADSHOT: Analysis of fake news spreading through topic modeling and bipartite weighted graphs","authors":"Carmela Bernardo, Marta Catillo, Antonio Pecchia, Francesco Vasca, Umberto Villano","doi":"10.1016/j.osnem.2025.100324","DOIUrl":"10.1016/j.osnem.2025.100324","url":null,"abstract":"<div><div>Spreading of fake news is one of the primary drivers of misinformation in social networks. Graph-based approaches that analyze fake news dissemination are mostly dedicated to fake news detection and consider homogeneous tree-based networks obtained by following the diffusion of single messages through users, thus lacking the ability to identify implicit patterns among spreaders and topics. Alternatively, heterogeneous graphs have been proposed, although the detection remains their main goal and the use of graph centralities is rather limited. In this paper, bipartite weighted graphs are used to analyze fake news and spreaders by utilizing topic modeling and a combination of network centrality measures. The proposed architecture, called SPREADSHOT, leverages a topic modeling technique to identify key topics or subjects within a collection of fake news articles published by spreaders, thus generating a bipartite weighted graph. By projecting the graph model to the space of spreaders, one can identify the strengths of links between them in terms of fakeness correlation on common topics. Moreover, the closeness and betweennes centralities highlight spreaders who represent key enablers in the dissemination of fakeness on different topics. The projection of the bipartite graph to the space of topics allows one to identify topics which are more prone to misinformation. By collecting specific network measures, a synthetic fakeness networking index is defined which characterizes the behaviors and roles of spreaders and topics in the fakeness dissemination. 
The effectiveness of the proposed technique is demonstrated through tests on the LIAR dataset.</div></div>","PeriodicalId":52228,"journal":{"name":"Online Social Networks and Media","volume":"49 ","pages":"Article 100324"},"PeriodicalIF":2.9,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144758057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessing the potential of generative agents in crowdsourced fact-checking","authors":"Luigia Costabile , Gian Marco Orlando , Valerio La Gatta , Vincenzo Moscato","doi":"10.1016/j.osnem.2025.100326","DOIUrl":"10.1016/j.osnem.2025.100326","url":null,"abstract":"<div><div>The growing spread of online misinformation has created an urgent need for scalable, reliable fact-checking solutions. Crowdsourced fact-checking—where non-experts evaluate claim veracity—offers a cost-effective alternative to expert verification, despite concerns about variability in quality and bias. Encouraged by promising results in certain contexts, major platforms such as X (formerly Twitter), Facebook, and Instagram have begun shifting from centralized moderation to decentralized, crowd-based approaches.</div><div>In parallel, advances in Large Language Models (LLMs) have shown strong performance across core fact-checking tasks, including claim detection and evidence evaluation. However, their potential role in crowdsourced workflows remains unexplored. This paper investigates whether LLM-powered generative agents—autonomous entities that emulate human behavior and decision-making—can meaningfully contribute to fact-checking tasks traditionally reserved for human crowds.</div><div>Using the protocol of La Barbera et al. (2024), we simulate crowds of generative agents with diverse demographic and ideological profiles. Agents retrieve evidence, assess claims along multiple quality dimensions, and issue final veracity judgments. Our results show that agent crowds outperform human crowds in truthfulness classification, exhibit higher internal consistency, and show reduced susceptibility to social and cognitive biases. Compared to humans, agents rely more systematically on informative criteria such as <em>Accuracy</em>, <em>Precision</em>, and <em>Informativeness</em>, suggesting a more structured decision-making process. 
Overall, our findings highlight the potential of generative agents as scalable, consistent, and less biased contributors to crowd-based fact-checking systems.</div></div>","PeriodicalId":52228,"journal":{"name":"Online Social Networks and Media","volume":"48 ","pages":"Article 100326"},"PeriodicalIF":0.0,"publicationDate":"2025-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144713785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mitigating radicalization in recommender systems by rewiring graph with deep reinforcement learning","authors":"Omran Berjawi , Giuseppe Fenza , Rida Khatoun , Vincenzo Loia","doi":"10.1016/j.osnem.2025.100325","DOIUrl":"10.1016/j.osnem.2025.100325","url":null,"abstract":"<div><div>Recommender systems play a crucial role in enhancing user experiences by suggesting content based on users consumption histories. However, a significant challenge they encounter is managing the radicalized contents spreading and preventing users from becoming trapped in radicalized pathways. This paper address the radicalization problem in recommendation systems (RS) by proposing a graph-based approach called Deep Reinforcement Learning Graph Rewiring (DRLGR). First, we measure the radicalization score (Rad(G)) for the recommendation graph by assessing the extent of users’ exposure to radical content. Second, we develop a Reinforcement Learning (RL) method, which learns over time which edges among many possible ones should be rewired to reduce the Rad(G). 
The experimental results on video and news recommendation datasets show that DRLGR consistently reduces the radicalization score and demonstrates more sustained improvements over time, particularly in more complex graphs, compared to baseline methods and heuristic approaches such as HEU, which may reduce radicalization more rapidly in the early stages with fewer interventions but plateau over time.</div></div>","PeriodicalId":52228,"journal":{"name":"Online Social Networks and Media","volume":"48 ","pages":"Article 100325"},"PeriodicalIF":0.0,"publicationDate":"2025-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144711820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Computational analysis of Information Disorder in Cognitive Warfare","authors":"Angelo Gaeta , Vincenzo Loia , Angelo Lorusso , Francesco Orciuoli , Antonella Pascuzzo","doi":"10.1016/j.osnem.2025.100322","DOIUrl":"10.1016/j.osnem.2025.100322","url":null,"abstract":"<div><div>Cognitive Warfare represents the modern evolution of traditional conflict, where the human mind emerges as the primary battleground, and information serves as a weapon to influence people’s thoughts, perceptions, and behaviors. Adopting the Information Disorder perspective, this work meticulously explores the phenomena associated with Cognitive Warfare, particularly as they spread across online social networks and media, to better understand their textual nature. In particular, the work focuses on specific cognitive weapons predominantly used by malicious actors in this context, such as the dissemination of misleading political news, junk science, and conspiracy theories. Therefore, the paper proposes an approach to identify, extract, and assess text-based features able to characterize the forms of Information Disorder involved in Cognitive Warfare. The proposed approach starts with a literature review and ends by assessing the identified and selected features through comprehensive experimentation based on a well-known dataset and conducted through the application of machine learning methods. In particular, by applying the Rough Set Theory and explainable AI it is found that features belonging to readability, psychological, and linguistic categories demonstrate a significant contribution in classifying the aforementioned forms of disorder. 
The obtained results are highly valuable as they can be leveraged to analyze critical aspects of Information Disorder, such as identifying the intent behind manipulated content and its targeted audience.</div></div>","PeriodicalId":52228,"journal":{"name":"Online Social Networks and Media","volume":"48 ","pages":"Article 100322"},"PeriodicalIF":0.0,"publicationDate":"2025-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144679439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}