Conversational Resilience: Quantifying and Predicting Conversational Outcomes Following Adverse Events

Charlotte L. Lambert, A. Rajagopal, Eshwar Chandrasekharan
{"title":"Conversational Resilience: Quantifying and Predicting Conversational Outcomes Following Adverse Events","authors":"Charlotte L. Lambert, A. Rajagopal, Eshwar Chandrasekharan","doi":"10.1609/icwsm.v16i1.19314","DOIUrl":null,"url":null,"abstract":"Online conversations, just like offline ones, are susceptible to influence by bad actors. These users have the capacity to derail neutral or even prosocial discussions through adverse behavior. Moderators and users alike would benefit from more resilient online conversations, i.e., those that can survive the influx of adverse behavior to which many conversations fall victim. In this paper, we examine the notion of conversational resilience: what makes a conversation more or less capable of withstanding an adverse interruption? Working with 11.5M comments from eight mainstream subreddits, we compiled more than 5.8M comment threads (i.e., conversations). Using 239K relevant conversations, we examine how well comment, user, and subreddit characteristics can predict conversational outcomes. More than half of all conversations proceed after the first adverse event. Six out of ten conversations that proceed result in future removals. Comments violating platform-wide norms and those written by authors with a history of norm violations lead to not only more norm violations, but also fewer prosocial outcomes. However, conversations in more populated subreddits and conversations where the first adverse event's author was initially a strong contributor are capable of minimizing future removals and promoting prosocial outcomes after an adverse event. By understanding factors that contribute to conversational resilience we shed light onto what types of behavior can be encouraged to promote prosocial outcomes even in the face of adversity.","PeriodicalId":175641,"journal":{"name":"International Conference on Web and Social Media","volume":"49 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Web and Social Media","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1609/icwsm.v16i1.19314","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6

Abstract

Online conversations, just like offline ones, are susceptible to influence by bad actors. These users have the capacity to derail neutral or even prosocial discussions through adverse behavior. Moderators and users alike would benefit from more resilient online conversations, i.e., those that can survive the influx of adverse behavior to which many conversations fall victim. In this paper, we examine the notion of conversational resilience: what makes a conversation more or less capable of withstanding an adverse interruption? Working with 11.5M comments from eight mainstream subreddits, we compiled more than 5.8M comment threads (i.e., conversations). Using 239K relevant conversations, we examine how well comment, user, and subreddit characteristics can predict conversational outcomes. More than half of all conversations proceed after the first adverse event. Six out of ten conversations that proceed result in future removals. Comments violating platform-wide norms and those written by authors with a history of norm violations lead to not only more norm violations, but also fewer prosocial outcomes. However, conversations in more populated subreddits and conversations where the first adverse event's author was initially a strong contributor are capable of minimizing future removals and promoting prosocial outcomes after an adverse event. By understanding factors that contribute to conversational resilience, we shed light on what types of behavior can be encouraged to promote prosocial outcomes even in the face of adversity.
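The paper itself does not publish code, but the setup the abstract describes, relating comment-, user-, and subreddit-level characteristics to conversational outcomes after the first adverse event, can be sketched in a few lines. The sketch below is illustrative only: the feature columns, the toy data, and the choice of a scaled logistic regression are assumptions made for this example, not the authors' implementation.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical per-conversation feature table: one row per conversation,
# summarized around its first adverse event (e.g., the first removed comment).
conversations = pd.DataFrame({
    # comment-level: properties of the adverse comment itself
    "violates_platform_norms":    [0, 1, 0, 1, 1, 0],
    # user-level: history of the adverse comment's author
    "author_prior_removals":      [0, 7, 2, 1, 5, 0],
    "author_prior_contributions": [120, 4, 30, 250, 8, 90],
    # subreddit-level: how populated the community is
    "subreddit_subscribers":      [2_000_000, 50_000, 400_000,
                                   9_000_000, 30_000, 700_000],
    # outcome: did the conversation see further removals afterwards?
    "future_removal":             [0, 1, 1, 0, 1, 0],
})

X = conversations.drop(columns=["future_removal"])
y = conversations["future_removal"]

# Fit a simple classifier and inspect which characteristics push the
# predicted probability of future removals up or down.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)
for name, coef in zip(X.columns, model[-1].coef_[0]):
    print(f"{name}: {coef:+.3f}")

The abstract names several distinct outcomes (whether the conversation proceeds at all, whether further removals occur, and whether prosocial outcomes follow), so in practice one such model would be fit per outcome of interest.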