I. Dlala, Dorra Attiaoui, Arnaud Martin, B. B. Yaghlane
Title: Trolls Identification within an Uncertain Framework
Published in: 2014 IEEE 26th International Conference on Tools with Artificial Intelligence
Publication date: 2014-11-10
DOI: 10.1109/ICTAI.2014.153 (https://doi.org/10.1109/ICTAI.2014.153)
Citations: 14
Abstract
The web has played an important role in people's social lives since the emergence of Web 2.0. It facilitates interaction between users, giving them the possibility to freely interact, share and collaborate through social networks, online community forums, blogs, wikis and other online collaborative media. However, the web also has a negative side, such as the posting of inflammatory messages. Thus, the managers of online community forums constantly seek to improve the quality of their platforms. In fact, to preserve serenity and prevent disturbance of the normal atmosphere, managers often try to warn novice users against these malicious persons by posting messages such as "DO NOT FEED THE TROLLS". However, this kind of warning is not enough to curb the phenomenon. In this context, we propose a new approach for detecting malicious users, also called 'trolls', in order to allow community managers to revoke their ability to post online. To be more realistic, our proposal is defined within an uncertain framework. Based on the assumption that trolls integrate themselves into successful discussion threads, we try to detect the presence of such malicious users. Indeed, this method is based on a conflict measure from belief function theory, applied between the different messages of a thread. To show the feasibility and the results of our approach, we test it on different sets of simulated data.
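The abstract does not specify which conflict measure the authors use; a classic choice in belief function (Dempster-Shafer) theory is Dempster's degree of conflict, the total mass assigned to pairs of focal elements with empty intersection. The sketch below illustrates that measure on a hypothetical two-element frame {troll, regular}; the mass values and the frame are invented for illustration and are not taken from the paper.

```python
from itertools import product


def conflict(m1: dict, m2: dict) -> float:
    """Dempster's degree of conflict K between two mass functions.

    m1, m2 map focal elements (frozensets over the frame of
    discernment) to masses summing to 1. K is the total mass of
    pairs (A, B) whose intersection is empty; K close to 1 means
    the two sources strongly disagree.
    """
    return sum(
        m1[a] * m2[b]
        for a, b in product(m1, m2)
        if not (a & b)  # empty intersection => conflicting pair
    )


# Hypothetical opinions derived from two messages in a thread:
# message 1 mostly supports "troll", message 2 mostly "regular".
m_msg1 = {frozenset({"troll"}): 0.7,
          frozenset({"troll", "regular"}): 0.3}
m_msg2 = {frozenset({"regular"}): 0.6,
          frozenset({"troll", "regular"}): 0.4}

k = conflict(m_msg1, m_msg2)  # only ({troll},{regular}) conflicts: 0.7*0.6
print(k)
```

A thread-level detector in this spirit could compare each message's mass function against the others and flag users whose messages are persistently in high conflict with the rest of the thread.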