DeepTrust: An Automatic Framework to Detect Trustworthy Users in Opinion-based Systems

Edoardo Serra, Anu Shrestha, Francesca Spezzano, A. Squicciarini
{"title":"DeepTrust: An Automatic Framework to Detect Trustworthy Users in Opinion-based Systems","authors":"Edoardo Serra, Anu Shrestha, Francesca Spezzano, A. Squicciarini","doi":"10.1145/3374664.3375744","DOIUrl":null,"url":null,"abstract":"Opinion spamming has recently gained attention as more and more online platforms rely on users' opinions to help potential customers make informed decisions on products and services. Yet, while work on opinion spamming abounds, most efforts have focused on detecting an individual reviewer as spammer or fraudulent. We argue that this is no longer sufficient, as reviewers may contribute to an opinion-based system in various ways, and their input could range from highly informative to noisy or even malicious. In an effort to improve the detection of trustworthy individuals within opinion-based systems, in this paper, we develop a supervised approach to differentiate among different types of reviewers. Particularly, we model the problem of detecting trustworthy reviewers as a multi-class classification problem, wherein users may be fraudulent, unreliable or uninformative, or trustworthy. We note that expanding from the classic binary classification of trustworthy/untrustworthy (or malicious) reviewers is an interesting and challenging problem. Some untrustworthy reviewers may behave similarly to reliable reviewers, and yet be rooted by dark motives. On the contrary, other untrustworthy reviewers may not be malicious but rather lazy or unable to contribute to the common knowledge of the reviewed item. Our proposed method, DeepTrust, relies on a deep recurrent neural network that provides embeddings aggregating temporal information: we consider users' behavior over time, as they review multiple products. We model the interactions of reviewers and the products they review using a temporal bipartite graph and consider the context of each rating by including other reviewers' ratings of the same items. We carry out extensive experiments on a real-world dataset of Amazon reviewers, with known ground truth about spammers and fraudulent reviews. Our results show that DeepTrust can detect trustworthy, uninformative, and fraudulent users with an F1-measure of 0.93. Also, we drastically improve on detecting fraudulent reviewers (AUROC of 0.97 and average precision of 0.99 when combining DeepTrust with the F&G algorithm) as compared to REV2 state-of-the-art methods (AUROC of 0.79 and average precision of 0.48). Further, DeepTrust is robust to cold start users and overperforms all existing baselines.","PeriodicalId":171521,"journal":{"name":"Proceedings of the Tenth ACM Conference on Data and Application Security and Privacy","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"17","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Tenth ACM Conference on Data and Application Security and Privacy","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3374664.3375744","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 17

Abstract

Opinion spamming has recently gained attention as more and more online platforms rely on users' opinions to help potential customers make informed decisions about products and services. Yet, while work on opinion spamming abounds, most efforts have focused on detecting whether an individual reviewer is a spammer or fraudulent. We argue that this is no longer sufficient, as reviewers may contribute to an opinion-based system in various ways, and their input can range from highly informative to noisy or even malicious. In an effort to improve the detection of trustworthy individuals within opinion-based systems, in this paper we develop a supervised approach to differentiate among different types of reviewers. In particular, we model the problem of detecting trustworthy reviewers as a multi-class classification problem, wherein users may be fraudulent, unreliable or uninformative, or trustworthy. We note that expanding beyond the classic binary classification of trustworthy/untrustworthy (or malicious) reviewers is an interesting and challenging problem. Some untrustworthy reviewers may behave similarly to reliable reviewers, and yet be driven by malicious motives. Conversely, other untrustworthy reviewers may not be malicious but rather lazy or unable to contribute to the common knowledge about the reviewed item. Our proposed method, DeepTrust, relies on a deep recurrent neural network that provides embeddings aggregating temporal information: we consider users' behavior over time as they review multiple products. We model the interactions between reviewers and the products they review using a temporal bipartite graph, and we consider the context of each rating by including other reviewers' ratings of the same items. We carry out extensive experiments on a real-world dataset of Amazon reviewers with known ground truth about spammers and fraudulent reviews. Our results show that DeepTrust can detect trustworthy, uninformative, and fraudulent users with an F1-measure of 0.93. Also, we drastically improve the detection of fraudulent reviewers (AUROC of 0.97 and average precision of 0.99 when combining DeepTrust with the F&G algorithm) as compared to the state-of-the-art REV2 method (AUROC of 0.79 and average precision of 0.48). Further, DeepTrust is robust to cold-start users and outperforms all existing baselines.
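To make the abstract's description of the temporal model concrete, the following is a minimal, hypothetical sketch (not the authors' released code, and not necessarily the exact DeepTrust architecture): a recurrent network that consumes a reviewer's time-ordered sequence of per-review feature vectors and outputs one of three classes (trustworthy, uninformative/unreliable, fraudulent). The use of PyTorch, a GRU, and the particular feature composition are all assumptions made for illustration.

```python
# Hypothetical sketch of a recurrent reviewer classifier in the spirit of DeepTrust.
import torch
import torch.nn as nn

class ReviewerClassifier(nn.Module):
    def __init__(self, feature_dim: int, hidden_dim: int = 64, num_classes: int = 3):
        super().__init__()
        # The GRU aggregates temporal information across the reviewer's rating history.
        self.rnn = nn.GRU(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, review_seq: torch.Tensor) -> torch.Tensor:
        # review_seq: (batch, num_reviews, feature_dim). Each feature vector could
        # combine the reviewer's own rating with contextual statistics such as the
        # mean/std of other users' ratings on the same product (an assumption here).
        _, last_hidden = self.rnn(review_seq)       # last_hidden: (1, batch, hidden_dim)
        return self.head(last_hidden.squeeze(0))    # class logits per reviewer

# Minimal usage example with synthetic data:
# 8 reviewers, 10 reviews each, 4 features per review.
model = ReviewerClassifier(feature_dim=4)
logits = model(torch.randn(8, 10, 4))               # shape: (8, 3)
```

The final hidden state serves as the reviewer embedding; a cross-entropy loss over the three classes would train it end to end.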