A Novel Review Helpfulness Measure based on the User-Review-Item Paradigm

IF 2.6 | CAS Tier 4 (Computer Science) | JCR Q2 (COMPUTER SCIENCE, INFORMATION SYSTEMS)
Luca Pajola, Dongkai Chen, M. Conti, V. S. Subrahmanian
{"title":"基于用户评论项目范式的一种新的评论帮助度测度","authors":"Luca Pajola, Dongkai Chen, M. Conti, V. S. Subrahmanian","doi":"10.1145/3585280","DOIUrl":null,"url":null,"abstract":"Review platforms are viral online services where users share and read opinions about products (e.g., a smartphone) or experiences (e.g., a meal at a restaurant). Other users may be influenced by such opinions when making deciding what to buy. The usability of review platforms is currently limited by the massive number of opinions on many products. Therefore, showing only the most helpful reviews for each product is in the best interests of both users and the platform (e.g., Amazon). The current state of the art is far from accurate in predicting how helpful a review is. First, most existing works lack compelling comparisons as many studies are conducted on datasets that are not publicly available. As a consequence, new studies are not always built on top of prior baselines. Second, most existing research focuses only on features derived from the review text, ignoring other fundamental aspects of the review platforms (e.g., the other reviews of a product, the order in which they were submitted). In this paper, we first carefully review the most relevant works in the area published during the last 20 years. We then propose the User-Review-Item (URI) paradigm, a novel abstraction for modeling the problem that moves the focus of the feature engineering from the review to the platform level. We empirically validate the URI paradigm on a dataset of products from six Amazon categories with 270 trained models: on average, classifiers gain +4% in F1-score when considering the whole review platform context. In our experiments, we further emphasize some problems with the helpfulness prediction task: (1) the users’ writing style changes over time (i.e., concept drift), (2) past models do not generalize well across different review categories, and (3) past methods to generate the ground-truth produced unreliable helpfulness scores, affecting the model evaluation phase.","PeriodicalId":50940,"journal":{"name":"ACM Transactions on the Web","volume":" ","pages":""},"PeriodicalIF":2.6000,"publicationDate":"2023-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"A Novel Review Helpfulness Measure based on the User-Review-Item Paradigm\",\"authors\":\"Luca Pajola, Dongkai Chen, M. Conti, V. S. Subrahmanian\",\"doi\":\"10.1145/3585280\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Review platforms are viral online services where users share and read opinions about products (e.g., a smartphone) or experiences (e.g., a meal at a restaurant). Other users may be influenced by such opinions when making deciding what to buy. The usability of review platforms is currently limited by the massive number of opinions on many products. Therefore, showing only the most helpful reviews for each product is in the best interests of both users and the platform (e.g., Amazon). The current state of the art is far from accurate in predicting how helpful a review is. First, most existing works lack compelling comparisons as many studies are conducted on datasets that are not publicly available. As a consequence, new studies are not always built on top of prior baselines. 
Second, most existing research focuses only on features derived from the review text, ignoring other fundamental aspects of the review platforms (e.g., the other reviews of a product, the order in which they were submitted). In this paper, we first carefully review the most relevant works in the area published during the last 20 years. We then propose the User-Review-Item (URI) paradigm, a novel abstraction for modeling the problem that moves the focus of the feature engineering from the review to the platform level. We empirically validate the URI paradigm on a dataset of products from six Amazon categories with 270 trained models: on average, classifiers gain +4% in F1-score when considering the whole review platform context. In our experiments, we further emphasize some problems with the helpfulness prediction task: (1) the users’ writing style changes over time (i.e., concept drift), (2) past models do not generalize well across different review categories, and (3) past methods to generate the ground-truth produced unreliable helpfulness scores, affecting the model evaluation phase.\",\"PeriodicalId\":50940,\"journal\":{\"name\":\"ACM Transactions on the Web\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":2.6000,\"publicationDate\":\"2023-02-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Transactions on the Web\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1145/3585280\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on the Web","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3585280","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 2

Abstract

Review platforms are viral online services where users share and read opinions about products (e.g., a smartphone) or experiences (e.g., a meal at a restaurant). Other users may be influenced by such opinions when deciding what to buy. The usability of review platforms is currently limited by the massive number of opinions on many products. Therefore, showing only the most helpful reviews for each product is in the best interests of both users and the platform (e.g., Amazon). The current state of the art is far from accurate in predicting how helpful a review is. First, most existing works lack compelling comparisons, as many studies are conducted on datasets that are not publicly available. As a consequence, new studies are not always built on top of prior baselines. Second, most existing research focuses only on features derived from the review text, ignoring other fundamental aspects of the review platforms (e.g., the other reviews of a product, the order in which they were submitted). In this paper, we first carefully review the most relevant works in the area published during the last 20 years. We then propose the User-Review-Item (URI) paradigm, a novel abstraction for modeling the problem that moves the focus of the feature engineering from the review to the platform level. We empirically validate the URI paradigm on a dataset of products from six Amazon categories with 270 trained models: on average, classifiers gain +4% in F1-score when considering the whole review platform context. In our experiments, we further emphasize some problems with the helpfulness prediction task: (1) the users' writing style changes over time (i.e., concept drift), (2) past models do not generalize well across different review categories, and (3) past methods to generate the ground-truth produced unreliable helpfulness scores, affecting the model evaluation phase.
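To make the contrast in the abstract concrete, below is a minimal, hypothetical sketch (not the paper's actual pipeline) that contrasts review-level features, derived from the text alone, with URI-style platform-level features that place a review among the other reviews of the same item, and that applies the vote-ratio labeling commonly used to derive helpfulness ground truth, whose reliability the abstract questions. The DataFrame columns (product_id, review_text, timestamp, helpful_votes, total_votes), the feature names, and the thresholds are all assumptions made for illustration.

```python
# Hypothetical illustration of review-level vs. platform-level (URI-style) features
# and a common vote-ratio ground-truth labeling for review helpfulness.
import pandas as pd

reviews = pd.DataFrame({
    "product_id": ["A", "A", "A", "B", "B"],
    "review_text": [
        "Great phone, battery lasts two days.",
        "Bad.",
        "Camera is decent but the screen scratches easily.",
        "Tasty food, slow service.",
        "Average experience overall.",
    ],
    "timestamp": pd.to_datetime(
        ["2021-01-05", "2021-02-10", "2021-03-01", "2021-01-20", "2021-02-02"]
    ),
    "helpful_votes": [14, 0, 3, 5, 1],
    "total_votes": [15, 2, 4, 6, 4],
})

# Review-level features: computed from the review text in isolation.
reviews["length_chars"] = reviews["review_text"].str.len()
reviews["num_words"] = reviews["review_text"].str.split().str.len()

# Platform-level (URI-style) features: put each review in the context of the
# other reviews of the same item, e.g. submission order and relative length.
reviews = reviews.sort_values(["product_id", "timestamp"])
reviews["submission_rank"] = reviews.groupby("product_id").cumcount()
reviews["len_vs_item_mean"] = (
    reviews["length_chars"]
    / reviews.groupby("product_id")["length_chars"].transform("mean")
)

# Vote-ratio ground truth: a review counts as "helpful" if a large enough share
# of its votes are positive; thresholds like these are exactly the kind of
# labeling the abstract flags as unreliable.
MIN_VOTES, RATIO_THRESHOLD = 3, 0.75
voted = reviews[reviews["total_votes"] >= MIN_VOTES].copy()
voted["helpful"] = (voted["helpful_votes"] / voted["total_votes"]) >= RATIO_THRESHOLD

print(voted[["product_id", "submission_rank", "len_vs_item_mean", "helpful"]])
```

Note that submission_rank and len_vs_item_mean only exist once a review is placed next to the other reviews of the same item, which is precisely the platform context that text-only baselines ignore.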
Source journal
ACM Transactions on the Web
Engineering & Technology - Computer Science: Software Engineering
CiteScore
4.90
Self-citation rate
0.00%
Articles per year
26
Review turnaround
7.5 months
Journal description
Transactions on the Web (TWEB) is a journal publishing refereed articles reporting the results of research on Web content, applications, use, and related enabling technologies. Topics in the scope of TWEB include but are not limited to the following: Browsers and Web Interfaces; Electronic Commerce; Electronic Publishing; Hypertext and Hypermedia; Semantic Web; Web Engineering; Web Services and Service-Oriented Computing; XML. In addition, papers addressing the intersection of the following broader technologies with the Web are also in scope: Accessibility; Business Services; Education; Knowledge Management and Representation; Mobility and pervasive computing; Performance and scalability; Recommender systems; Searching, Indexing, Classification, Retrieval and Querying, Data Mining and Analysis; Security and Privacy; and User Interfaces. Papers discussing specific Web technologies, applications, content generation and management and use are within scope. Also, papers describing novel applications of the web as well as papers on the underlying technologies are welcome.