Efficient Crowd-Powered Active Learning for Reliable Review Evaluation

Xinping Min, Yuliang Shi, Li-zhen Cui, Han Yu, Yuan Miao
{"title":"Efficient Crowd-Powered Active Learning for Reliable Review Evaluation","authors":"Xinping Min, Yuliang Shi, Li-zhen Cui, Han Yu, Yuan Miao","doi":"10.1145/3126973.3129307","DOIUrl":null,"url":null,"abstract":"To mitigate uncertainty in the quality of online purchases (e.g., e-commerce), many people rely on review comments from others in their decision-making processes. The key challenge in this situation is how to identify useful comments among a large corpus of candidate review comments with potentially varying usefulness. In this paper, we propose the Reliable Review Evaluation Framework (RREF) which combines crowdsourcing with machine learning to address this problem. To improve crowdsourcing quality control, we propose a novel review query crowdsourcing approach which jointly considers workers' track records in review provision and current workloads when allocating review comments for workers to rate. Using the ratings crowdsourced from workers, RREF then enhances the adaptive topic classification model selection and weighting functions of AdaBoost with dynamic keyword list reconstruction. RREF has been compared with state-of-the-art related frameworks using a large-scale real-world dataset, and demonstrated over 50% reduction in average classification errors.","PeriodicalId":370356,"journal":{"name":"International Conference on Crowd Science and Engineering","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Crowd Science and Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3126973.3129307","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

To mitigate uncertainty in the quality of online purchases (e.g., e-commerce), many people rely on review comments from others in their decision-making processes. The key challenge in this situation is how to identify useful comments among a large corpus of candidate review comments with potentially varying usefulness. In this paper, we propose the Reliable Review Evaluation Framework (RREF) which combines crowdsourcing with machine learning to address this problem. To improve crowdsourcing quality control, we propose a novel review query crowdsourcing approach which jointly considers workers' track records in review provision and current workloads when allocating review comments for workers to rate. Using the ratings crowdsourced from workers, RREF then enhances the adaptive topic classification model selection and weighting functions of AdaBoost with dynamic keyword list reconstruction. RREF has been compared with state-of-the-art related frameworks using a large-scale real-world dataset, and demonstrated over 50% reduction in average classification errors.
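To make the allocation idea concrete, the sketch below scores each worker by jointly weighing their track record (accuracy of past ratings) against their current workload, then assigns the next review comment to the best-scoring worker. This is a minimal illustration, not the paper's actual formulation: the field names, the Laplace smoothing, and the 0.7/0.3 mix are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Worker:
    worker_id: str
    correct_ratings: int = 0   # past ratings that agreed with consensus
    total_ratings: int = 0     # all past ratings submitted
    open_tasks: int = 0        # review comments currently assigned

def allocation_score(w: Worker, alpha: float = 0.7) -> float:
    """Hypothetical score mixing track record with current workload.

    Higher is better: accurate workers with light workloads are
    preferred. The alpha mix and the Laplace smoothing are
    illustrative choices, not taken from the paper.
    """
    accuracy = (w.correct_ratings + 1) / (w.total_ratings + 2)  # smoothed
    load_penalty = 1.0 / (1.0 + w.open_tasks)                   # idle -> 1.0
    return alpha * accuracy + (1.0 - alpha) * load_penalty

def assign_review(workers: list[Worker]) -> Worker:
    """Give the next review comment to the best-scoring worker."""
    best = max(workers, key=allocation_score)
    best.open_tasks += 1
    return best

workers = [Worker("w1", 90, 100, 5), Worker("w2", 40, 50, 0)]
print(assign_review(workers).worker_id)  # "w2": less accurate but idle
```

With these numbers the slightly less accurate but idle worker wins the assignment, which is the behavior a joint track-record/workload criterion is meant to produce.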
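Similarly, the weighting function the abstract says RREF enhances is AdaBoost's per-round example reweighting. The sketch below shows only that textbook baseline, with crowd-derived usefulness ratings standing in as labels; the paper's dynamic keyword-list reconstruction is not reproduced here.

```python
import math

def adaboost_reweight(weights, predictions, labels):
    """One round of the standard AdaBoost example reweighting that a
    framework like RREF builds on (textbook sketch, not the paper's
    enhanced version).

    weights:     current example weights (sum to 1)
    predictions: weak-learner outputs in {+1, -1}
    labels:      crowd-derived labels in {+1, -1}
    """
    # Weighted error of this round's weak learner.
    err = sum(w for w, p, y in zip(weights, predictions, labels) if p != y)
    err = min(max(err, 1e-10), 1.0 - 1e-10)      # keep the log finite
    alpha = 0.5 * math.log((1.0 - err) / err)    # weak learner's vote weight
    # Misclassified examples (p * y == -1) gain weight; correct ones lose it.
    new = [w * math.exp(-alpha * p * y)
           for w, p, y in zip(weights, predictions, labels)]
    total = sum(new)
    return [w / total for w in new], alpha

# The one mistake (the second example) gains weight for the next round.
weights, alpha = adaboost_reweight([0.25] * 4, [1, 1, -1, 1], [1, -1, -1, 1])
print([round(w, 3) for w in weights], round(alpha, 3))
# [0.167, 0.5, 0.167, 0.167] 0.549
```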