Estimating Domain-Specific User Expertise for Answer Retrieval in Community Question-Answering Platforms

Wern Han Lim, Mark James Carman, S. J. Wong
DOI: 10.1145/3015022.3015032
Venue: Proceedings of the 21st Australasian Document Computing Symposium
Published: 2016-12-05
Citations: 2

Abstract

Community Question-Answering (CQA) platforms leverage the inherent wisdom of the crowd, enabling users to retrieve quality information from domain experts through natural language. An important and challenging task is to identify reliable and trusted experts on large, popular CQA platforms. State-of-the-art graph-based approaches to expertise estimation consider only user-user interactions without taking the relative contribution of individual answers into account, while pairwise-comparison approaches consider only pairs involving the best answerer of each question. This research argues that the user's relative contribution towards solving a question must be accounted for when estimating user expertise, and proposes a content-agnostic measure of user contributions. This measure is incorporated into a competition-based approach for ranking users' question-answering ability. The paper analyses how improvements in user expertise estimation affect applications in expert search and answer quality prediction. Experiments on the Yahoo! Chiebukuro data show encouraging performance improvements and robustness over state-of-the-art approaches.
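The abstract does not give the paper's formulas, but the general idea of a competition-based ranking weighted by per-answer contribution can be sketched with an Elo-style update. In this illustrative sketch (not the authors' actual method), each question is treated as a competition among its answerers, the vote count on each answer stands in as a hypothetical content-agnostic contribution signal, and a user's rating change is scaled by their relative share of the question's total votes:

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that player a 'beats' player b under a logistic (Elo) model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))


def update_ratings(ratings: dict, answers: list, base: float = 1500.0, k: float = 32.0) -> dict:
    """Elo-style sketch: treat one question as a competition among its answerers.

    ratings: user -> current expertise rating (unseen users start at `base`)
    answers: list of (user, votes) pairs for one question; `votes` is a
             hypothetical stand-in for a content-agnostic contribution measure.
    """
    total = sum(v for _, v in answers) or 1
    deltas = {u: 0.0 for u, _ in answers}
    for u, vu in answers:
        for w, vw in answers:
            if u == w:
                continue
            # Actual outcome of the pairwise comparison: win / draw / loss.
            s = 1.0 if vu > vw else 0.5 if vu == vw else 0.0
            e = expected_score(ratings.get(u, base), ratings.get(w, base))
            # Scale the update by u's relative contribution to this question.
            deltas[u] += k * (vu / total) * (s - e)
    # Apply all deltas after the pairwise pass so order of pairs doesn't matter.
    for u, d in deltas.items():
        ratings[u] = ratings.get(u, base) + d
    return ratings
```

Unlike best-answerer-only pairwise schemes, every answerer of a question participates here, and low-contribution answers move ratings only slightly.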