Exploiting User Feedback for Expert Finding in Community Question Answering
Xiang Cheng, Shuguang Zhu, Gang Chen, Sen Su
2015 IEEE International Conference on Data Mining Workshop (ICDMW), November 14, 2015
DOI: 10.1109/ICDMW.2015.181
Citations: 7
Abstract
Community Question Answering (CQA) is a popular online service where people ask and answer questions. Recently, with the accumulation of users and content on CQA platforms, answer quality has become a widespread concern. Expert finding has been proposed as one way to address this problem; it aims to find suitable answerers who can give high-quality answers. In this paper, we formalize expert finding as a learning-to-rank task by leveraging user feedback on answers (i.e., answer votes) as the "relevance" labels. To achieve this, we present a listwise learning-to-rank approach, referred to as ListEF. In ListEF, observing that questions in CQA are relatively short and are usually attached with tags, we propose a tagword topic model (TTM) to derive high-quality topical representations of questions. Based on TTM, we develop a COmpetition-based User exPertise Extraction (COUPE) method to capture user expertise features for given questions. We adopt the widely used listwise learning-to-rank method LambdaMART to train the ranking function. Finally, for a given question, we rank candidate users in descending order of the scores computed by the trained ranking function, and select the highest-ranked users as candidate experts. Experimental results on Stack Overflow show that both TTM and the ListEF approach are effective, with significant improvements over state-of-the-art methods.
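The final step of the pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear `score` function stands in for the trained LambdaMART model, and the user names and two-element feature vectors (imagined here as, e.g., a TTM topic-match score and a COUPE expertise score) are hypothetical placeholders.

```python
# Sketch of the ranking step: score each candidate user with a trained
# ranking function, sort in descending order of score, and select the
# top-k users as candidate experts. All names and numbers are invented.

def score(features, weights):
    """Linear stand-in for the trained LambdaMART ranking function."""
    return sum(f * w for f, w in zip(features, weights))

def rank_candidates(candidates, weights, top_k=2):
    """Return the top_k users with the highest predicted scores."""
    scored = [(user, score(feats, weights)) for user, feats in candidates]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [user for user, _ in scored[:top_k]]

# Hypothetical expertise features per candidate for a given question.
candidates = [
    ("alice", [0.9, 0.7]),
    ("bob",   [0.2, 0.4]),
    ("carol", [0.8, 0.9]),
]
weights = [0.6, 0.4]

print(rank_candidates(candidates, weights))  # → ['carol', 'alice']
```

In the actual approach, `score` would be a gradient-boosted tree ensemble trained with listwise LambdaMART on vote-derived relevance labels rather than a fixed linear function.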