Evaluation of Retrieval Algorithms for Expertise Search

Gaya K. Jayasinghe, Sarvnaz Karimi, M. Ayre
{"title":"专业知识检索算法的评价","authors":"Gaya K. Jayasinghe, Sarvnaz Karimi, M. Ayre","doi":"10.1145/3015022.3015035","DOIUrl":null,"url":null,"abstract":"Evaluation of expertise search systems is a non-trivial task. While in a typical search engine the responses to user queries are documents, the search results for an expertise retrieval system are people. The relevancy scores indicate how knowledgeable they are on a given topic. Within an organisation, such a ranking of employees could potentially be difficult as well as controversial. We introduce an in-house capability search system built for an organisation with a diverse range of disciplines. We report on two attempts of evaluating six different ranking algorithms implemented for this system. Evaluating the system using relevance judgements produced in each of the two attempts leads to an understanding of how different methods of collecting judgements on people's expertise can lead to different effectiveness of algorithms.","PeriodicalId":334601,"journal":{"name":"Proceedings of the 21st Australasian Document Computing Symposium","volume":"28 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Evaluation of Retrieval Algorithms for Expertise Search\",\"authors\":\"Gaya K. Jayasinghe, Sarvnaz Karimi, M. Ayre\",\"doi\":\"10.1145/3015022.3015035\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Evaluation of expertise search systems is a non-trivial task. While in a typical search engine the responses to user queries are documents, the search results for an expertise retrieval system are people. The relevancy scores indicate how knowledgeable they are on a given topic. Within an organisation, such a ranking of employees could potentially be difficult as well as controversial. We introduce an in-house capability search system built for an organisation with a diverse range of disciplines. We report on two attempts of evaluating six different ranking algorithms implemented for this system. 
Evaluating the system using relevance judgements produced in each of the two attempts leads to an understanding of how different methods of collecting judgements on people's expertise can lead to different effectiveness of algorithms.\",\"PeriodicalId\":334601,\"journal\":{\"name\":\"Proceedings of the 21st Australasian Document Computing Symposium\",\"volume\":\"28 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-12-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 21st Australasian Document Computing Symposium\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3015022.3015035\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 21st Australasian Document Computing Symposium","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3015022.3015035","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Evaluation of expertise search systems is a non-trivial task. While in a typical search engine the responses to user queries are documents, the search results for an expertise retrieval system are people. The relevancy scores indicate how knowledgeable they are on a given topic. Within an organisation, such a ranking of employees could potentially be difficult as well as controversial. We introduce an in-house capability search system built for an organisation with a diverse range of disciplines. We report on two attempts at evaluating six different ranking algorithms implemented for this system. Evaluating the system using relevance judgements produced in each of the two attempts leads to an understanding of how different methods of collecting judgements on people's expertise can lead to different effectiveness of algorithms.
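The abstract states that the effectiveness measured for a ranking algorithm depends on how the relevance judgements about people's expertise were collected. As a minimal illustration of that point (not taken from the paper; the person IDs, judgement sets, and the choice of average precision as the metric are assumptions for the sketch), the Python snippet below scores one ranked list of people against two different judgement sets, yielding two different effectiveness values for the same algorithm.

# Hypothetical sketch: scoring a ranked list of experts against a set of
# relevance judgements using average precision (AP).
def average_precision(ranked_people, relevant_people):
    """ranked_people: list of person IDs returned by a ranking algorithm.
    relevant_people: set of person IDs judged relevant for the query topic."""
    hits, precision_sum = 0, 0.0
    for rank, person in enumerate(ranked_people, start=1):
        if person in relevant_people:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant_people) if relevant_people else 0.0

# Two judgement sets for the same topic (e.g. self-assessed vs. peer-assessed
# expertise) can give the same ranking different scores.
ranking = ["p3", "p1", "p7", "p2"]
print(average_precision(ranking, {"p1", "p2"}))  # 0.5
print(average_precision(ranking, {"p3", "p7"}))  # ~0.83

In the same way, the two judgement-collection attempts reported in the paper can lead to different conclusions about which of the six ranking algorithms is most effective.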