Several methods of ranking retrieval systems with partial relevance judgment

Shengli Wu, S. McClean
{"title":"基于部分相关判断的检索系统排序方法","authors":"Shengli Wu, S. McClean","doi":"10.1109/ICDIM.2007.4444193","DOIUrl":null,"url":null,"abstract":"Some measures such as mean average precision and recall level precision are considered as good system-oriented measures, because they concern both precision and recall that are two important aspects for effectiveness evaluation of information retrieval systems. However, such good system-oriented measures suffer from some shortcomings when partial relevance judgment is used. In this paper, we discuss how to rank retrieval systems in the condition of partial relevance judgment, which is common in major retrieval evaluation events such as TREC conferences and NTCIR workshops. Four system-oriented measures, which are mean average precision, recall level precision, normalized discount cumulative gain, and normalized average precision over all documents, are discussed. Our investigation shows that averaging values over a set of queries may not be the most reliable approach to rank a group of retrieval systems. Some alternatives such as Bar da count. Condorcet voting, and the zero-one normalization method, are investigated. Experimental results are also presented for the evaluation of these methods.","PeriodicalId":198626,"journal":{"name":"2007 2nd International Conference on Digital Information Management","volume":"116 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2007-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Several methods of ranking retrieval systems with partial relevance judgment\",\"authors\":\"Shengli Wu, S. McClean\",\"doi\":\"10.1109/ICDIM.2007.4444193\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Some measures such as mean average precision and recall level precision are considered as good system-oriented measures, because they concern both precision and recall that are two important aspects for effectiveness evaluation of information retrieval systems. However, such good system-oriented measures suffer from some shortcomings when partial relevance judgment is used. In this paper, we discuss how to rank retrieval systems in the condition of partial relevance judgment, which is common in major retrieval evaluation events such as TREC conferences and NTCIR workshops. Four system-oriented measures, which are mean average precision, recall level precision, normalized discount cumulative gain, and normalized average precision over all documents, are discussed. Our investigation shows that averaging values over a set of queries may not be the most reliable approach to rank a group of retrieval systems. Some alternatives such as Bar da count. Condorcet voting, and the zero-one normalization method, are investigated. 
Experimental results are also presented for the evaluation of these methods.\",\"PeriodicalId\":198626,\"journal\":{\"name\":\"2007 2nd International Conference on Digital Information Management\",\"volume\":\"116 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2007-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2007 2nd International Conference on Digital Information Management\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICDIM.2007.4444193\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2007 2nd International Conference on Digital Information Management","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDIM.2007.4444193","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4

Abstract

Measures such as mean average precision and recall level precision are considered good system-oriented measures because they take into account both precision and recall, the two key aspects of effectiveness evaluation for information retrieval systems. However, these measures suffer from shortcomings when only partial relevance judgment is available. In this paper, we discuss how to rank retrieval systems under partial relevance judgment, which is common in major retrieval evaluation events such as the TREC conferences and NTCIR workshops. Four system-oriented measures are discussed: mean average precision, recall level precision, normalized discounted cumulative gain, and normalized average precision over all documents. Our investigation shows that averaging values over a set of queries may not be the most reliable approach for ranking a group of retrieval systems. Alternatives such as the Borda count, Condorcet voting, and the zero-one normalization method are investigated, and experimental results are presented for the evaluation of these methods.
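The abstract contrasts simple averaging of per-query scores with voting-style aggregation of per-query rankings. As a minimal illustrative sketch (not taken from the paper), the Python snippet below shows how a Borda count could aggregate per-query average-precision scores into an overall system ranking; the system names and score values are hypothetical.

```python
# Illustrative sketch: Borda-count aggregation of per-query system rankings,
# one alternative to averaging raw scores. Data below is hypothetical.

def borda_count(per_query_scores):
    """per_query_scores: list of dicts mapping system name -> effectiveness
    score (e.g. average precision) for one query. Returns systems sorted by
    total Borda points; a system earns more points the higher it ranks on
    each individual query."""
    totals = {}
    for scores in per_query_scores:
        # Rank systems for this query, best score first.
        ranked = sorted(scores, key=scores.get, reverse=True)
        n = len(ranked)
        for position, system in enumerate(ranked):
            # Top-ranked system gets n-1 points, last-ranked gets 0.
            totals[system] = totals.get(system, 0) + (n - 1 - position)
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    # Hypothetical per-query average-precision values for three systems.
    queries = [
        {"sysA": 0.42, "sysB": 0.35, "sysC": 0.50},
        {"sysA": 0.10, "sysB": 0.60, "sysC": 0.20},
        {"sysA": 0.55, "sysB": 0.30, "sysC": 0.45},
    ]
    print(borda_count(queries))
```

Condorcet voting would instead compare systems pairwise on each query, and the zero-one normalization method rescales each query's scores to [0, 1] before averaging; either could replace the point assignment in the loop above.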