Bayesian Inference for Information Retrieval Evaluation
Ben Carterette
DOI: 10.1145/2808194.2809469
Proceedings of the 2015 International Conference on The Theory of Information Retrieval
Published: 2015-09-27
A key component of experimentation in IR is statistical hypothesis testing, which researchers and developers use to make inferences about the effectiveness of their system relative to others. A statistical hypothesis test can tell us the likelihood that small mean differences in effectiveness (on the order of 5%, say) are due to randomness or measurement error, and thus is critical for making progress in research. But the tests typically used in IR, such as the t-test and the Wilcoxon signed-rank test, are very general; they were not developed specifically for the problems we face in information retrieval evaluation. A better approach would take advantage of the fact that the atomic unit of measurement in IR is the relevance judgment rather than the effectiveness measure, and develop tests that model relevance directly. In this work we present such an approach, showing theoretically that modeling relevance in this way naturally gives rise to the effectiveness measures we care about. We demonstrate the usefulness of our model on both simulated data and a diverse set of runs from various TREC tracks.
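To make the contrast concrete, the following sketch compares the classical paired tests named in the abstract with a direct Bayesian treatment of relevance judgments. This is an illustrative simulation, not the paper's actual model: the data are synthetic, and the Beta-Bernoulli posterior on precision stands in for the richer relevance model the paper develops.

```python
# Sketch: classical tests on an aggregated effectiveness measure vs. a
# Bayesian posterior built directly from (simulated) relevance judgments.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated per-topic effectiveness scores (e.g. average precision) for two runs.
n_topics = 50
run_a = rng.beta(2, 5, n_topics)
run_b = np.clip(run_a + rng.normal(0.02, 0.05, n_topics), 0, 1)

# Classical paired tests operate on the aggregated effectiveness measure.
t_stat, t_p = stats.ttest_rel(run_b, run_a)
w_stat, w_p = stats.wilcoxon(run_b, run_a)

# Bayesian alternative at the level of the atomic unit: binary relevance
# judgments. With a Beta(1, 1) prior on precision at depth k, the observed
# judgments give a Beta posterior, and P(prec_B > prec_A) follows directly.
k = 10  # judged ranking depth per topic (illustrative choice)
rel_a = rng.binomial(1, 0.30, (n_topics, k)).sum()  # relevant docs found by A
rel_b = rng.binomial(1, 0.35, (n_topics, k)).sum()  # relevant docs found by B
total = n_topics * k

post_a = stats.beta(1 + rel_a, 1 + total - rel_a)
post_b = stats.beta(1 + rel_b, 1 + total - rel_b)
samples = 100_000
p_b_better = np.mean(post_b.rvs(samples, random_state=rng)
                     > post_a.rvs(samples, random_state=rng))

print(f"paired t-test p = {t_p:.4f}, Wilcoxon p = {w_p:.4f}")
print(f"P(precision_B > precision_A | judgments) = {p_b_better:.3f}")
```

Note the difference in what each output means: the frequentist p-values bound the probability of the observed score difference under a null hypothesis, while the Bayesian quantity is the posterior probability that one system's precision exceeds the other's, computed from the judgments themselves.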