{"title":"对Mira的思考:信息检索中的交互式评价","authors":"Mark D. Dunlop","doi":"10.1002/1097-4571(2000)9999:9999%3C::AID-ASI1042%3E3.0.CO;2-7","DOIUrl":null,"url":null,"abstract":"Evaluation in information retrieval (IR) has focussed largely on noninteractive evaluation of text retrieval systems. This is increasingly at odds with how people use modern IR systems: in highly interactive settings to access linked, multimedia information. Furthermore, this approach ignores potential improvements through better interface design. In 1996 the Commission of the European Union Information Technologies Programme, funded a three year working group, Mira, to discuss and advance research in the area of evaluation frameworks for interactive and multimedia IR applications. Led by Keith van Rijsbergen, Steve Draper and myself from Glasgow University, this working group brought together many of the leading researchers in the evaluation domain from both the IR and human computer interaction (HCI) communities. This paper presents my personal view of the main lines of discussion that took place throughout Mira: importing and adapting evaluation techniques from HCI, evaluating at different levels as appropriate, evaluating against different types of relevance and the new challenges that drive the need for rethinking the old evaluation approaches. The paper concludes that we need to consider more varied forms of evaluation to complement engine evaluation.","PeriodicalId":50013,"journal":{"name":"Journal of the American Society for Information Science and Technology","volume":"24 1","pages":"1269-1274"},"PeriodicalIF":0.0000,"publicationDate":"2000-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"34","resultStr":"{\"title\":\"Reflections on Mira: Interactive evaluation in information retrieval\",\"authors\":\"Mark D. 
Dunlop\",\"doi\":\"10.1002/1097-4571(2000)9999:9999%3C::AID-ASI1042%3E3.0.CO;2-7\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Evaluation in information retrieval (IR) has focussed largely on noninteractive evaluation of text retrieval systems. This is increasingly at odds with how people use modern IR systems: in highly interactive settings to access linked, multimedia information. Furthermore, this approach ignores potential improvements through better interface design. In 1996 the Commission of the European Union Information Technologies Programme, funded a three year working group, Mira, to discuss and advance research in the area of evaluation frameworks for interactive and multimedia IR applications. Led by Keith van Rijsbergen, Steve Draper and myself from Glasgow University, this working group brought together many of the leading researchers in the evaluation domain from both the IR and human computer interaction (HCI) communities. This paper presents my personal view of the main lines of discussion that took place throughout Mira: importing and adapting evaluation techniques from HCI, evaluating at different levels as appropriate, evaluating against different types of relevance and the new challenges that drive the need for rethinking the old evaluation approaches. 
The paper concludes that we need to consider more varied forms of evaluation to complement engine evaluation.\",\"PeriodicalId\":50013,\"journal\":{\"name\":\"Journal of the American Society for Information Science and Technology\",\"volume\":\"24 1\",\"pages\":\"1269-1274\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2000-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"34\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of the American Society for Information Science and Technology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1002/1097-4571(2000)9999:9999%3C::AID-ASI1042%3E3.0.CO;2-7\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of the American Society for Information Science and Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1002/1097-4571(2000)9999:9999%3C::AID-ASI1042%3E3.0.CO;2-7","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 34
Reflections on Mira: Interactive evaluation in information retrieval
Evaluation in information retrieval (IR) has focussed largely on noninteractive evaluation of text retrieval systems. This is increasingly at odds with how people use modern IR systems: in highly interactive settings to access linked, multimedia information. Furthermore, this approach ignores potential improvements through better interface design. In 1996 the Commission of the European Union Information Technologies Programme funded a three-year working group, Mira, to discuss and advance research in the area of evaluation frameworks for interactive and multimedia IR applications. Led by Keith van Rijsbergen, Steve Draper and myself from Glasgow University, this working group brought together many of the leading researchers in the evaluation domain from both the IR and human-computer interaction (HCI) communities. This paper presents my personal view of the main lines of discussion that took place throughout Mira: importing and adapting evaluation techniques from HCI, evaluating at different levels as appropriate, evaluating against different types of relevance, and the new challenges that drive the need for rethinking the old evaluation approaches. The paper concludes that we need to consider more varied forms of evaluation to complement engine evaluation.
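The "engine evaluation" the abstract contrasts with interactive evaluation is typified by batch measurement of a ranked result list against fixed relevance judgments, with no user in the loop. A minimal sketch of that style of evaluation (the document IDs and judgments here are invented for illustration, not taken from the paper):

```python
# Classic noninteractive "engine evaluation": score a system's ranked
# output against assessor-supplied relevance judgments. No interface or
# user behaviour enters the measurement -- exactly the limitation the
# paper argues against relying on exclusively.

def precision_recall(ranked, relevant, k):
    """Precision and recall of the top-k results of a ranked list."""
    retrieved = ranked[:k]
    hits = sum(1 for doc in retrieved if doc in relevant)
    return hits / k, hits / len(relevant)

# Hypothetical run: a system's ranked output and the judged-relevant set.
ranked = ["d3", "d1", "d7", "d2", "d9"]
relevant = {"d1", "d2", "d5"}

p, r = precision_recall(ranked, relevant, 5)
print(p, r)  # precision@5 = 0.4, recall@5 = 2/3
```

Measures like these say nothing about whether a better interface would have let the user find `d5` anyway, which is the gap interactive evaluation aims to fill.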