Reflections on Mira: Interactive evaluation in information retrieval

Mark D. Dunlop
{"title":"对Mira的思考:信息检索中的交互式评价","authors":"Mark D. Dunlop","doi":"10.1002/1097-4571(2000)9999:9999%3C::AID-ASI1042%3E3.0.CO;2-7","DOIUrl":null,"url":null,"abstract":"Evaluation in information retrieval (IR) has focussed largely on noninteractive evaluation of text retrieval systems. This is increasingly at odds with how people use modern IR systems: in highly interactive settings to access linked, multimedia information. Furthermore, this approach ignores potential improvements through better interface design. In 1996 the Commission of the European Union Information Technologies Programme, funded a three year working group, Mira, to discuss and advance research in the area of evaluation frameworks for interactive and multimedia IR applications. Led by Keith van Rijsbergen, Steve Draper and myself from Glasgow University, this working group brought together many of the leading researchers in the evaluation domain from both the IR and human computer interaction (HCI) communities. This paper presents my personal view of the main lines of discussion that took place throughout Mira: importing and adapting evaluation techniques from HCI, evaluating at different levels as appropriate, evaluating against different types of relevance and the new challenges that drive the need for rethinking the old evaluation approaches. The paper concludes that we need to consider more varied forms of evaluation to complement engine evaluation.","PeriodicalId":50013,"journal":{"name":"Journal of the American Society for Information Science and Technology","volume":"24 1","pages":"1269-1274"},"PeriodicalIF":0.0000,"publicationDate":"2000-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"34","resultStr":"{\"title\":\"Reflections on Mira: Interactive evaluation in information retrieval\",\"authors\":\"Mark D. Dunlop\",\"doi\":\"10.1002/1097-4571(2000)9999:9999%3C::AID-ASI1042%3E3.0.CO;2-7\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Evaluation in information retrieval (IR) has focussed largely on noninteractive evaluation of text retrieval systems. This is increasingly at odds with how people use modern IR systems: in highly interactive settings to access linked, multimedia information. Furthermore, this approach ignores potential improvements through better interface design. In 1996 the Commission of the European Union Information Technologies Programme, funded a three year working group, Mira, to discuss and advance research in the area of evaluation frameworks for interactive and multimedia IR applications. Led by Keith van Rijsbergen, Steve Draper and myself from Glasgow University, this working group brought together many of the leading researchers in the evaluation domain from both the IR and human computer interaction (HCI) communities. This paper presents my personal view of the main lines of discussion that took place throughout Mira: importing and adapting evaluation techniques from HCI, evaluating at different levels as appropriate, evaluating against different types of relevance and the new challenges that drive the need for rethinking the old evaluation approaches. 
The paper concludes that we need to consider more varied forms of evaluation to complement engine evaluation.\",\"PeriodicalId\":50013,\"journal\":{\"name\":\"Journal of the American Society for Information Science and Technology\",\"volume\":\"24 1\",\"pages\":\"1269-1274\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2000-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"34\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of the American Society for Information Science and Technology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1002/1097-4571(2000)9999:9999%3C::AID-ASI1042%3E3.0.CO;2-7\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of the American Society for Information Science and Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1002/1097-4571(2000)9999:9999%3C::AID-ASI1042%3E3.0.CO;2-7","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 34

Abstract

Evaluation in information retrieval (IR) has focussed largely on noninteractive evaluation of text retrieval systems. This is increasingly at odds with how people use modern IR systems: in highly interactive settings to access linked, multimedia information. Furthermore, this approach ignores potential improvements through better interface design. In 1996 the Commission of the European Union Information Technologies Programme funded a three-year working group, Mira, to discuss and advance research in the area of evaluation frameworks for interactive and multimedia IR applications. Led by Keith van Rijsbergen, Steve Draper, and myself from Glasgow University, this working group brought together many of the leading researchers in the evaluation domain from both the IR and human-computer interaction (HCI) communities. This paper presents my personal view of the main lines of discussion that took place throughout Mira: importing and adapting evaluation techniques from HCI, evaluating at different levels as appropriate, evaluating against different types of relevance, and the new challenges that drive the need for rethinking the old evaluation approaches. The paper concludes that we need to consider more varied forms of evaluation to complement engine evaluation.