Ecological Validity and the Evaluation of Speech Summarization Quality

Anthony McCallum, Gerald Penn, Cosmin Munteanu, Xiaodan Zhu
{"title":"Ecological Validity and the Evaluation of Speech Summarization Quality","authors":"Anthony McCallum, Gerald Penn, Cosmin Munteanu, Xiaodan Zhu","doi":"10.1109/SLT.2012.6424269","DOIUrl":null,"url":null,"abstract":"There is little evidence of widespread adoption of speech summarization systems. This may be due in part to the fact that the natural language heuristics used to generate summaries are often optimized with respect to a class of evaluation measures that, while computationally and experimentally inexpensive, rely on subjectively selected gold standards against which automatically generated summaries are scored. This evaluation protocol does not take into account the usefulness of a summary in assisting the listener in achieving his or her goal. In this paper we study how current measures and methods for evaluating summarization systems compare to human-centric evaluation criteria. For this, we have designed and conducted an ecologically valid evaluation that determines the value of a summary when embedded in a task, rather than how closely a summary resembles a gold standard. The results of our evaluation demonstrate that in the domain of lecture summarization, the well-known baseline of maximal marginal relevance [1] is statistically significantly worse than human-generated extractive summaries, and even worse than having no summary at all in a simple quiz-taking task. Priming seems to have no statistically significant effect on the usefulness of the human summaries. This is interesting because priming had been proposed as a technique for increasing kappa scores and/or maintaining goal orientation among summary authors. In addition, our results suggest that ROUGE scores, regardless of whether they are derived from numerically-ranked reference data or ecologically valid human-extracted summaries, may not always be reliable as inexpensive proxies for task-embedded evaluations. In fact, under some conditions, relying exclusively on ROUGE may lead to scoring human-generated summaries very favourably even when a task-embedded score calls their usefulness into question relative to using no summaries at all.","PeriodicalId":375378,"journal":{"name":"2012 IEEE Spoken Language Technology Workshop (SLT)","volume":"300 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 IEEE Spoken Language Technology Workshop (SLT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SLT.2012.6424269","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6

Abstract

There is little evidence of widespread adoption of speech summarization systems. This may be due in part to the fact that the natural language heuristics used to generate summaries are often optimized with respect to a class of evaluation measures that, while computationally and experimentally inexpensive, rely on subjectively selected gold standards against which automatically generated summaries are scored. This evaluation protocol does not take into account the usefulness of a summary in assisting the listener in achieving his or her goal. In this paper we study how current measures and methods for evaluating summarization systems compare to human-centric evaluation criteria. For this, we have designed and conducted an ecologically valid evaluation that determines the value of a summary when embedded in a task, rather than how closely a summary resembles a gold standard. The results of our evaluation demonstrate that in the domain of lecture summarization, the well-known baseline of maximal marginal relevance [1] is statistically significantly worse than human-generated extractive summaries, and even worse than having no summary at all in a simple quiz-taking task. Priming seems to have no statistically significant effect on the usefulness of the human summaries. This is interesting because priming had been proposed as a technique for increasing kappa scores and/or maintaining goal orientation among summary authors. In addition, our results suggest that ROUGE scores, regardless of whether they are derived from numerically-ranked reference data or ecologically valid human-extracted summaries, may not always be reliable as inexpensive proxies for task-embedded evaluations. In fact, under some conditions, relying exclusively on ROUGE may lead to scoring human-generated summaries very favourably even when a task-embedded score calls their usefulness into question relative to using no summaries at all.
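
The abstract contrasts two automatic ingredients, the maximal marginal relevance (MMR) baseline of [1] and ROUGE scoring against reference summaries, with a task-embedded measure of usefulness. As a rough illustration of what those automatic components compute, the Python sketch below implements greedy MMR sentence selection and ROUGE-1 recall; the whitespace tokenization, unweighted cosine similarity, summary length k, and lambda trade-off are illustrative assumptions rather than the configuration used in the paper.

```python
# A minimal sketch (not the authors' system) of the two automatic components
# the abstract contrasts with task-embedded evaluation: MMR extractive
# selection and ROUGE-1 recall against a reference summary.
# Tokenization, term weighting, k, and lam are illustrative assumptions.
from collections import Counter
import math


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-frequency vectors."""
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0


def mmr_summary(sentences, query, k=3, lam=0.7):
    """Greedy maximal marginal relevance: trade off relevance to the query
    against redundancy with the sentences already selected."""
    vecs = [Counter(s.lower().split()) for s in sentences]
    qvec = Counter(query.lower().split())
    selected = []
    while len(selected) < min(k, len(sentences)):
        best, best_score = None, float("-inf")
        for i, v in enumerate(vecs):
            if i in selected:
                continue
            relevance = cosine(v, qvec)
            redundancy = max((cosine(v, vecs[j]) for j in selected), default=0.0)
            score = lam * relevance - (1 - lam) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return [sentences[i] for i in sorted(selected)]


def rouge_1_recall(candidate: str, reference: str) -> float:
    """ROUGE-1 recall: fraction of reference unigrams covered by the candidate."""
    cand, ref = Counter(candidate.lower().split()), Counter(reference.lower().split())
    overlap = sum(min(cand[t], ref[t]) for t in ref)
    total = sum(ref.values())
    return overlap / total if total else 0.0
```

Under this formulation a larger lam favours relevance to the query (for example, a lecture title) over diversity among the selected sentences; the abstract's caution is precisely that a summary scoring well on a surface-overlap measure such as ROUGE-1 recall may still fail to help listeners in a quiz-taking task.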