Evaluating Robustness of Referring Expression Generation Algorithms

Pablo Duboue, Martín Ariel Domínguez, Paula Estrella
DOI: 10.1109/MICAI.2015.10
Venue: 2015 Fourteenth Mexican International Conference on Artificial Intelligence (MICAI)
Published: 2015-10-25
Citations: 3

Abstract

A sub-task of Natural Language Generation (NLG) is the generation of referring expressions (REG). REG algorithms are expected to select attributes that unambiguously identify an entity with respect to a set of distractors. In previous work we defined a methodology to evaluate REG algorithms using real-life examples. In the present work, we evaluate REG algorithms using a dataset that contains alterations in the properties of referring entities. The ability to operate on inputs with various degrees of error is a cornerstone of Natural Language Understanding (NLU) algorithms. In NLG, however, many algorithms assume their inputs are sound and correct. For data, we use different versions of DBpedia, a freely available knowledge base containing information extracted from Wikipedia pages. We found that most algorithms are robust over multi-year differences in the data. The ultimate goal of this work is to observe the behaviour and estimate the performance of a series of REG algorithms as the entities in the data set evolve over time.
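To illustrate the kind of algorithm the abstract describes (selecting attributes that distinguish a target entity from a set of distractors), here is a minimal sketch of the classic Incremental Algorithm of Dale and Reiter. This is a standard REG baseline given for context only; the paper evaluates several REG algorithms, and this code, its entity representation, and its attribute preference order are illustrative assumptions, not the authors' implementation.

```python
def incremental_reg(target, distractors, preferred_order):
    """Select attributes of `target` that rule out all `distractors`.

    target: dict mapping attribute name -> value
    distractors: list of such dicts for the other entities in the scene
    preferred_order: attributes tried first (a domain-specific preference)
    Returns a distinguishing description, or None if none exists.
    """
    description = {}
    remaining = list(distractors)
    for attr in preferred_order:
        if attr not in target:
            continue
        value = target[attr]
        # An attribute is useful only if it rules out at least one distractor.
        if any(d.get(attr) != value for d in remaining):
            description[attr] = value
            remaining = [d for d in remaining if d.get(attr) == value]
        if not remaining:
            break
    return description if not remaining else None


# Hypothetical scene: identify "the brown dog" among two other animals.
target = {"type": "dog", "color": "brown", "size": "small"}
distractors = [
    {"type": "dog", "color": "black", "size": "small"},
    {"type": "cat", "color": "brown", "size": "small"},
]
print(incremental_reg(target, distractors, ["type", "color", "size"]))
# → {'type': 'dog', 'color': 'brown'}
```

Robustness in the paper's sense can then be probed by perturbing the attribute dictionaries (as happens naturally between DBpedia versions) and checking whether the selected description still identifies the target.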