Evaluating Robustness of Referring Expression Generation Algorithms
Pablo Duboue, Martín Ariel Domínguez, Paula Estrella
2015 Fourteenth Mexican International Conference on Artificial Intelligence (MICAI), October 25, 2015. DOI: 10.1109/MICAI.2015.10
Citations: 3
Abstract
A sub-task of Natural Language Generation (NLG) is the generation of referring expressions (REG). REG algorithms are expected to select attributes that unambiguously identify an entity with respect to a set of distractors. In previous work we defined a methodology for evaluating REG algorithms using real-life examples. In the present work, we evaluate REG algorithms using a dataset that contains alterations in the properties of the referring entities. The ability to operate on inputs with varying degrees of error is a cornerstone of Natural Language Understanding (NLU) algorithms. In NLG, however, many algorithms assume their inputs are sound and correct. For data, we use different versions of DBpedia, a freely available knowledge base containing information extracted from Wikipedia pages. We found that most algorithms are robust to multi-year differences in the data. The ultimate goal of this work is to observe the behaviour and estimate the performance of a series of REG algorithms as the entities in the dataset evolve over time.
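To illustrate the task the abstract describes, the following is a minimal sketch of a classic REG approach, the Incremental Algorithm of Dale and Reiter (1995), one of the algorithms commonly evaluated in this line of work. The entity data, attribute names, and preference order below are invented for illustration; the paper itself uses entities drawn from DBpedia.

```python
# Hedged sketch of the Incremental Algorithm (Dale & Reiter, 1995):
# greedily add attributes that rule out distractors until the target
# entity is uniquely identified. All data here is hypothetical.

def incremental_algorithm(target, distractors, preferred_attributes):
    """Return a list of (attribute, value) pairs that distinguish
    `target` from all `distractors`, or None if none exists."""
    description = []
    remaining = list(distractors)
    for attr in preferred_attributes:
        if attr not in target:
            continue
        value = target[attr]
        # Distractors that do NOT share this attribute value are ruled out.
        ruled_out = [d for d in remaining if d.get(attr) != value]
        if ruled_out:  # the attribute has discriminatory power, so keep it
            description.append((attr, value))
            remaining = [d for d in remaining if d.get(attr) == value]
        if not remaining:  # target is now unambiguous
            break
    return description if not remaining else None

# Hypothetical example: describe entity A among distractors B and C.
entities = {
    "A": {"type": "city", "country": "Argentina", "capital": "yes"},
    "B": {"type": "city", "country": "Argentina", "capital": "no"},
    "C": {"type": "river", "country": "Argentina", "capital": "no"},
}
desc = incremental_algorithm(
    entities["A"],
    [entities["B"], entities["C"]],
    ["type", "country", "capital"],
)
# `desc` holds the attributes selected to single out A.
```

The robustness question the paper studies then becomes: if the attribute values above change between knowledge-base versions (as they do across DBpedia releases), does the algorithm still produce an unambiguous description?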