NUIG-DSI’s submission to The GEM Benchmark 2021

Nivranshu Pasricha, Mihael Arcan, Paul Buitelaar
{"title":"NUIG-DSI’s submission to The GEM Benchmark 2021","authors":"Nivranshu Pasricha, Mihael Arcan, P. Buitelaar","doi":"10.18653/v1/2021.gem-1.13","DOIUrl":null,"url":null,"abstract":"This paper describes the submission by NUIG-DSI to the GEM benchmark 2021. We participate in the modeling shared task where we submit outputs on four datasets for data-to-text generation, namely, DART, WebNLG (en), E2E and CommonGen. We follow an approach similar to the one described in the GEM benchmark paper where we use the pre-trained T5-base model for our submission. We train this model on additional monolingual data where we experiment with different masking strategies specifically focused on masking entities, predicates and concepts as well as a random masking strategy for pre-training. In our results we find that random masking performs the best in terms of automatic evaluation metrics, though the results are not statistically significantly different compared to other masking strategies.","PeriodicalId":431658,"journal":{"name":"Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021)","volume":"115 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.18653/v1/2021.gem-1.13","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

This paper describes NUIG-DSI's submission to the GEM benchmark 2021. We participate in the modeling shared task, submitting outputs on four datasets for data-to-text generation: DART, WebNLG (en), E2E and CommonGen. We follow an approach similar to the one described in the GEM benchmark paper, using the pre-trained T5-base model for our submission. We further train this model on additional monolingual data, experimenting with different masking strategies for pre-training: masking entities, predicates and concepts, as well as a random masking strategy. We find that random masking performs best on automatic evaluation metrics, though the differences from the other masking strategies are not statistically significant.
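To make the masking strategies concrete, below is a minimal sketch of T5-style span corruption, assuming the HuggingFace transformers library. The helper names (mask_spans, random_spans) and the example sentence are illustrative only and are not taken from the paper's code.

```python
# Sketch of T5-style span corruption for continued pre-training.
# Assumes: transformers and torch installed; helper names are hypothetical.
import random
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def mask_spans(tokens, spans):
    """Replace each (start, end) span with a sentinel token and collect
    the masked-out text as the target sequence, T5 pre-training style."""
    source, target = [], []
    prev = 0
    for i, (start, end) in enumerate(spans):
        sentinel = f"<extra_id_{i}>"
        source += tokens[prev:start] + [sentinel]
        target += [sentinel] + tokens[start:end]
        prev = end
    source += tokens[prev:]
    target += [f"<extra_id_{len(spans)}>"]  # closing sentinel
    return " ".join(source), " ".join(target)

def random_spans(tokens, mask_ratio=0.15, mean_len=3):
    """Pick non-overlapping spans covering roughly mask_ratio of tokens."""
    spans, i = [], 0
    while i < len(tokens):
        if random.random() < mask_ratio / mean_len:
            spans.append((i, min(i + mean_len, len(tokens))))
            i += mean_len
        else:
            i += 1
    return spans

# Random masking chooses spans uniformly at random. Entity, predicate or
# concept masking would differ only in span selection, e.g. spans produced
# by an NER tagger over "Alan Bean" and "Apollo 12" in this example.
sentence = "Alan Bean was a crew member of Apollo 12".split()
src, tgt = mask_spans(sentence, random_spans(sentence))

batch = tokenizer(src, return_tensors="pt")
labels = tokenizer(tgt, return_tensors="pt").input_ids
loss = model(**batch, labels=labels).loss  # standard seq2seq cross-entropy
```

Under this reading, the four strategies compared in the paper share the same training objective and presumably differ only in how the masked spans are selected.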