Continual Learning for Natural Language Generations with Transformer Calibration

Peng Yang, Dingcheng Li, Ping Li
{"title":"Continual Learning for Natural Language Generations with Transformer Calibration","authors":"Peng Yang, Dingcheng Li, Ping Li","doi":"10.18653/v1/2022.conll-1.4","DOIUrl":null,"url":null,"abstract":"Conventional natural language process (NLP) generation models are trained offline with a given dataset for a particular task, which is referred to as isolated learning. Research on sequence-to-sequence language generation aims to study continual learning model to constantly learning from sequentially encountered tasks. However, continual learning studies often suffer from catastrophic forgetting, a persistent challenge for lifelong learning. In this paper, we present a novel NLP transformer model that attempts to mitigate catastrophic forgetting in online continual learning from a new perspective, i.e., attention calibration. We model the attention in the transformer as a calibrated unit in a general formulation, where the attention calibration could give benefits to balance the stability and plasticity of continual learning algorithms through influencing both their forward inference path and backward optimization path. Our empirical experiments, paraphrase generation and dialog response generation, demonstrate that this work outperforms state-of-the-art models by a considerable margin and effectively mitigate the forgetting.","PeriodicalId":221345,"journal":{"name":"Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.18653/v1/2022.conll-1.4","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Conventional natural language processing (NLP) generation models are trained offline on a given dataset for a particular task, which is referred to as isolated learning. Research on sequence-to-sequence language generation aims to build continual learning models that constantly learn from sequentially encountered tasks. However, continual learning methods often suffer from catastrophic forgetting, a persistent challenge for lifelong learning. In this paper, we present a novel NLP transformer model that attempts to mitigate catastrophic forgetting in online continual learning from a new perspective, i.e., attention calibration. We model the attention in the transformer as a calibrated unit in a general formulation, where attention calibration helps balance the stability and plasticity of continual learning algorithms by influencing both their forward inference path and their backward optimization path. Our empirical experiments on paraphrase generation and dialog response generation demonstrate that our approach outperforms state-of-the-art models by a considerable margin and effectively mitigates forgetting.
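The abstract does not spell out the calibration mechanism, so the following is only a minimal sketch of the general idea in PyTorch: attention is wrapped in a "calibrated unit" whose learned gate rescales the attention map, so the gate shapes the forward inference path and, through it, the gradients on the backward optimization path. The class name CalibratedSelfAttention and the per-head gate parameterization are illustrative assumptions, not the authors' formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CalibratedSelfAttention(nn.Module):
    """Self-attention with a learned per-head calibration gate (a sketch,
    not the paper's exact method)."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # Hypothetical calibration parameter, one scalar per head; sigmoid
        # keeps the gate in (0, 1). Regularizing or partially freezing it
        # between tasks is one way to trade plasticity (free to adapt)
        # against stability (anchored to earlier tasks).
        self.gate = nn.Parameter(torch.zeros(n_heads, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape each projection to (batch, heads, time, head_dim).
        q, k, v = (z.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
                   for z in (q, k, v))
        attn = F.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        attn = torch.sigmoid(self.gate) * attn  # calibrate the attention map
        y = (attn @ v).transpose(1, 2).reshape(b, t, d)
        return self.out(y)

x = torch.randn(2, 16, 64)                      # (batch, seq_len, d_model)
print(CalibratedSelfAttention(64, 8)(x).shape)  # torch.Size([2, 16, 64])

Because the gate multiplies the attention weights, every gradient that flows back through attention is also scaled by it, which is one concrete way a calibration unit can act on both the forward and backward paths at once.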