Semformer: Transformer Language Models with Semantic Planning

Yongjing Yin, Junran Ding, Kai Song, Yue Zhang
{"title":"Semformer: Transformer Language Models with Semantic Planning","authors":"Yongjing Yin, Junran Ding, Kai Song, Yue Zhang","doi":"arxiv-2409.11143","DOIUrl":null,"url":null,"abstract":"Next-token prediction serves as the dominant component in current neural\nlanguage models. During the training phase, the model employs teacher forcing,\nwhich predicts tokens based on all preceding ground truth tokens. However, this\napproach has been found to create shortcuts, utilizing the revealed prefix to\nspuriously fit future tokens, potentially compromising the accuracy of the\nnext-token predictor. In this paper, we introduce Semformer, a novel method of\ntraining a Transformer language model that explicitly models the semantic\nplanning of response. Specifically, we incorporate a sequence of planning\ntokens into the prefix, guiding the planning token representations to predict\nthe latent semantic representations of the response, which are induced by an\nautoencoder. In a minimal planning task (i.e., graph path-finding), our model\nexhibits near-perfect performance and effectively mitigates shortcut learning,\na feat that standard training methods and baseline models have been unable to\naccomplish. Furthermore, we pretrain Semformer from scratch with 125M\nparameters, demonstrating its efficacy through measures of perplexity,\nin-context learning, and fine-tuning on summarization tasks.","PeriodicalId":501030,"journal":{"name":"arXiv - CS - Computation and Language","volume":"50 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computation and Language","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11143","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Next-token prediction is the dominant training objective in current neural language models. During training, the model uses teacher forcing, predicting each token conditioned on all preceding ground-truth tokens. However, this approach has been found to create shortcuts: the model exploits the revealed prefix to spuriously fit future tokens, potentially compromising the accuracy of the next-token predictor. In this paper, we introduce Semformer, a novel method for training a Transformer language model that explicitly models the semantic planning of the response. Specifically, we incorporate a sequence of planning tokens into the prefix and guide the planning-token representations to predict the latent semantic representations of the response, which are induced by an autoencoder. On a minimal planning task (graph path-finding), our model achieves near-perfect performance and effectively mitigates shortcut learning, which standard training methods and baseline models fail to do. Furthermore, we pretrain Semformer from scratch with 125M parameters and demonstrate its efficacy through perplexity, in-context learning, and fine-tuning on summarization tasks.
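To make the described objective concrete, below is a minimal PyTorch-style sketch of the idea as stated in the abstract: planning tokens are inserted after the prefix, their hidden states are regressed toward latent representations of the response produced by a frozen autoencoder, and this planning loss is added to the standard next-token loss. The interfaces here (an lm returning logits and hidden states, an autoencoder.encode method, the number of planning tokens K, and the loss weight alpha) are illustrative assumptions, not the paper's reference implementation.

```python
# Minimal sketch of a Semformer-style training objective, assuming a
# PyTorch decoder-only LM. All interfaces and hyperparameters below are
# illustrative assumptions based on the abstract, not the official code.
import torch
import torch.nn.functional as F


def semformer_style_loss(lm, autoencoder, prefix_ids, response_ids,
                         plan_token_ids, alpha=1.0):
    """Combine next-token prediction with a latent-planning regression loss.

    lm             : assumed callable returning (logits, hidden_states)
    autoencoder    : assumed frozen encoder mapping the response to latents
    prefix_ids     : (B, Tp) prompt token ids
    response_ids   : (B, Tr) response token ids
    plan_token_ids : (B, K) ids of the K planning tokens inserted after the prefix
    alpha          : weight of the planning loss (assumed hyperparameter)
    """
    # Input layout: [prefix ; planning tokens ; response]
    input_ids = torch.cat([prefix_ids, plan_token_ids, response_ids], dim=1)
    logits, hidden = lm(input_ids)                  # hidden: (B, T, d)

    Tp, K = prefix_ids.size(1), plan_token_ids.size(1)

    # 1) Standard next-token prediction, computed on the response span only.
    #    Position i predicts token i+1, so the predictors of the response
    #    tokens sit at positions Tp+K-1 .. T-2.
    resp_logits = logits[:, Tp + K - 1:-1, :]
    lm_loss = F.cross_entropy(
        resp_logits.reshape(-1, resp_logits.size(-1)),
        response_ids.reshape(-1),
    )

    # 2) Planning loss: hidden states at the K planning-token positions are
    #    regressed toward the K latent semantic vectors of the response
    #    produced by the (frozen) autoencoder's encoder.
    plan_hidden = hidden[:, Tp:Tp + K, :]           # (B, K, d)
    with torch.no_grad():
        target_latents = autoencoder.encode(response_ids)  # (B, K, d), assumed API
    plan_loss = F.mse_loss(plan_hidden, target_latents)

    return lm_loss + alpha * plan_loss
```

In this sketch the autoencoder targets are computed under torch.no_grad(), so the planning loss shapes only the language model's planning-token representations; whether and how the autoencoder is trained or frozen is a detail the abstract does not specify.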