Self-Attention Limits Working Memory Capacity of Transformer-Based Models

Dongyu Gong, Hantao Zhang
{"title":"Self-Attention Limits Working Memory Capacity of Transformer-Based Models","authors":"Dongyu Gong, Hantao Zhang","doi":"arxiv-2409.10715","DOIUrl":null,"url":null,"abstract":"Recent work on Transformer-based large language models (LLMs) has revealed\nstriking limits in their working memory capacity, similar to what has been\nfound in human behavioral studies. Specifically, these models' performance\ndrops significantly on N-back tasks as N increases. However, there is still a\nlack of mechanistic interpretability as to why this phenomenon would arise.\nInspired by the executive attention theory from behavioral sciences, we\nhypothesize that the self-attention mechanism within Transformer-based models\nmight be responsible for their working memory capacity limits. To test this\nhypothesis, we train vanilla decoder-only transformers to perform N-back tasks\nand find that attention scores gradually aggregate to the N-back positions over\ntraining, suggesting that the model masters the task by learning a strategy to\npay attention to the relationship between the current position and the N-back\nposition. Critically, we find that the total entropy of the attention score\nmatrix increases as N increases, suggesting that the dispersion of attention\nscores might be the cause of the capacity limit observed in N-back tasks.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":"12 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - QuanBio - Neurons and Cognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.10715","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Recent work on Transformer-based large language models (LLMs) has revealed striking limits in their working memory capacity, similar to what has been found in human behavioral studies. Specifically, these models' performance drops significantly on N-back tasks as N increases. However, there is still a lack of mechanistic interpretability as to why this phenomenon would arise. Inspired by the executive attention theory from behavioral sciences, we hypothesize that the self-attention mechanism within Transformer-based models might be responsible for their working memory capacity limits. To test this hypothesis, we train vanilla decoder-only transformers to perform N-back tasks and find that attention scores gradually aggregate to the N-back positions over training, suggesting that the model masters the task by learning a strategy to pay attention to the relationship between the current position and the N-back position. Critically, we find that the total entropy of the attention score matrix increases as N increases, suggesting that the dispersion of attention scores might be the cause of the capacity limit observed in N-back tasks.
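
To make the entropy measure concrete, here is a minimal sketch, assuming a toy task format and an untrained single-head causal attention layer. The helper names (`make_nback_batch`, `total_attention_entropy`), the vocabulary size, and the sequence length are our illustrative choices, not the authors' code. It generates a 2-back token stream and sums the Shannon entropy of each row of a causal attention matrix, which is the dispersion quantity the abstract refers to.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy N-back stream: random tokens; the label at position t says whether
# token t matches the token N steps back (illustrative format, not the
# paper's exact task specification).
def make_nback_batch(seq_len=24, vocab_size=6, n=2):
    seq = torch.randint(0, vocab_size, (seq_len,))
    labels = torch.zeros(seq_len, dtype=torch.long)
    labels[n:] = (seq[n:] == seq[:-n]).long()
    return seq, labels

# Total entropy of an attention score matrix: each row of `attn` is a
# softmax distribution over key positions; we sum the rows' Shannon
# entropies over all query positions.
def total_attention_entropy(attn, eps=1e-12):
    row_entropy = -(attn * (attn + eps).log()).sum(dim=-1)
    return row_entropy.sum()

# One untrained, single-head causal self-attention pass (random weights,
# purely to show where the entropy measure plugs in).
seq, labels = make_nback_batch(n=2)
d_model = 16
embed = torch.nn.Embedding(6, d_model)
w_q = torch.nn.Linear(d_model, d_model)
w_k = torch.nn.Linear(d_model, d_model)

x = embed(seq)                               # (seq_len, d_model)
scores = w_q(x) @ w_k(x).T / d_model ** 0.5  # (seq_len, seq_len)
mask = torch.triu(torch.ones_like(scores), diagonal=1).bool()
attn = F.softmax(scores.masked_fill(mask, float("-inf")), dim=-1)

print("2-back matches at positions:", labels.nonzero().flatten().tolist())
print(f"total attention entropy: {total_attention_entropy(attn).item():.3f}")
```

On the paper's account, training concentrates each row of the attention matrix on its N-back position; the larger N is, the more probability mass remains spread over other positions, so this summed row entropy grows with N.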