{"title":"You Might Not Need Attention Diagonals","authors":"Yiming Cui;Xin Yao;Shijin Wang;Guoping Hu","doi":"10.1109/LSP.2025.3601497","DOIUrl":null,"url":null,"abstract":"Pre-trained language models, such as GPT, BERT, have revolutionized natural language processing tasks across various fields. However, the current multi-head self-attention mechanisms in these models exhibit an “over self-confidence” issue, which has been underexplored in prior research, causing the model to attend heavily to itself rather than other tokens. In this study, we propose a simple yet efficient solution: discarding diagonal elements in the attention matrix, allowing the model to focus more on other tokens. Our experiments reveal that the proposed approach not only consistently improves upon vanilla attention in transformer models for diverse natural language understanding tasks, particularly for smaller models in resource-limited conditions, but also exhibits faster convergence in training speed. This effectiveness generalizes well across different languages, model types, and various natural language understanding tasks, while requiring almost no additional computation. Our findings challenge previous assumptions about multi-head self-attention and suggest a promising direction for developing more effective pre-trained language models.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"32 ","pages":"3435-3439"},"PeriodicalIF":3.9000,"publicationDate":"2025-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Signal Processing Letters","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/11132084/","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Pre-trained language models such as GPT and BERT have revolutionized natural language processing tasks across various fields. However, the multi-head self-attention mechanism in these models exhibits an “over self-confidence” issue, underexplored in prior research, that causes the model to attend heavily to itself rather than to other tokens. In this study, we propose a simple yet efficient solution: discarding the diagonal elements of the attention matrix, allowing the model to focus more on other tokens. Our experiments reveal that the proposed approach not only consistently improves upon vanilla attention in transformer models across diverse natural language understanding tasks, particularly for smaller models under resource-limited conditions, but also converges faster during training. This effectiveness generalizes well across languages, model types, and natural language understanding tasks, while requiring almost no additional computation. Our findings challenge previous assumptions about multi-head self-attention and suggest a promising direction for developing more effective pre-trained language models.
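One natural way to realize the idea of discarding attention diagonals (a minimal sketch under that assumption, not the authors' implementation) is to mask each token's score against its own position with -inf before the softmax, so the attention mass is redistributed to the other tokens. The PyTorch function below assumes standard scaled dot-product attention; all names are illustrative.

import math
import torch

def attention_without_diagonal(q, k, v):
    # q, k, v: (batch, heads, seq_len, head_dim)
    seq_len = q.size(-2)
    # Standard scaled dot-product scores: (batch, heads, seq_len, seq_len).
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    # Discard the diagonal: a token's score against itself is set to -inf,
    # so after the softmax it receives zero attention weight.
    # (Edge case: for a length-1 sequence every score would be masked, giving NaN.)
    diag = torch.eye(seq_len, dtype=torch.bool, device=q.device)
    scores = scores.masked_fill(diag, float("-inf"))
    weights = torch.softmax(scores, dim=-1)
    return weights @ v

# Example: 2 sequences, 4 heads, 8 tokens, head dimension 16.
q, k, v = (torch.randn(2, 4, 8, 16) for _ in range(3))
out = attention_without_diagonal(q, k, v)
print(out.shape)  # torch.Size([2, 4, 8, 16])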
About the journal:
The IEEE Signal Processing Letters is a monthly, archival publication designed to provide rapid dissemination of original, cutting-edge ideas and timely, significant contributions in signal, image, speech, language, and audio processing. Papers published in the Letters can be presented within one year of their appearance at signal processing conferences such as ICASSP, GlobalSIP, and ICIP, as well as at several workshops organized by the Signal Processing Society.