Hadamard Estimated Attention Transformer (HEAT): Fast Approximation of Dot Product Self-attention for Transformers Using Low-Rank Projection of Hadamard Product

Jasper Kyle Catapang
{"title":"Hadamard估计注意变压器(HEAT):利用Hadamard积的低秩投影快速逼近变压器的点积自注意","authors":"Jasper Kyle Catapang","doi":"10.1109/ISCMI56532.2022.10068484","DOIUrl":null,"url":null,"abstract":"In this paper, the author proposes a new transformer model called Hadamard Estimated Attention Transformer or HEAT, that utilizes a low-rank projection of the Hadamard product to approximate the self-attention mechanism in standard transformer architectures and thus aiming to speedup transformer training, finetuning, and inference altogether. The study shows how it is significantly better than the original transformer that uses dot product self-attention by offering a faster way to compute the original self-attention mechanism while maintaining and ultimately surpassing the quality of the original transformer architecture. It also bests Linformer and Nyströmformer in several machine translation tasks while matching and even outperforming Nyströmformer's accuracy in various text classification tasks.","PeriodicalId":340397,"journal":{"name":"2022 9th International Conference on Soft Computing & Machine Intelligence (ISCMI)","volume":"316 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Hadamard Estimated Attention Transformer (HEAT): Fast Approximation of Dot Product Self-attention for Transformers Using Low-Rank Projection of Hadamard Product\",\"authors\":\"Jasper Kyle Catapang\",\"doi\":\"10.1109/ISCMI56532.2022.10068484\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, the author proposes a new transformer model called Hadamard Estimated Attention Transformer or HEAT, that utilizes a low-rank projection of the Hadamard product to approximate the self-attention mechanism in standard transformer architectures and thus aiming to speedup transformer training, finetuning, and inference altogether. The study shows how it is significantly better than the original transformer that uses dot product self-attention by offering a faster way to compute the original self-attention mechanism while maintaining and ultimately surpassing the quality of the original transformer architecture. 
It also bests Linformer and Nyströmformer in several machine translation tasks while matching and even outperforming Nyströmformer's accuracy in various text classification tasks.\",\"PeriodicalId\":340397,\"journal\":{\"name\":\"2022 9th International Conference on Soft Computing & Machine Intelligence (ISCMI)\",\"volume\":\"316 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 9th International Conference on Soft Computing & Machine Intelligence (ISCMI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISCMI56532.2022.10068484\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 9th International Conference on Soft Computing & Machine Intelligence (ISCMI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISCMI56532.2022.10068484","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In this paper, the author proposes a new transformer model, the Hadamard Estimated Attention Transformer (HEAT), which uses a low-rank projection of the Hadamard product to approximate the self-attention mechanism of standard transformer architectures, with the aim of speeding up transformer training, fine-tuning, and inference. The study shows that HEAT significantly outperforms the original transformer with dot-product self-attention: it offers a faster way to compute the original self-attention mechanism while maintaining, and ultimately surpassing, the quality of the original architecture. It also bests Linformer and Nyströmformer on several machine translation tasks, while matching and even exceeding Nyströmformer's accuracy on various text classification tasks.
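
The abstract places HEAT in the family of low-rank approximations to dot-product self-attention and compares it against Linformer and Nyströmformer. The paper's exact Hadamard-product construction is not given in this abstract and is not reproduced here; the sketch below only illustrates the kind of cost reduction being claimed, contrasting standard softmax(QK^T/sqrt(d))V attention with a Linformer-style low-rank projection of K and V. All shapes, the projection matrices E and F, and the helper functions are illustrative assumptions, not HEAT itself.

```python
# Minimal sketch, assuming single-head attention and NumPy arrays.
# This is NOT the paper's HEAT formulation; it only contrasts full
# dot-product self-attention with a generic low-rank projection.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def dot_product_attention(Q, K, V):
    """Standard self-attention: softmax(Q K^T / sqrt(d)) V, cost O(n^2 d)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)        # (n, n) score matrix
    return softmax(scores, axis=-1) @ V  # (n, d)

def low_rank_projected_attention(Q, K, V, E, F):
    """Linformer-style approximation (hypothetical stand-in for HEAT):
    project K and V from length n down to rank k << n with matrices
    E, F of shape (k, n), so the score matrix is only (n, k)."""
    d = Q.shape[-1]
    K_proj = E @ K                            # (k, d)
    V_proj = F @ V                            # (k, d)
    scores = Q @ K_proj.T / np.sqrt(d)        # (n, k) instead of (n, n)
    return softmax(scores, axis=-1) @ V_proj  # (n, d)

# Toy usage with assumed dimensions.
rng = np.random.default_rng(0)
n, d, k = 512, 64, 32                         # sequence length, head dim, rank
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
E, F = (rng.standard_normal((k, n)) / np.sqrt(n) for _ in range(2))
full = dot_product_attention(Q, K, V)                  # O(n^2 d)
approx = low_rank_projected_attention(Q, K, V, E, F)   # O(n k d)
print(full.shape, approx.shape)                        # both (n, d)
```

The point of the contrast is the score matrix: the exact version materializes an n-by-n matrix, while the low-rank version works with an n-by-k matrix, which is where approximations of this kind obtain their speedups in training, fine-tuning, and inference.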