{"title":"柯尔莫哥洛夫-阿诺德变换器","authors":"Xingyi Yang, Xinchao Wang","doi":"arxiv-2409.10594","DOIUrl":null,"url":null,"abstract":"Transformers stand as the cornerstone of mordern deep learning.\nTraditionally, these models rely on multi-layer perceptron (MLP) layers to mix\nthe information between channels. In this paper, we introduce the\nKolmogorov-Arnold Transformer (KAT), a novel architecture that replaces MLP\nlayers with Kolmogorov-Arnold Network (KAN) layers to enhance the\nexpressiveness and performance of the model. Integrating KANs into\ntransformers, however, is no easy feat, especially when scaled up.\nSpecifically, we identify three key challenges: (C1) Base function. The\nstandard B-spline function used in KANs is not optimized for parallel computing\non modern hardware, resulting in slower inference speeds. (C2) Parameter and\nComputation Inefficiency. KAN requires a unique function for each input-output\npair, making the computation extremely large. (C3) Weight initialization. The\ninitialization of weights in KANs is particularly challenging due to their\nlearnable activation functions, which are critical for achieving convergence in\ndeep neural networks. To overcome the aforementioned challenges, we propose\nthree key solutions: (S1) Rational basis. We replace B-spline functions with\nrational functions to improve compatibility with modern GPUs. By implementing\nthis in CUDA, we achieve faster computations. (S2) Group KAN. We share the\nactivation weights through a group of neurons, to reduce the computational load\nwithout sacrificing performance. (S3) Variance-preserving initialization. We\ncarefully initialize the activation weights to make sure that the activation\nvariance is maintained across layers. With these designs, KAT scales\neffectively and readily outperforms traditional MLP-based transformers.","PeriodicalId":501347,"journal":{"name":"arXiv - CS - Neural and Evolutionary Computing","volume":"105 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Kolmogorov-Arnold Transformer\",\"authors\":\"Xingyi Yang, Xinchao Wang\",\"doi\":\"arxiv-2409.10594\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Transformers stand as the cornerstone of mordern deep learning.\\nTraditionally, these models rely on multi-layer perceptron (MLP) layers to mix\\nthe information between channels. In this paper, we introduce the\\nKolmogorov-Arnold Transformer (KAT), a novel architecture that replaces MLP\\nlayers with Kolmogorov-Arnold Network (KAN) layers to enhance the\\nexpressiveness and performance of the model. Integrating KANs into\\ntransformers, however, is no easy feat, especially when scaled up.\\nSpecifically, we identify three key challenges: (C1) Base function. The\\nstandard B-spline function used in KANs is not optimized for parallel computing\\non modern hardware, resulting in slower inference speeds. (C2) Parameter and\\nComputation Inefficiency. KAN requires a unique function for each input-output\\npair, making the computation extremely large. (C3) Weight initialization. The\\ninitialization of weights in KANs is particularly challenging due to their\\nlearnable activation functions, which are critical for achieving convergence in\\ndeep neural networks. To overcome the aforementioned challenges, we propose\\nthree key solutions: (S1) Rational basis. 
We replace B-spline functions with\\nrational functions to improve compatibility with modern GPUs. By implementing\\nthis in CUDA, we achieve faster computations. (S2) Group KAN. We share the\\nactivation weights through a group of neurons, to reduce the computational load\\nwithout sacrificing performance. (S3) Variance-preserving initialization. We\\ncarefully initialize the activation weights to make sure that the activation\\nvariance is maintained across layers. With these designs, KAT scales\\neffectively and readily outperforms traditional MLP-based transformers.\",\"PeriodicalId\":501347,\"journal\":{\"name\":\"arXiv - CS - Neural and Evolutionary Computing\",\"volume\":\"105 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Neural and Evolutionary Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.10594\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Neural and Evolutionary Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.10594","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Transformers stand as the cornerstone of modern deep learning. Traditionally,
these models rely on multi-layer perceptron (MLP) layers to mix information
across channels. In this paper, we introduce the Kolmogorov-Arnold Transformer
(KAT), a novel architecture that replaces MLP layers with Kolmogorov-Arnold
Network (KAN) layers to enhance the expressiveness and performance of the
model. Integrating KANs into transformers, however, is no easy feat,
especially when scaled up.
Specifically, we identify three key challenges: (C1) Base function. The
standard B-spline functions used in KANs are not optimized for parallel
computation on modern hardware, which slows down inference. (C2) Parameter and
computation inefficiency. KAN requires a unique learnable function for each
input-output pair, which makes both the parameter count and the computation
extremely large. (C3) Weight initialization. Initializing the weights in KANs
is particularly challenging because of their learnable activation functions,
which are critical for achieving convergence in deep neural networks.
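
As a rough, back-of-the-envelope illustration of (C2): a plain linear channel
mixer stores one scalar weight per input-output pair, whereas a B-spline KAN
layer stores a whole coefficient vector per pair. The grid size and spline
order below are assumed values for illustration, not settings from the paper.

```python
# Rough parameter count for one channel-mixing layer at ViT-Base width.
# Grid size and spline order are illustrative assumptions, not paper settings.
d_in = d_out = 768

mlp_params = d_in * d_out                 # one scalar weight per input-output pair
grid_size, spline_order = 8, 3            # assumed B-spline configuration
coeffs_per_edge = grid_size + spline_order  # approximate coefficients per learnable edge function
kan_params = d_in * d_out * coeffs_per_edge

print(f"linear (MLP-style): {mlp_params:,} parameters")  # 589,824
print(f"B-spline KAN layer: {kan_params:,} parameters")  # 6,488,064 (~11x more)
```
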
To overcome these challenges, we propose three key solutions: (S1) Rational
basis. We replace the B-spline functions with rational functions to improve
compatibility with modern GPUs; implementing them in CUDA yields faster
computation. (S2) Group KAN. We share activation weights across groups of
neurons to reduce the computational load without sacrificing performance.
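
As a concrete reference for (S1) and (S2), below is a minimal PyTorch sketch
of a rational activation whose coefficients are shared within channel groups.
The class name, polynomial orders, and the safe 1 + |Q(x)| denominator are
illustrative assumptions; the abstract only states that the rational basis is
implemented in CUDA for speed, which this pure-PyTorch sketch does not
reproduce.

```python
import torch
import torch.nn as nn


class GroupRationalActivation(nn.Module):
    """Rational activation P(x) / (1 + |Q(x)|) whose coefficients are shared
    within groups of channels. Sketch only: names, polynomial orders, and the
    random init are assumptions; the speed benefit described in the paper
    comes from a CUDA implementation rather than this pure-PyTorch version."""

    def __init__(self, dim: int, num_groups: int = 8,
                 p_order: int = 5, q_order: int = 4):
        super().__init__()
        assert dim % num_groups == 0, "dim must be divisible by num_groups"
        self.num_groups = num_groups
        # One coefficient vector per group, instead of one learnable function
        # per input-output pair -- the parameter-sharing idea of Group KAN.
        self.p = nn.Parameter(0.1 * torch.randn(num_groups, p_order + 1))
        self.q = nn.Parameter(0.1 * torch.randn(num_groups, q_order))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        orig_shape = x.shape
        # (..., dim) -> (..., num_groups, dim // num_groups)
        x = x.reshape(*orig_shape[:-1], self.num_groups, -1)

        # Horner evaluation of the numerator polynomial P(x), per group.
        num = torch.zeros_like(x)
        for c in self.p.unbind(dim=-1):
            num = num * x + c.unsqueeze(-1)

        # Denominator 1 + |Q(x)| keeps the function free of poles.
        den = torch.zeros_like(x)
        for c in self.q.unbind(dim=-1):
            den = den * x + c.unsqueeze(-1)

        return (num / (1.0 + den.abs())).reshape(orig_shape)
```

Sharing one coefficient vector per group, rather than one learnable function
per input-output pair, is what keeps the parameter count close to that of a
plain MLP mixer.
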
(S3) Variance-preserving initialization. We carefully initialize the
activation weights so that the activation variance is maintained across
layers. With these designs, KAT scales effectively and readily outperforms
traditional MLP-based transformers.
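
For (S3), one way such a variance-preserving initialization can be realized is
to estimate the second moment of the learnable activation under a unit-normal
input and scale the following linear layer accordingly, in the spirit of a
Kaiming-style gain. The helper below is a hypothetical sketch of that idea,
not the paper's exact procedure.

```python
import torch
import torch.nn as nn


@torch.no_grad()
def variance_preserving_init(linear: nn.Linear, act: nn.Module,
                             n_samples: int = 16_384) -> None:
    """Hypothetical helper illustrating the idea of (S3), not the paper's
    exact recipe: estimate E[f(x)^2] of the learnable activation f under
    x ~ N(0, 1), then scale the following linear layer so that the
    activation + linear pair keeps roughly unit output variance."""
    x = torch.randn(n_samples, linear.in_features)
    second_moment = act(x).pow(2).mean().clamp_min(1e-8)
    # Var(W f(x)) = fan_in * std^2 * E[f(x)^2]; solve for std so it equals 1.
    std = (1.0 / (second_moment * linear.in_features)).sqrt().item()
    nn.init.normal_(linear.weight, mean=0.0, std=std)
    if linear.bias is not None:
        nn.init.zeros_(linear.bias)


# Example: initialize the linear that follows the group-rational activation
# sketched earlier (channel width and expansion ratio are placeholders).
act = GroupRationalActivation(dim=768, num_groups=8)
fc = nn.Linear(768, 3072)
variance_preserving_init(fc, act)
```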