{"title":"KAN Policy: Learning Efficient and Smooth Robotic Trajectories via Kolmogorov-Arnold Networks","authors":"Zikang Chen;Fei Gao;Ziya Yu;Peng Li","doi":"10.1109/LRA.2025.3606354","DOIUrl":null,"url":null,"abstract":"Modernrobotic visuomotor policy learning has witnessed significant progress through Diffusion Policy (DP) frameworks built upon <italic>Convolutional Neural Networks</i> (CNNs) and Transformers. Despite their empirical success, these architectures remain fundamentally constrained by their relatively discrete computational nature, inherently limiting their capacity to generate efficient and smooth motion trajectories. To address this challenge, we introduce <italic>Kolmogorov-Arnold Networks</i> (KANs) into Diffusion Policy learning. The proposed <italic>KAN Policy</i> (KP) leverages KANs' intrinsic continuity through learnable base-parameterized activation functions, thereby producing continuous trajectories with shorter execution time and fewer jerks. Specifically, we design a novel <italic>Embedding KAN</i> (Emb-KAN) for CNN-based models, which preserves structural continuity in high-dimensional latent spaces through adaptive spline embeddings. Besides, we apply Group-KAN to Transformer-based models for learning continuous representations. Across main simulation experiments, KP achieves average improvements of 6.06%, 8.03%, and 26.4% in terms of success rate, execution time, and smoothness, respectively. 
Similarly, in real-world experiments, KP achieves average improvements of 53.8%, 7.89%, and 29.4% across the same metrics.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 11","pages":"11164-11171"},"PeriodicalIF":5.3000,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Robotics and Automation Letters","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/11151197/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ROBOTICS","Score":null,"Total":0}
Citations: 0
Abstract
Modern robotic visuomotor policy learning has witnessed significant progress through Diffusion Policy (DP) frameworks built upon Convolutional Neural Networks (CNNs) and Transformers. Despite their empirical success, these architectures remain fundamentally constrained by their relatively discrete computational nature, inherently limiting their capacity to generate efficient and smooth motion trajectories. To address this challenge, we introduce Kolmogorov-Arnold Networks (KANs) into Diffusion Policy learning. The proposed KAN Policy (KP) leverages KANs' intrinsic continuity through learnable base-parameterized activation functions, thereby producing continuous trajectories with shorter execution times and reduced jerk. Specifically, we design a novel Embedding KAN (Emb-KAN) for CNN-based models, which preserves structural continuity in high-dimensional latent spaces through adaptive spline embeddings. In addition, we apply Group-KAN to Transformer-based models for learning continuous representations. Across the main simulation experiments, KP achieves average improvements of 6.06%, 8.03%, and 26.4% in success rate, execution time, and smoothness, respectively. Similarly, in real-world experiments, KP achieves average improvements of 53.8%, 7.89%, and 29.4% across the same metrics.
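To make the core idea concrete: in a KAN, each input-output edge carries its own learnable univariate activation function (the paper's Emb-KAN and Group-KAN variants build on this), and the layer output is a sum of these smooth per-edge functions, yielding continuous outputs by construction. Below is a minimal, hypothetical sketch of such a layer in NumPy — it uses Gaussian radial bases as a stand-in for the B-spline parameterization typically used in KANs, and is not the authors' implementation.

```python
import numpy as np

class ToyKANLayer:
    """Toy KAN-style layer: every input-output edge has its own learnable
    univariate activation, parameterized here as a weighted sum of Gaussian
    radial bases (a simplified stand-in for learnable B-spline bases)."""

    def __init__(self, in_dim, out_dim, n_bases=8, seed=0):
        rng = np.random.default_rng(seed)
        self.centers = np.linspace(-1.0, 1.0, n_bases)   # basis centers on a fixed grid
        self.width = self.centers[1] - self.centers[0]   # shared bandwidth
        # One coefficient vector per (input, output) edge.
        self.coef = rng.normal(0.0, 0.1, size=(in_dim, out_dim, n_bases))

    def forward(self, x):
        # x: (batch, in_dim). Evaluate the smooth basis functions at each input,
        # weight them per edge, then sum over inputs -- the additive composition
        # at the heart of the Kolmogorov-Arnold representation.
        phi = np.exp(-(((x[..., None] - self.centers) / self.width) ** 2))  # (B, in, n_bases)
        edge_out = np.einsum('bik,iok->bio', phi, self.coef)                # (B, in, out)
        return edge_out.sum(axis=1)                                         # (B, out)

layer = ToyKANLayer(in_dim=3, out_dim=2)
y = layer.forward(np.zeros((4, 3)))
print(y.shape)  # (4, 2)
```

Because every basis function is smooth in its input, the layer's output varies continuously with the input — the property the abstract credits for smoother trajectories than architectures built from piecewise-linear activations.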
Journal Introduction:
The scope of this journal is to publish peer-reviewed articles that provide a timely and concise account of innovative research ideas and application results, reporting significant theoretical findings and application case studies in areas of robotics and automation.