Kolmogorov-Arnold Networks in Low-Data Regimes: A Comparative Study with Multilayer Perceptrons

Farhad Pourkamali-Anaraki
{"title":"Kolmogorov-Arnold Networks in Low-Data Regimes: A Comparative Study with Multilayer Perceptrons","authors":"Farhad Pourkamali-Anaraki","doi":"arxiv-2409.10463","DOIUrl":null,"url":null,"abstract":"Multilayer Perceptrons (MLPs) have long been a cornerstone in deep learning,\nknown for their capacity to model complex relationships. Recently,\nKolmogorov-Arnold Networks (KANs) have emerged as a compelling alternative,\nutilizing highly flexible learnable activation functions directly on network\nedges, a departure from the neuron-centric approach of MLPs. However, KANs\nsignificantly increase the number of learnable parameters, raising concerns\nabout their effectiveness in data-scarce environments. This paper presents a\ncomprehensive comparative study of MLPs and KANs from both algorithmic and\nexperimental perspectives, with a focus on low-data regimes. We introduce an\neffective technique for designing MLPs with unique, parameterized activation\nfunctions for each neuron, enabling a more balanced comparison with KANs. Using\nempirical evaluations on simulated data and two real-world data sets from\nmedicine and engineering, we explore the trade-offs between model complexity\nand accuracy, with particular attention to the role of network depth. Our\nfindings show that MLPs with individualized activation functions achieve\nsignificantly higher predictive accuracy with only a modest increase in\nparameters, especially when the sample size is limited to around one hundred.\nFor example, in a three-class classification problem within additive\nmanufacturing, MLPs achieve a median accuracy of 0.91, significantly\noutperforming KANs, which only reach a median accuracy of 0.53 with default\nhyperparameters. These results offer valuable insights into the impact of\nactivation function selection in neural networks.","PeriodicalId":501340,"journal":{"name":"arXiv - STAT - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - STAT - Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.10463","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Multilayer Perceptrons (MLPs) have long been a cornerstone in deep learning, known for their capacity to model complex relationships. Recently, Kolmogorov-Arnold Networks (KANs) have emerged as a compelling alternative, utilizing highly flexible learnable activation functions directly on network edges, a departure from the neuron-centric approach of MLPs. However, KANs significantly increase the number of learnable parameters, raising concerns about their effectiveness in data-scarce environments. This paper presents a comprehensive comparative study of MLPs and KANs from both algorithmic and experimental perspectives, with a focus on low-data regimes. We introduce an effective technique for designing MLPs with unique, parameterized activation functions for each neuron, enabling a more balanced comparison with KANs. Using empirical evaluations on simulated data and two real-world data sets from medicine and engineering, we explore the trade-offs between model complexity and accuracy, with particular attention to the role of network depth. Our findings show that MLPs with individualized activation functions achieve significantly higher predictive accuracy with only a modest increase in parameters, especially when the sample size is limited to around one hundred. For example, in a three-class classification problem within additive manufacturing, MLPs achieve a median accuracy of 0.91, significantly outperforming KANs, which only reach a median accuracy of 0.53 with default hyperparameters. These results offer valuable insights into the impact of activation function selection in neural networks.
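The abstract does not spell out the exact per-neuron parameterization the authors use, so the following is only an illustrative sketch under that caveat: a PyTorch MLP in which every hidden neuron carries its own learnable activation parameter, here a PReLU-style slope standing in for the paper's technique. The point it illustrates is the trade-off the abstract describes: individualized activations add just one extra scalar per hidden unit, whereas KANs attach a full learnable function to every edge.

```python
# Minimal sketch (not the paper's exact method): an MLP whose hidden
# neurons each have an individualized, learnable activation parameter.
# nn.PReLU(num_parameters=h) learns one negative-slope coefficient per
# neuron, so the overhead is a single scalar per hidden unit.
import torch
import torch.nn as nn


class PerNeuronActivationMLP(nn.Module):
    def __init__(self, in_dim: int, hidden_dims: list[int], out_dim: int):
        super().__init__()
        layers = []
        prev = in_dim
        for h in hidden_dims:
            layers.append(nn.Linear(prev, h))
            # One learnable slope per neuron: a parameterized activation
            # that differs across the layer's units.
            layers.append(nn.PReLU(num_parameters=h))
            prev = h
        layers.append(nn.Linear(prev, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


# Hypothetical usage: a three-class classifier, echoing the
# additive-manufacturing example from the abstract (sizes are assumptions).
model = PerNeuronActivationMLP(in_dim=8, hidden_dims=[32, 32], out_dim=3)
logits = model(torch.randn(16, 8))  # shape: (16, 3)
```

With the hypothetical sizes above, the per-neuron slopes contribute only 64 parameters (32 + 32) beyond the linear layers' weights, consistent with the abstract's claim that individualized activations require "only a modest increase in parameters."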