{"title":"Kolmogorov-Arnold Networks in Low-Data Regimes: A Comparative Study with Multilayer Perceptrons","authors":"Farhad Pourkamali-Anaraki","doi":"arxiv-2409.10463","DOIUrl":null,"url":null,"abstract":"Multilayer Perceptrons (MLPs) have long been a cornerstone in deep learning,\nknown for their capacity to model complex relationships. Recently,\nKolmogorov-Arnold Networks (KANs) have emerged as a compelling alternative,\nutilizing highly flexible learnable activation functions directly on network\nedges, a departure from the neuron-centric approach of MLPs. However, KANs\nsignificantly increase the number of learnable parameters, raising concerns\nabout their effectiveness in data-scarce environments. This paper presents a\ncomprehensive comparative study of MLPs and KANs from both algorithmic and\nexperimental perspectives, with a focus on low-data regimes. We introduce an\neffective technique for designing MLPs with unique, parameterized activation\nfunctions for each neuron, enabling a more balanced comparison with KANs. Using\nempirical evaluations on simulated data and two real-world data sets from\nmedicine and engineering, we explore the trade-offs between model complexity\nand accuracy, with particular attention to the role of network depth. Our\nfindings show that MLPs with individualized activation functions achieve\nsignificantly higher predictive accuracy with only a modest increase in\nparameters, especially when the sample size is limited to around one hundred.\nFor example, in a three-class classification problem within additive\nmanufacturing, MLPs achieve a median accuracy of 0.91, significantly\noutperforming KANs, which only reach a median accuracy of 0.53 with default\nhyperparameters. These results offer valuable insights into the impact of\nactivation function selection in neural networks.","PeriodicalId":501340,"journal":{"name":"arXiv - STAT - Machine Learning","volume":"1 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - STAT - Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.10463","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Multilayer Perceptrons (MLPs) have long been a cornerstone in deep learning,
known for their capacity to model complex relationships. Recently,
Kolmogorov-Arnold Networks (KANs) have emerged as a compelling alternative,
utilizing highly flexible learnable activation functions directly on network
edges, a departure from the neuron-centric approach of MLPs. However, KANs
significantly increase the number of learnable parameters, raising concerns
about their effectiveness in data-scarce environments. This paper presents a
comprehensive comparative study of MLPs and KANs from both algorithmic and
experimental perspectives, with a focus on low-data regimes. We introduce an
effective technique for designing MLPs with unique, parameterized activation
functions for each neuron, enabling a more balanced comparison with KANs. Using
empirical evaluations on simulated data and two real-world data sets from
medicine and engineering, we explore the trade-offs between model complexity
and accuracy, with particular attention to the role of network depth. Our
findings show that MLPs with individualized activation functions achieve
significantly higher predictive accuracy with only a modest increase in
parameters, especially when the sample size is limited to around one hundred.
For example, in a three-class classification problem within additive
manufacturing, MLPs achieve a median accuracy of 0.91, significantly
outperforming KANs, which only reach a median accuracy of 0.53 with default
hyperparameters. These results offer valuable insights into the impact of
activation function selection in neural networks.
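
To make the edge-activation idea concrete, the sketch below implements a single KAN-style layer in PyTorch. It is a minimal illustration under stated assumptions, not the authors' implementation: the learnable per-edge functions are parameterized with a fixed Gaussian radial basis rather than the B-splines of the original KAN formulation, and the grid range, number of basis functions, and initialization scale are arbitrary choices.

import torch
import torch.nn as nn


class KANStyleLayer(nn.Module):
    """One learnable univariate function per input-output edge, built from a
    fixed Gaussian basis; each output sums its incoming edge functions."""

    def __init__(self, in_dim, out_dim, num_basis=8):
        super().__init__()
        # Fixed grid of basis centers shared by every edge (assumed range [-2, 2]).
        self.register_buffer("centers", torch.linspace(-2.0, 2.0, num_basis))
        self.width = 4.0 / (num_basis - 1)  # basis width matches the grid spacing
        # One coefficient per (output, input, basis) triple -> a separate
        # learnable function on every edge.
        self.coef = nn.Parameter(0.1 * torch.randn(out_dim, in_dim, num_basis))

    def forward(self, x):
        # x: (batch, in_dim). Evaluate every basis function at each input value.
        basis = torch.exp(-(((x.unsqueeze(-1) - self.centers) / self.width) ** 2))
        # Weight the basis by per-edge coefficients, then sum over incoming edges.
        return torch.einsum("bik,oik->bo", basis, self.coef)

The parameter count makes the abstract's concern visible: a dense layer mapping 64 inputs to 64 outputs needs 64 x 64 weights, whereas this layer needs 64 x 64 x num_basis coefficients, a growth factor that matters most when only around one hundred training samples are available.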
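On the MLP side, the abstract states that each neuron receives its own parameterized activation function but does not specify the form, so the following sketch makes an assumption: every hidden neuron uses a trainable Swish-like activation x * sigmoid(beta * x) with its own beta. This adds only one parameter per neuron, in line with the "modest increase in parameters" described above, but the parameterization, layer widths, and depth here are purely illustrative.

import torch
import torch.nn as nn


class PerNeuronSwish(nn.Module):
    """Trainable activation x * sigmoid(beta * x) with one beta per neuron."""

    def __init__(self, num_neurons):
        super().__init__()
        # Only num_neurons extra parameters beyond a plain MLP layer.
        self.beta = nn.Parameter(torch.ones(num_neurons))

    def forward(self, x):
        return x * torch.sigmoid(self.beta * x)


class AdaptiveMLP(nn.Module):
    """Standard MLP, except every hidden neuron carries its own activation parameter."""

    def __init__(self, in_dim, hidden_dim, out_dim, depth=2):
        super().__init__()
        layers, width = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(width, hidden_dim), PerNeuronSwish(hidden_dim)]
            width = hidden_dim
        layers.append(nn.Linear(width, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

An instantiation such as AdaptiveMLP(in_dim, hidden_dim, out_dim=3, depth=2) would roughly correspond to a three-class classifier of the kind evaluated in the additive-manufacturing example, with the hidden width and depth left as free hyperparameters.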