{"title":"Exponential sampling type neural network Kantorovich operators based on Hadamard fractional integral","authors":"Purshottam N. Agrawal, Behar Baxhaku","doi":"10.1007/s13540-025-00418-0","DOIUrl":null,"url":null,"abstract":"<p>This study introduces a novel family of exponential sampling type neural network Kantorovich operators, leveraging Hadamard fractional integrals to significantly enhance function approximation capabilities. By incorporating a flexible parameter <span>\\(\\alpha \\)</span>, derived from fractional Hadamard integrals, and utilizing exponential sampling, introduced to tackle exponentially sampled data, our operators address critical limitations of existing methods, providing substantial improvements in approximation accuracy. We establish fundamental convergence theorems for continuous functions and demonstrate effectiveness in <i>p</i>th Lebesgue integrable spaces. Approximation degrees are quantified using logarithmic moduli of continuity, asymptotic expansions, and Peetre’s <i>K</i>-functional for <i>r</i>-times continuously differentiable functions. A Voronovskaja-type theorem confirms higher-order convergence via linear combinations. Extensions to multivariate cases are proven for convergence in <span>\\({L}_{{p}}\\)</span>-spaces <span>\\((1\\le {p}<\\infty ).\\)</span> MATLAB algorithms and illustrative examples validate theoretical findings, confirming convergence, computational efficiency, and operator consistency. We analyze the impact of various sigmoidal activation functions on approximation errors, presented via tables and graphs for one and two-dimensional cases. To demonstrate practical utility, we apply these operators to image scaling, focusing on the “Butterfly” dataset. With fractional parameter <span>\\(\\alpha =2\\)</span>, our operators, activated by a parametric sigmoid function, consistently outperform standard interpolation methods. Significant improvements in Structural Similarity Index Measure (SSIM) and Peak Signal-to-Noise Ratio (PSNR) are observed at <span>\\({m}=128\\)</span>, highlighting the operators’ efficacy in preserving image quality during upscaling. These results, combining theoretical rigor, computational validation, and practical application to image scaling, showcase the performance advantage of our proposed operators. By integrating fractional calculus and neural network theory, this work advances constructive approximation and image processing.</p>","PeriodicalId":48928,"journal":{"name":"Fractional Calculus and Applied Analysis","volume":"3 1","pages":""},"PeriodicalIF":2.5000,"publicationDate":"2025-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Fractional Calculus and Applied Analysis","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1007/s13540-025-00418-0","RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS","Score":null,"Total":0}
Abstract
This study introduces a novel family of exponential sampling type neural network Kantorovich operators, leveraging Hadamard fractional integrals to significantly enhance function approximation capabilities. By incorporating a flexible parameter \(\alpha\) derived from fractional Hadamard integrals, and by employing exponential sampling to handle exponentially sampled data, our operators address critical limitations of existing methods and provide substantial improvements in approximation accuracy. We establish fundamental convergence theorems for continuous functions and demonstrate effectiveness in the spaces of \(p\)th power Lebesgue-integrable functions. Approximation degrees are quantified using logarithmic moduli of continuity, asymptotic expansions, and Peetre’s \(K\)-functional for \(r\)-times continuously differentiable functions. A Voronovskaja-type theorem confirms higher-order convergence via linear combinations. Extensions to the multivariate case are proven to converge in the \(L_p\)-spaces \((1 \le p < \infty)\). MATLAB algorithms and illustrative examples validate the theoretical findings, confirming convergence, computational efficiency, and operator consistency. We analyze the impact of various sigmoidal activation functions on approximation errors, presented via tables and graphs for one- and two-dimensional cases. To demonstrate practical utility, we apply these operators to image scaling, focusing on the “Butterfly” dataset. With fractional parameter \(\alpha = 2\), our operators, activated by a parametric sigmoid function, consistently outperform standard interpolation methods. Significant improvements in Structural Similarity Index Measure (SSIM) and Peak Signal-to-Noise Ratio (PSNR) are observed at \(m = 128\), highlighting the operators’ efficacy in preserving image quality during upscaling. These results, combining theoretical rigor, computational validation, and practical application to image scaling, showcase the performance advantage of the proposed operators. By integrating fractional calculus and neural network theory, this work advances constructive approximation and image processing.
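For context, the two ingredients named in the abstract have standard forms in the literature. The display below is a sketch of those classical definitions only, not the paper’s exact Hadamard-fractional neural network operators: the Hadamard fractional integral of order \(\alpha > 0\) and the exponential sampling Kantorovich operator with kernel \(\chi\),

\[
\left(\mathcal{J}^{\alpha}_{1+} f\right)(x) = \frac{1}{\Gamma(\alpha)} \int_{1}^{x} \left(\log \frac{x}{t}\right)^{\alpha - 1} f(t)\, \frac{dt}{t},
\qquad
\left(K_{w}^{\chi} f\right)(x) = \sum_{k=-\infty}^{\infty} \chi\!\left(e^{-k} x^{w}\right)\, w \int_{k/w}^{(k+1)/w} f(e^{u})\, du, \quad x > 0,
\]

where, in the neural network setting, the kernel \(\chi\) is generated from a sigmoidal activation function.

The image-scaling experiment can be reproduced in outline with standard metrics. The sketch below is not the authors’ MATLAB code: it uses scikit-image’s built-in “camera” test image as a stand-in for the “Butterfly” dataset, downscales to a 128×128 low-resolution image (128 is used here only as an illustrative size echoing the paper’s \(m = 128\)), upscales with a baseline interpolation, and reports the SSIM and PSNR scores against which the proposed operators would be compared.

```python
# Hedged sketch (not the authors' MATLAB code): score an upscaling method with
# SSIM and PSNR, the two metrics the paper reports for its image-scaling test.
# The "camera" image stands in for the paper's "Butterfly" dataset, and the
# cubic-spline resize is the kind of baseline interpolation the proposed
# operators are compared against.
import numpy as np
from skimage import data
from skimage.transform import resize
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

original = data.camera().astype(np.float64) / 255.0   # 512x512 grayscale test image in [0, 1]
m = 128                                                # illustrative low-resolution size

low_res = resize(original, (m, m), anti_aliasing=True)        # simulate the reduced sampled data
upscaled = resize(low_res, original.shape, order=3)           # baseline cubic-spline upscaling

ssim_val = structural_similarity(original, upscaled, data_range=1.0)
psnr_val = peak_signal_noise_ratio(original, upscaled, data_range=1.0)
print(f"SSIM: {ssim_val:.4f}   PSNR: {psnr_val:.2f} dB")
```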
Journal Description:
Fractional Calculus and Applied Analysis (FCAA, abbreviated in the world databases as Fract. Calc. Appl. Anal. or FRACT CALC APPL ANAL) is a specialized international journal for the theory and applications of an important branch of mathematical analysis (calculus) in which differentiation and integration can be of arbitrary non-integer order. The high standard of its contents is guaranteed by the prominent members of the Editorial Board and the expertise of invited external reviewers, and is reflected in the recently achieved high values of the journal impact factor (JIF) and SCImago Journal Rank (SJR), which have placed the journal among the top titles in the Thomson Reuters and Scopus ranking lists.