Exponential sampling type neural network Kantorovich operators based on Hadamard fractional integral

Impact Factor: 2.5 · CAS Zone 2 (Mathematics) · JCR Q1 (MATHEMATICS)
Purshottam N. Agrawal, Behar Baxhaku
DOI: 10.1007/s13540-025-00418-0
Journal: Fractional Calculus and Applied Analysis
Published: 2025-05-12
Citations: 0

Abstract

This study introduces a novel family of exponential sampling type neural network Kantorovich operators that leverage Hadamard fractional integrals to enhance function approximation. By incorporating a flexible parameter \(\alpha\) derived from the fractional Hadamard integral, and by employing exponential sampling to handle exponentially spaced data, the operators address key limitations of existing methods and yield substantial improvements in approximation accuracy. We establish fundamental convergence theorems for continuous functions and demonstrate effectiveness in the spaces of \(p\)-th power Lebesgue-integrable functions. Degrees of approximation are quantified via logarithmic moduli of continuity, asymptotic expansions, and Peetre's \(K\)-functional for \(r\)-times continuously differentiable functions. A Voronovskaja-type theorem confirms higher-order convergence through linear combinations of the operators. Multivariate extensions are shown to converge in \(L_p\)-spaces \((1 \le p < \infty)\). MATLAB algorithms and illustrative examples validate the theoretical findings, confirming convergence, computational efficiency, and operator consistency. We also analyze how various sigmoidal activation functions affect the approximation error, reporting results in tables and graphs for the one- and two-dimensional cases. To demonstrate practical utility, we apply the operators to image scaling on the “Butterfly” dataset. With fractional parameter \(\alpha = 2\) and a parametric sigmoid activation, our operators consistently outperform standard interpolation methods: significant gains in Structural Similarity Index Measure (SSIM) and Peak Signal-to-Noise Ratio (PSNR) are observed at \(m = 128\), highlighting the operators' efficacy in preserving image quality during upscaling. These results, which combine theoretical rigor, computational validation, and a practical application to image scaling, showcase the performance advantage of the proposed operators.
By integrating fractional calculus and neural network theory, this work advances constructive approximation and image processing.
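The abstract does not reproduce the operator's definition, so the sketch below is only a plausible illustration of the general construction it describes: a Kantorovich-type series on the exponentially spaced nodes \(e^{k/w}\), with a kernel generated by a sigmoidal activation. The logistic sigmoid, the kernel \(\phi(t) = \sigma(t + 1/2) - \sigma(t - 1/2)\), the truncation bound `K`, and the midpoint quadrature are all assumptions chosen for illustration; in particular, the Hadamard fractional integral of order \(\alpha\) that defines the paper's operators is omitted here.

```python
import numpy as np

def sigmoid(t):
    """Logistic sigmoidal activation (an assumed choice, for illustration)."""
    return 1.0 / (1.0 + np.exp(-t))

def kernel(t):
    """Density kernel generated by the sigmoid.

    Integer translates telescope: sum_k kernel(t - k) = sigmoid(t + N + 1/2)
    - sigmoid(t - N - 1/2) -> 1, so the translates form a partition of unity.
    """
    return sigmoid(t + 0.5) - sigmoid(t - 0.5)

def exp_kantorovich(f, x, w, K=60, quad_pts=32):
    """Exponential-sampling Kantorovich-type approximation of f at x > 0.

    Point samples at the nodes e^(k/w) are replaced by mean values of
    f(e^u) over the cells [k/w, (k+1)/w] -- the Kantorovich modification
    that makes the operator meaningful for merely integrable f.
    """
    t = w * np.log(x)          # work on the logarithmic axis
    k0 = int(np.floor(t))
    total = 0.0
    for k in range(k0 - K, k0 + K + 1):
        # midpoint rule for the cell average of f(e^u) on [k/w, (k+1)/w]
        u = (k + (np.arange(quad_pts) + 0.5) / quad_pts) / w
        cell_mean = f(np.exp(u)).mean()
        total += cell_mean * kernel(t - k)
    return total

# usage: approximate f(x) = log(x); the error shrinks as w grows
approx = exp_kantorovich(np.log, 2.0, w=50)
```

Because the logistic kernel decays exponentially, the truncated sum over `2K + 1` terms is an accurate stand-in for the full series.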

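The abstract reports image-scaling quality via SSIM and PSNR. As a minimal, self-contained illustration of the PSNR part of that protocol (SSIM requires windowed local statistics and is omitted), the sketch below compares an image with its downsample-then-nearest-neighbour-upscale reconstruction. The gradient test image and the 2× factor are placeholders, not the paper's “Butterfly” setup at \(m = 128\).

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two equally sized images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return np.inf          # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# toy check: 128x128 horizontal gradient, naively downsampled 2x and
# upscaled back with nearest-neighbour replication
img = np.tile(np.arange(0, 256, 2, dtype=np.float64), (128, 1))
down = img[::2, ::2]
up = np.repeat(np.repeat(down, 2, axis=0), 2, axis=1)
score = psnr(img, up)          # about 45 dB for this toy gradient
```

A higher-quality upscaler (such as the proposed operators, per the abstract) would raise this score relative to naive interpolation.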
Source journal: Fractional Calculus and Applied Analysis (Mathematics; Applied Mathematics; Interdisciplinary Applications)
CiteScore: 4.70
Self-citation rate: 16.70%
Articles per year: 101
Journal description: Fractional Calculus and Applied Analysis (FCAA, abbreviated in the world databases as Fract. Calc. Appl. Anal. or FRACT CALC APPL ANAL) is a specialized international journal for the theory and applications of an important branch of mathematical analysis (calculus) in which differentiation and integration can be of arbitrary non-integer order. The high standard of its contents is guaranteed by the prominent members of the Editorial Board and the expertise of invited external reviewers, and is borne out by the recently achieved high impact factor (JIF) and impact rank (SJR), which have placed the journal near the top of the Thomson Reuters and Scopus ranking lists.