{"title":"学习理论中分数阶Tikhonov正则化方案的一类参数选择规则","authors":"Sreepriya P., Denny K.D., G.D. Reddy","doi":"10.1016/j.amc.2025.129447","DOIUrl":null,"url":null,"abstract":"<div><div>Klann and Ramlau <span><span>[16]</span></span> hypothesized fractional Tikhonov regularization as an interpolation between generalized inverse and Tikhonov regularization. In fact, fractional schemes can be viewed as a generalization of the Tikhonov scheme. One of the motives of this work is the major pitfall of the a priori parameter choice rule, which primarily relies on source conditions that are often unknown. It necessitates the need for advocating a data-driven approach (a posteriori choice strategy). We briefly overview fractional scheme in learning theory and propose a modified Engl type <span><span>[9]</span></span> discrepancy principle, thus integrating supervised learning into the field of inverse problems. In due course of the investigation, we effectively explored the relation between learning from examples and the inverse problems. We demonstrate the regularization properties and establish the convergence rate of this scheme. Finally, the theoretical results are corroborated using two well known examples in learning theory.</div></div>","PeriodicalId":55496,"journal":{"name":"Applied Mathematics and Computation","volume":"500 ","pages":"Article 129447"},"PeriodicalIF":3.4000,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A class of parameter choice rules for fractional Tikhonov regularization scheme in learning theory\",\"authors\":\"Sreepriya P., Denny K.D., G.D. Reddy\",\"doi\":\"10.1016/j.amc.2025.129447\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Klann and Ramlau <span><span>[16]</span></span> hypothesized fractional Tikhonov regularization as an interpolation between generalized inverse and Tikhonov regularization. 
In fact, fractional schemes can be viewed as a generalization of the Tikhonov scheme. One of the motives of this work is the major pitfall of the a priori parameter choice rule, which primarily relies on source conditions that are often unknown. It necessitates the need for advocating a data-driven approach (a posteriori choice strategy). We briefly overview fractional scheme in learning theory and propose a modified Engl type <span><span>[9]</span></span> discrepancy principle, thus integrating supervised learning into the field of inverse problems. In due course of the investigation, we effectively explored the relation between learning from examples and the inverse problems. We demonstrate the regularization properties and establish the convergence rate of this scheme. Finally, the theoretical results are corroborated using two well known examples in learning theory.</div></div>\",\"PeriodicalId\":55496,\"journal\":{\"name\":\"Applied Mathematics and Computation\",\"volume\":\"500 \",\"pages\":\"Article 129447\"},\"PeriodicalIF\":3.4000,\"publicationDate\":\"2025-04-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Applied Mathematics and Computation\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0096300325001742\",\"RegionNum\":2,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MATHEMATICS, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Mathematics and 
Computation","FirstCategoryId":"100","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0096300325001742","RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
A class of parameter choice rules for fractional Tikhonov regularization scheme in learning theory
Klann and Ramlau [16] proposed fractional Tikhonov regularization as an interpolation between the generalized inverse and Tikhonov regularization; indeed, fractional schemes can be viewed as a generalization of the Tikhonov scheme. One motivation for this work is a major pitfall of the a priori parameter choice rule: it relies primarily on source conditions, which are often unknown. This necessitates a data-driven approach, i.e., an a posteriori choice strategy. We briefly review the fractional scheme in learning theory and propose a modified Engl-type [9] discrepancy principle, thereby integrating supervised learning into the field of inverse problems. In the course of the investigation, we explore the relation between learning from examples and inverse problems. We establish the regularization properties and the convergence rate of this scheme. Finally, the theoretical results are corroborated using two well-known examples from learning theory.
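The two ingredients described above — a fractional filter interpolating between the generalized inverse and classical Tikhonov regularization, combined with an a posteriori (discrepancy-type) parameter choice — can be sketched numerically. The filter form phi(s) = s^(2*gamma) / (s^2 + alpha)^gamma and the plain Morozov-style rule below are illustrative assumptions chosen for their interpolation property; they are not the paper's exact modified Engl-type principle or its learning-theoretic setting.

```python
import numpy as np

def fractional_tikhonov(A, y, alpha, gamma=0.5):
    # Solve the linear inverse problem A x = y by filtering the SVD
    # coefficients.  The (assumed) fractional filter
    #     phi(s) = s^(2*gamma) / (s^2 + alpha)^gamma
    # reduces to the classical Tikhonov filter s^2 / (s^2 + alpha)
    # at gamma = 1 and tends to 1 (the generalized inverse) as gamma -> 0.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    phi = s**(2 * gamma) / (s**2 + alpha)**gamma
    s_safe = np.where(s > 0, s, 1.0)              # guard exact zeros
    coeffs = np.where(s > 0, phi * (U.T @ y) / s_safe, 0.0)
    return Vt.T @ coeffs

def discrepancy_alpha(A, y, delta, gamma=0.5, tau=1.1,
                      lo=1e-12, hi=1e2, iters=60):
    # Morozov-type discrepancy principle (a standard a posteriori rule;
    # the paper's modified Engl-type rule refines this idea): bisect in
    # log(alpha) for the alpha whose residual ||A x_alpha - y|| matches
    # tau * delta.  The residual increases monotonically with alpha,
    # so the geometric-mean bisection below brackets the root.
    def residual(alpha):
        x = fractional_tikhonov(A, y, alpha, gamma)
        return np.linalg.norm(A @ x - y)
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        if residual(mid) < tau * delta:
            lo = mid   # residual too small -> regularize more
        else:
            hi = mid
    return np.sqrt(lo * hi)

# Example: a mildly ill-posed Hilbert-matrix problem with noise of
# known level delta (all problem data here is synthetic).
n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
rng = np.random.default_rng(0)
noise = rng.standard_normal(n)
delta = 1e-3
y = A @ x_true + delta * noise / np.linalg.norm(noise)
alpha = discrepancy_alpha(A, y, delta, gamma=0.5)
x_rec = fractional_tikhonov(A, y, alpha, gamma=0.5)
```

Bisecting in log alpha (via the geometric mean of the bracket) is the natural choice here, since regularization parameters vary over many orders of magnitude and the residual is monotone in alpha.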
Journal Introduction:
Applied Mathematics and Computation addresses work at the interface between applied mathematics, numerical computation, and applications of systems-oriented ideas to the physical, biological, social, and behavioral sciences, and emphasizes papers of a computational nature focusing on new algorithms, their analysis, and numerical results.
In addition to presenting research papers, Applied Mathematics and Computation publishes review articles and single-topic issues.