{"title":"Estimates on learning rates for multi-penalty distribution regression","authors":"Zhan Yu , Daniel W.C. Ho","doi":"10.1016/j.acha.2023.101609","DOIUrl":null,"url":null,"abstract":"<div><p><span><span>This paper is concerned with functional learning by utilizing two-stage sampled distribution regression. We study a multi-penalty regularization algorithm for distribution regression in the framework of learning theory. The algorithm aims at regressing to real-valued outputs from probability measures. The theoretical analysis of distribution regression is far from maturity and quite challenging since only second-stage samples are observable in practical settings. In our algorithm, to transform information of distribution samples, we embed the distributions to a reproducing kernel </span>Hilbert space </span><span><math><msub><mrow><mi>H</mi></mrow><mrow><mi>K</mi></mrow></msub></math></span> associated with Mercer kernel <em>K</em> via mean embedding technique. One of the primary contributions of this work is the introduction of a novel multi-penalty regularization algorithm, which is able to capture more potential features of distribution regression. Optimal learning rates of the algorithm are obtained under mild conditions. The work also derives learning rates for distribution regression in the hard learning scenario <span><math><msub><mrow><mi>f</mi></mrow><mrow><mi>ρ</mi></mrow></msub><mo>∉</mo><msub><mrow><mi>H</mi></mrow><mrow><mi>K</mi></mrow></msub></math></span>, which has not been explored in the existing literature. Moreover, we propose a new distribution-regression-based distributed learning algorithm to face large-scale data or information challenges arising from distribution data. The optimal learning rates are derived for the distributed learning algorithm. By providing new algorithms and showing their learning rates, the work improves the existing literature in various aspects.</p></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"69 ","pages":"Article 101609"},"PeriodicalIF":2.6000,"publicationDate":"2023-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied and Computational Harmonic Analysis","FirstCategoryId":"100","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1063520323000969","RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Citations: 0
Abstract
This paper is concerned with functional learning via two-stage sampled distribution regression. We study a multi-penalty regularization algorithm for distribution regression in the framework of learning theory. The algorithm regresses to real-valued outputs from probability measures. The theoretical analysis of distribution regression is far from mature and quite challenging, since only second-stage samples are observable in practical settings. In our algorithm, to encode the information carried by the distribution samples, we embed the distributions into a reproducing kernel Hilbert space $\mathcal{H}_K$ associated with a Mercer kernel $K$ via the mean embedding technique. One of the primary contributions of this work is the introduction of a novel multi-penalty regularization algorithm, which is able to capture more potential features of distribution regression. Optimal learning rates of the algorithm are obtained under mild conditions. The work also derives learning rates for distribution regression in the hard learning scenario $f_{\rho} \notin \mathcal{H}_K$, which has not been explored in the existing literature. Moreover, we propose a new distribution-regression-based distributed learning algorithm to address the large-scale data and information challenges arising from distribution data. Optimal learning rates are also derived for the distributed learning algorithm. By providing new algorithms and establishing their learning rates, the work improves on the existing literature in several respects.
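To make the two-stage setup concrete, the sketch below implements a minimal version of distribution regression: each input is a bag of second-stage samples, the bags are mapped to empirical kernel mean embeddings in $\mathcal{H}_K$, and a regression function is fit by least squares with two penalty terms. This is an illustration only, not the paper's algorithm: the Gaussian kernel, the linear second-level kernel between embeddings, the particular choice of second penalty (a plain coefficient penalty), and all function names are assumptions made for this example.

```python
# Illustrative sketch (assumed setup, not the paper's exact algorithm):
# two-stage sampled distribution regression with empirical kernel mean
# embeddings and a two-penalty regularized least-squares fit.
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    """Gaussian (Mercer) kernel matrix between rows of A and rows of B."""
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def mean_embedding_gram(bags, gamma=1.0):
    """Gram matrix of empirical kernel mean embeddings.

    bags[i] is the second-stage sample of the i-th distribution, an
    (n_i, d) array.  Entry (i, j) approximates the RKHS inner product
    of the mean embeddings, i.e. the average of K(x_{i,s}, x_{j,t})
    over all pairs of second-stage points.
    """
    N = len(bags)
    G = np.empty((N, N))
    for i in range(N):
        for j in range(i, N):
            G[i, j] = G[j, i] = gaussian_kernel(bags[i], bags[j], gamma).mean()
    return G

def multi_penalty_fit(G, y, lam1=1e-2, lam2=1e-3):
    """Coefficients alpha minimizing

        (1/N) * ||G alpha - y||^2 + lam1 * alpha^T G alpha + lam2 * ||alpha||^2,

    i.e. a squared-loss fit with an RKHS-norm penalty plus a second
    (coefficient) penalty -- one simple instance of a multi-penalty scheme.
    Setting the gradient to zero gives the linear system solved below.
    """
    N = len(y)
    A = G @ G / N + lam1 * G + lam2 * np.eye(N)
    return np.linalg.solve(A, G @ y / N)

def predict(bags_train, alpha, bags_new, gamma=1.0):
    """Predict via f(mu_new) = sum_i alpha_i <mu_new, mu_i>."""
    k = np.array([[gaussian_kernel(b_new, b_tr, gamma).mean()
                   for b_tr in bags_train] for b_new in bags_new])
    return k @ alpha

if __name__ == "__main__":
    # Toy usage: regress each distribution to its mean from 50 samples per bag.
    rng = np.random.default_rng(0)
    means = rng.uniform(-2, 2, size=30)
    bags = [rng.normal(m, 1.0, size=(50, 1)) for m in means]
    y = means
    G = mean_embedding_gram(bags, gamma=0.5)
    alpha = multi_penalty_fit(G, y, lam1=1e-2, lam2=1e-3)
    print(predict(bags, alpha, bags[:3], gamma=0.5), y[:3])
```

Even with two regularization parameters the estimator retains a closed form (a single linear solve); the balance between the penalty parameters is what a multi-penalty analysis tunes to obtain learning rates. In the same spirit, a distributed variant along the lines described in the abstract would run this solve on disjoint subsets of the bags and average the local predictors, though the precise scheme and rates are those of the paper, not of this sketch.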
Journal Introduction:
Applied and Computational Harmonic Analysis (ACHA) is an interdisciplinary journal that publishes high-quality papers in all areas of mathematical sciences related to the applied and computational aspects of harmonic analysis, with special emphasis on innovative theoretical development, methods, and algorithms, for information processing, manipulation, understanding, and so forth. The objectives of the journal are to chronicle the important publications in the rapidly growing field of data representation and analysis, to stimulate research in relevant interdisciplinary areas, and to provide a common link among mathematical, physical, and life scientists, as well as engineers.