Nicoletta Del Buono, Flavia Esposito, Laura Selicato, Rafał Zdunek
{"title":"非负低秩逼近的带有分集度量的惩罚超参数优化","authors":"Nicoletta Del Buono , Flavia Esposito , Laura Selicato , Rafał Zdunek","doi":"10.1016/j.apnum.2024.10.002","DOIUrl":null,"url":null,"abstract":"<div><div>Learning tasks are often based on penalized optimization problems in which a sparse solution is desired. This can lead to more interpretative results by identifying a smaller subset of important features or components and reducing the dimensionality of the data representation, as well. In this study, we propose a new method to solve a constrained Frobenius norm-based nonnegative low-rank approximation, and the tuning of the associated penalty hyperparameter, simultaneously. The penalty term added is a particular diversity measure that is more effective for sparseness purposes than other classical norm-based penalties (i.e., <span><math><msub><mrow><mi>ℓ</mi></mrow><mrow><mn>1</mn></mrow></msub></math></span> or <span><math><msub><mrow><mi>ℓ</mi></mrow><mrow><mn>2</mn><mo>,</mo><mn>1</mn></mrow></msub></math></span> norms). As it is well known, setting the hyperparameters of an algorithm is not an easy task. Our work drew on developing an optimization method and the corresponding algorithm that simultaneously solves the sparsity-constrained nonnegative approximation problem and optimizes its associated penalty hyperparameters. We test the proposed method by numerical experiments and show its promising results on several synthetic and real datasets.</div></div>","PeriodicalId":8199,"journal":{"name":"Applied Numerical Mathematics","volume":"208 ","pages":"Pages 189-204"},"PeriodicalIF":2.2000,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Penalty hyperparameter optimization with diversity measure for nonnegative low-rank approximation\",\"authors\":\"Nicoletta Del Buono , Flavia Esposito , Laura Selicato , Rafał Zdunek\",\"doi\":\"10.1016/j.apnum.2024.10.002\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Learning tasks are often based on penalized optimization problems in which a sparse solution is desired. This can lead to more interpretative results by identifying a smaller subset of important features or components and reducing the dimensionality of the data representation, as well. In this study, we propose a new method to solve a constrained Frobenius norm-based nonnegative low-rank approximation, and the tuning of the associated penalty hyperparameter, simultaneously. The penalty term added is a particular diversity measure that is more effective for sparseness purposes than other classical norm-based penalties (i.e., <span><math><msub><mrow><mi>ℓ</mi></mrow><mrow><mn>1</mn></mrow></msub></math></span> or <span><math><msub><mrow><mi>ℓ</mi></mrow><mrow><mn>2</mn><mo>,</mo><mn>1</mn></mrow></msub></math></span> norms). As it is well known, setting the hyperparameters of an algorithm is not an easy task. Our work drew on developing an optimization method and the corresponding algorithm that simultaneously solves the sparsity-constrained nonnegative approximation problem and optimizes its associated penalty hyperparameters. 
We test the proposed method by numerical experiments and show its promising results on several synthetic and real datasets.</div></div>\",\"PeriodicalId\":8199,\"journal\":{\"name\":\"Applied Numerical Mathematics\",\"volume\":\"208 \",\"pages\":\"Pages 189-204\"},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2025-02-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Applied Numerical Mathematics\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0168927424002708\",\"RegionNum\":2,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MATHEMATICS, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Numerical Mathematics","FirstCategoryId":"100","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0168927424002708","RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Penalty hyperparameter optimization with diversity measure for nonnegative low-rank approximation
Learning tasks are often based on penalized optimization problems in which a sparse solution is desired. Sparsity can yield more interpretable results by identifying a smaller subset of important features or components, while also reducing the dimensionality of the data representation. In this study, we propose a new method that simultaneously solves a constrained Frobenius norm-based nonnegative low-rank approximation and tunes the associated penalty hyperparameter. The added penalty term is a particular diversity measure that is more effective for promoting sparseness than other classical norm-based penalties (i.e., the ℓ1 or ℓ2,1 norms). As is well known, setting the hyperparameters of an algorithm is not an easy task. Our work develops an optimization method, and the corresponding algorithm, that simultaneously solves the sparsity-constrained nonnegative approximation problem and optimizes its associated penalty hyperparameters. We test the proposed method in numerical experiments and show promising results on several synthetic and real datasets.
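For context, the general shape of such a penalized problem can be sketched as follows. This is an illustrative, generic formulation only: the symbols X, W, H, J, and λ are placeholder notation, and the specific diversity measure and update scheme used by the authors are detailed in the paper itself.

% Generic sketch (not necessarily the authors' exact formulation):
% approximate a nonnegative matrix X by a rank-r product WH, with a
% sparsity-promoting penalty J weighted by the hyperparameter lambda.
\begin{equation*}
  \min_{W \ge 0,\; H \ge 0}\;
  \frac{1}{2}\,\lVert X - WH \rVert_F^{2} \;+\; \lambda\, J(H),
  \qquad
  X \in \mathbb{R}_{\ge 0}^{m \times n},\;
  W \in \mathbb{R}^{m \times r},\;
  H \in \mathbb{R}^{r \times n},\; r \ll \min(m,n).
\end{equation*}

In this reading, the contribution of the paper is that λ is not fixed in advance by grid search or cross-validation but is optimized jointly with the factors of the approximation.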
About the journal:
The purpose of the journal is to provide a forum for the publication of high-quality research and tutorial papers in computational mathematics. In addition to the traditional issues and problems in numerical analysis, the journal also publishes papers describing relevant applications in such fields as physics, fluid dynamics, engineering and other branches of applied science with a computational mathematics component. The journal strives to be flexible in the type of papers it publishes and their format. Equally desirable are:
(i) Full papers, which should be complete and relatively self-contained original contributions with an introduction that can be understood by the broad computational mathematics community. Both rigorous and heuristic styles are acceptable. Of particular interest are papers about new areas of research, in which arguments other than strictly mathematical ones may be important in establishing a basis for further developments.
(ii) Tutorial review papers, covering some of the important issues in Numerical Mathematics, Scientific Computing and their Applications. The journal will occasionally publish contributions which are larger than the usual format for regular papers.
(iii) Short notes, which present specific new results and techniques in a brief communication.