Vinesha Peiris, Vera Roshchina, Nadezda Sukhorukova
{"title":"基于统一规范损失函数的人工神经网络","authors":"Vinesha Peiris, Vera Roshchina, Nadezda Sukhorukova","doi":"10.1007/s10444-024-10124-9","DOIUrl":null,"url":null,"abstract":"<div><p>We explore the potential for using a nonsmooth loss function based on the max-norm in the training of an artificial neural network without hidden layers. We hypothesise that this may lead to superior classification results in some special cases where the training data are either very small or the class size is disproportional. Our numerical experiments performed on a simple artificial neural network with no hidden layer appear to confirm our hypothesis.</p></div>","PeriodicalId":50869,"journal":{"name":"Advances in Computational Mathematics","volume":null,"pages":null},"PeriodicalIF":1.7000,"publicationDate":"2024-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10444-024-10124-9.pdf","citationCount":"0","resultStr":"{\"title\":\"Artificial neural networks with uniform norm-based loss functions\",\"authors\":\"Vinesha Peiris, Vera Roshchina, Nadezda Sukhorukova\",\"doi\":\"10.1007/s10444-024-10124-9\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>We explore the potential for using a nonsmooth loss function based on the max-norm in the training of an artificial neural network without hidden layers. We hypothesise that this may lead to superior classification results in some special cases where the training data are either very small or the class size is disproportional. Our numerical experiments performed on a simple artificial neural network with no hidden layer appear to confirm our hypothesis.</p></div>\",\"PeriodicalId\":50869,\"journal\":{\"name\":\"Advances in Computational Mathematics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2024-04-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://link.springer.com/content/pdf/10.1007/s10444-024-10124-9.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Advances in Computational Mathematics\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s10444-024-10124-9\",\"RegionNum\":3,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MATHEMATICS, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advances in Computational Mathematics","FirstCategoryId":"100","ListUrlMain":"https://link.springer.com/article/10.1007/s10444-024-10124-9","RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Artificial neural networks with uniform norm-based loss functions
We explore the potential of using a nonsmooth loss function based on the max-norm for training an artificial neural network without hidden layers. We hypothesise that this may lead to superior classification results in some special cases where the training dataset is very small or the class sizes are disproportionate. Our numerical experiments, performed on a simple artificial neural network with no hidden layer, appear to confirm our hypothesis.
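To make the idea concrete, the following is a minimal sketch, not the authors' algorithm: it trains a single-layer (no hidden layer) linear model by minimizing the uniform-norm (max-norm) loss, i.e. the largest absolute residual over the training set, using a plain subgradient step. The synthetic imbalanced data, the step size, and the subgradient scheme are illustrative assumptions only.

# Minimal sketch (illustrative, not the paper's method): max-norm loss on a
# no-hidden-layer model, optimized by subgradient descent.
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic binary-classification set with imbalanced classes (assumption).
X_pos = rng.normal(loc=+1.0, scale=0.5, size=(5, 2))    # minority class
X_neg = rng.normal(loc=-1.0, scale=0.5, size=(40, 2))   # majority class
X = np.vstack([X_pos, X_neg])
y = np.hstack([np.ones(5), -np.ones(40)])                # labels in {+1, -1}

w = np.zeros(2)
b = 0.0
step = 0.05

for _ in range(2000):
    residuals = X @ w + b - y                 # per-sample errors
    i = np.argmax(np.abs(residuals))          # sample attaining the maximum
    # A subgradient of max_i |r_i| with respect to (w, b) involves only the
    # worst-case sample, so the majority class cannot dominate the update
    # simply by contributing more terms, as it would with an averaged loss.
    g_w = np.sign(residuals[i]) * X[i]
    g_b = np.sign(residuals[i])
    w -= step * g_w
    b -= step * g_b

print("max abs residual:", np.max(np.abs(X @ w + b - y)))

This contrasts with a mean-based loss, where every sample contributes to each update and a large majority class can overwhelm a small one; the max-norm loss instead focuses each step on the single worst-fitted point.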
Journal overview:
Advances in Computational Mathematics publishes high-quality, accessible and original articles at the forefront of computational and applied mathematics, with a clear potential for impact across the sciences. The journal emphasizes three core areas: approximation theory and computational geometry; numerical analysis, modelling and simulation; imaging, signal processing and data analysis.
This journal welcomes papers that are accessible to a broad audience in the mathematical sciences and that show either an advance in computational methodology or a novel scientific application area, or both. Methods papers should rely on rigorous analysis and/or convincing numerical studies.