Regularization Strength Impact on Neural Network Ensembles

Cedrique Rovile Njieutcheu Tassi, A. Börner, Rudolph Triebel
DOI: 10.1145/3579654.3579661
Published in: Proceedings of the 2022 5th International Conference on Algorithms, Computing and Artificial Intelligence
Publication date: 2022-12-23
Citations: 0

Abstract

In the last decade, several approaches have been proposed for regularizing deeper and wider neural networks (NNs), which is of importance in areas like image classification. It is now common practice to incorporate several regularization approaches in the training procedure of NNs. However, the impact of regularization strength on the properties of an ensemble of NNs remains unclear. For this reason, the study empirically compared ensembles of NNs trained with two different regularization strengths (weak regularization (WR) and strong regularization (SR)) with respect to ensemble properties such as the magnitude of logits, classification accuracy, calibration error, and the ability to separate true predictions (TPs) from false predictions (FPs). The comparison was based on results from different experiments conducted on three different models, datasets, and architectures. Experimental results show that increasing the regularization strength 1) reduces the magnitude of logits; 2) can increase or decrease the classification accuracy depending on the dataset and/or architecture; 3) increases the calibration error; and 4) can improve or harm the separability between TPs and FPs depending on the dataset, architecture, model type, and/or FP type.
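The abstract does not specify which calibration metric was used; a common choice for this kind of study is the expected calibration error (ECE), which bins predictions by confidence and averages the gap between per-bin accuracy and per-bin confidence, weighted by bin size. A minimal numpy sketch of that standard estimator (the function name and binning scheme are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard binned ECE estimator: partition predictions into equal-width
    confidence bins and sum |bin accuracy - bin confidence|, weighted by the
    fraction of samples falling in each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            bin_acc = correct[mask].mean()    # empirical accuracy in this bin
            bin_conf = confidences[mask].mean()  # mean predicted confidence
            ece += mask.mean() * abs(bin_acc - bin_conf)
    return ece

# Overconfident classifier: 95% confidence but only 25% accuracy -> ECE 0.7.
print(expected_calibration_error([0.95, 0.95, 0.95, 0.95], [1, 0, 0, 0]))
```

This also illustrates how findings 1) and 3) can interact: stronger regularization shrinks the logits, which softens the softmax confidences, and if those confidences drift away from the empirical accuracy, the measured calibration error grows.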