A Large-Scale Study of Activation Functions in Modern Deep Neural Network Architectures for Efficient Convergence

Impact Factor: 3.4 · JCR Q2 (Computer Science, Artificial Intelligence)
Andrinandrasana David Rasamoelina, Ivan Cík, Peter Sincak, Marián Mach, Lukás Hruska
DOI: 10.4114/intartif.vol25iss70pp95-109
Journal: Inteligencia Artificial-Iberoamerical Journal of Artificial Intelligence
Published: 2022-12-08 (Journal Article)
Citations: 1

Abstract

Activation functions play an important role in the convergence of learning algorithms based on neural networks. They provide neural networks with nonlinearity and the ability to fit complex data. However, no in-depth study exists in the literature on the behavior of activation functions in modern architectures. Therefore, in this research, we compare the 18 most used activation functions on multiple datasets (CIFAR-10, CIFAR-100, CALTECH-256) using 4 different models (EfficientNet, ResNet, a variation of ResNet using the bag of tricks, and MobileNet V3). Furthermore, we explore the shape of the loss landscape of those different architectures with various activation functions. Lastly, based on the results of our experimentation, we introduce a new locally quadratic activation function, namely Hytana, alongside one variation, Parametric Hytana, which outperforms common activation functions and addresses the dying ReLU problem.
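The abstract does not reproduce the Hytana formula, but the dying ReLU problem it targets is standard background and can be sketched minimally in NumPy: ReLU's gradient is exactly zero for all negative pre-activations, so a unit that ends up producing only negative pre-activations receives no gradient signal and cannot recover during training.

```python
import numpy as np

def relu(x):
    """Standard ReLU: max(0, x)."""
    return np.maximum(0.0, x)

def relu_grad(x):
    """Subgradient of ReLU: 0 for x <= 0, 1 for x > 0."""
    return (x > 0).astype(float)

# A unit whose pre-activations are all negative outputs zero AND
# receives zero gradient, so gradient descent can never revive it --
# the "dying ReLU" problem the paper's Hytana function addresses.
pre_activations = np.array([-3.0, -1.5, -0.2])
print(relu(pre_activations))       # [0. 0. 0.]
print(relu_grad(pre_activations))  # [0. 0. 0.]
```

Activation functions with non-zero gradients on the negative side (e.g. Leaky ReLU, or locally quadratic functions such as the paper's Hytana) avoid this flat region, which is the motivation stated in the abstract.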
Journal metrics
CiteScore: 2.00
Self-citation rate: 0.00%
Articles published per year: 15
Review time: 8 weeks
Journal description: Inteligencia Artificial is a quarterly journal promoted and sponsored by the Spanish Association for Artificial Intelligence. The journal publishes high-quality original research papers reporting theoretical or applied advances in all branches of Artificial Intelligence. In particular, the journal welcomes: new approaches, techniques, or methods to solve AI problems, which should include reproducible demonstrations of effectiveness or improvement over existing methods; integration of different technologies or approaches to solve broad problems or problems belonging to different areas; and AI applications, which should describe the problem or scenario and the proposed solution in detail, emphasize its novelty, and present an evaluation of the AI techniques applied. In addition to rapid publication and dissemination of unsolicited contributions, the journal is also committed to producing monographs, surveys, or special issues on topics, methods, or techniques of special relevance to the AI community. Inteligencia Artificial welcomes submissions written in English, Spanish, or Portuguese, but at a minimum a title, abstract, and keywords in English should be included in each contribution.