Continuous Action Learning Automata Optimizer for training Artificial Neural Networks

J. Lindsay, S. Givigi
{"title":"Continuous Action Learning Automata Optimizer for training Artificial Neural Networks","authors":"J. Lindsay, S. Givigi","doi":"10.1109/SysCon53073.2023.10131086","DOIUrl":null,"url":null,"abstract":"This paper introduces a Continuous-Action Learning Automata (CALA) game optimizer that provides a generalized way to use a game of CALA agents to train Artificial Neural Networks (ANNs) and Deep ANNs. This method uses both game theory and learning automata, which makes it a computationally efficient method when compared against other non-gradient and non-back propagation methods. Since the CALA game optimizer does not use gradients or back propagation, issues such as the vanishing gradient problem do not manifest, which allows for the use of multiple activation functions such as sigmoid or tanh even in a Deep ANN.","PeriodicalId":169296,"journal":{"name":"2023 IEEE International Systems Conference (SysCon)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE International Systems Conference (SysCon)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SysCon53073.2023.10131086","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

This paper introduces a Continuous-Action Learning Automata (CALA) game optimizer that provides a generalized way to use a game of CALA agents to train Artificial Neural Networks (ANNs) and Deep ANNs. The method combines game theory with learning automata, making it computationally efficient compared with other gradient-free, backpropagation-free methods. Because the CALA game optimizer uses neither gradients nor backpropagation, issues such as the vanishing gradient problem do not arise, which allows activation functions such as sigmoid or tanh to be used even in a Deep ANN.
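
The abstract does not spell out the update rule, so the following is a minimal, illustrative sketch of how a continuous-action learning automaton could adjust ANN weights without gradients or backpropagation. It follows the standard CALA dynamics (each weight treated as an automaton sampling from a Gaussian and comparing reinforcement for the sampled action against the current mean), not the paper's specific game formulation; all function and parameter names (forward, reward, cala_step, lr, sigma_low, K) are assumptions for illustration only.

```python
import numpy as np

# Illustrative sketch only: standard CALA-style, gradient-free update applied to
# one tanh layer. The paper's actual game of CALA agents is not reproduced here.

rng = np.random.default_rng(0)

def forward(W, x):
    """Single tanh layer; stands in for the full network."""
    return np.tanh(W @ x)

def reward(W, x, y):
    """Bounded reinforcement signal in (0, 1] derived from squared error."""
    err = np.mean((forward(W, x) - y) ** 2)
    return 1.0 / (1.0 + err)

def cala_step(mu, sigma, x, y, lr=0.05, sigma_low=0.01, K=0.1):
    """One CALA update: sample weights, compare reward against the mean weights,
    and move each weight's Gaussian toward the better-performing action."""
    phi = np.maximum(sigma, sigma_low)           # keep exploration noise bounded below
    W_sample = mu + phi * rng.standard_normal(mu.shape)
    beta_x = reward(W_sample, x, y)              # reinforcement for the sampled action
    beta_mu = reward(mu, x, y)                   # reinforcement for the current mean
    z = (W_sample - mu) / phi                    # normalized perturbation
    mu = mu + lr * (beta_x - beta_mu) / phi * z
    sigma = sigma + lr * (beta_x - beta_mu) / phi * (z ** 2 - 1) - lr * K * (sigma - sigma_low)
    return mu, np.maximum(sigma, sigma_low)

# Toy usage: fit a 2-unit layer to a fixed input/target pair.
mu = np.zeros((2, 3))
sigma = np.full((2, 3), 0.5)
x_in, y_tgt = np.array([0.5, -0.2, 0.1]), np.array([0.3, -0.1])
for _ in range(500):
    mu, sigma = cala_step(mu, sigma, x_in, y_tgt)
```

Because the update relies only on sampled rewards, no derivative of the activation function is needed, which is why saturating activations such as sigmoid or tanh pose no vanishing-gradient issue under this kind of scheme.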