A hyper-heuristic enhanced neuro-evolutionary algorithm with self-adaptive operators and various activation functions for classification problems

IF 6.3 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Fehmi Burcin Ozsoydan, İlker Gölcük, Esra Duygu Durmaz
{"title":"A hyper-heuristic enhanced neuro-evolutionary algorithm with self-adaptive operators and various activation functions for classification problems","authors":"Fehmi Burcin Ozsoydan ,&nbsp;İlker Gölcük ,&nbsp;Esra Duygu Durmaz","doi":"10.1016/j.neunet.2025.107751","DOIUrl":null,"url":null,"abstract":"<div><div>Due to their remarkable generalization capabilities, Artificial Neural Networks (ANNs) grab attention of researchers and practitioners. ANNs have two main stages, namely training and testing. The training stage aims to find optimum synapse values. Traditional gradient-descent-based approaches offer notable advantages in training ANNs, nevertheless, they are exposed to some limitations such as convergence and local minima issues. Therefore, stochastic search algorithms are commonly employed. In this regard, the present study adopts and further extends the evolutionary search strategies namely, the Differential Evolution (DE) algorithm and the Self-Adaptive Differential Evolution (SaDE) Algorithm in training ANNs. Accordingly, self-adaptation procedures are modified to perform a more convenient search and to avoid local optima first. Secondarily, mutation operations in these algorithms are reinforced by a hyper-heuristic framework, which selects low-level heuristics out of a heuristics pool, based on their achievements throughout search. Thus, to achieve better performance, while promising mechanisms are more frequently invoked by the proposed approach, naïve operators are less frequently invoked. This implicitly avoids greedy behavior in selecting low-level heuristics and attempts to overcome loss-of-diversity and local optima issues. Moreover, due to possible complexity and nonlinearity in mapping between inputs and outputs, the proposed method is also tested by using various activation functions. Thus, a hyper-heuristic enhanced neuro-evolutionary algorithm with self-adaptive operators is introduced. Finally, the performances of all used methods with various activation functions are tested on the well-known classification problems. Statistically verified computational results point out significant differences among the algorithms and used activation functions.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"190 ","pages":"Article 107751"},"PeriodicalIF":6.3000,"publicationDate":"2025-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608025006318","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Owing to their remarkable generalization capabilities, Artificial Neural Networks (ANNs) have attracted the attention of researchers and practitioners alike. ANNs involve two main stages, namely training and testing. The training stage aims to find optimal synapse (weight) values. Traditional gradient-descent-based approaches offer notable advantages in training ANNs; nevertheless, they suffer from limitations such as slow convergence and entrapment in local minima. Therefore, stochastic search algorithms are commonly employed. In this regard, the present study adopts and further extends two evolutionary search strategies, the Differential Evolution (DE) algorithm and the Self-Adaptive Differential Evolution (SaDE) algorithm, for training ANNs. First, the self-adaptation procedures are modified to perform a more effective search and to avoid local optima. Second, the mutation operations in these algorithms are reinforced by a hyper-heuristic framework, which selects low-level heuristics from a heuristics pool based on their performance throughout the search. Thus, promising mechanisms are invoked more frequently by the proposed approach, while naïve operators are invoked less frequently. This implicitly avoids greedy behavior in selecting low-level heuristics and helps overcome loss-of-diversity and local-optima issues. Moreover, because the mapping between inputs and outputs can be complex and nonlinear, the proposed method is also tested with various activation functions. The result is a hyper-heuristic enhanced neuro-evolutionary algorithm with self-adaptive operators. Finally, the performance of all methods, with various activation functions, is evaluated on well-known classification problems. Statistically verified computational results reveal significant differences among the algorithms and the activation functions used.
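As a rough illustration of the ideas the abstract describes (and not the authors' implementation), the sketch below trains a one-hidden-layer ANN classifier with a DE-style loop in which two low-level mutation operators (DE/rand/1 and DE/best/1) are selected by a roulette wheel over their accumulated successes, i.e., a minimal credit-based hyper-heuristic, and the whole procedure is rerun with different activation functions. The toy dataset, the two-operator pool, the credit rule, and all hyperparameters (`pop_size`, `F`, `CR`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy two-class dataset (a stand-in for the paper's benchmark problems).
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(1.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

n_in, n_hid, n_out = 2, 5, 2
dim = n_in * n_hid + n_hid + n_hid * n_out + n_out  # length of a flat weight vector

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(w, act):
    """Decode a flat weight vector into a one-hidden-layer ANN and run it on X."""
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = w[i:i + n_out]
    return act(X @ W1 + b1) @ W2 + b2  # linear output layer; argmax picks the class

def fitness(w, act):
    """Misclassification rate: the objective the evolutionary search minimizes."""
    return np.mean(np.argmax(forward(w, act), axis=1) != y)

# Pool of low-level DE mutation heuristics for the hyper-heuristic layer to pick from.
def rand1(pop, best, F):          # DE/rand/1
    a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
    return a + F * (b - c)

def best1(pop, best, F):          # DE/best/1
    a, b = pop[rng.choice(len(pop), 2, replace=False)]
    return best + F * (a - b)

heuristics = [rand1, best1]

def train(act=np.tanh, pop_size=30, gens=200, F=0.5, CR=0.9):
    pop = rng.normal(0.0, 1.0, (pop_size, dim))
    fit = np.array([fitness(w, act) for w in pop])
    credit = np.ones(len(heuristics))     # one pseudo-success each, so no operator starves
    for _ in range(gens):
        best = pop[fit.argmin()]
        probs = credit / credit.sum()     # roulette wheel over accumulated operator credits
        for i in range(pop_size):
            k = rng.choice(len(heuristics), p=probs)
            v = heuristics[k](pop, best, F)
            mask = rng.random(dim) < CR   # binomial crossover, as in canonical DE
            mask[rng.integers(dim)] = True
            trial = np.where(mask, v, pop[i])
            f_trial = fitness(trial, act)
            if f_trial <= fit[i]:         # greedy one-to-one replacement
                pop[i], fit[i] = trial, f_trial
                credit[k] += 1.0          # reward the operator that produced the improvement
        # SaDE would additionally resample F and CR from distributions learned online.
    return pop[fit.argmin()], fit.min()

# Rerun the same search with different activation functions, as the study does.
for name, act in [("tanh", np.tanh), ("sigmoid", sigmoid),
                  ("relu", lambda z: np.maximum(z, 0.0))]:
    _, err = train(act)
    print(f"{name}: training error = {err:.3f}")
```

Keeping the operator credits inside `train` ties the hyper-heuristic's learning to a single run; the same roulette-wheel selection extends directly to a larger pool of low-level heuristics.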
Source journal
Neural Networks
Engineering &amp; Technology – Computer Science: Artificial Intelligence
CiteScore
13.90
Self-citation rate
7.70%
Articles published
425
Review time
67 days
Journal description: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.