{"title":"Foxtsage vs. Adam:优化的革命还是进化?","authors":"Sirwan Abdolwahed Aula , Tarik Ahmed Rashid","doi":"10.1016/j.cogsys.2025.101373","DOIUrl":null,"url":null,"abstract":"<div><div>Optimisation techniques are crucial in neural network training, influencing predictive performance, convergence efficiency, and computational feasibility. Traditional Optimisers such as Adam offer adaptive learning rates but struggle with convergence stability and hyperparameter sensitivity, whereas SGD provides stability but lacks adaptiveness. We propose Foxtsage, a novel hybrid optimisation algorithm that integrates the FOX-TSA (for global search and exploration) with SGD (for fine-tuned local exploitation) to address these limitations. The proposed Foxtsage Optimiser is benchmarked against the widely used Adam Optimiser across multiple standard datasets, including MNIST, IMDB, and CIFAR-10. Performance is evaluated based on training loss, accuracy, precision, recall, F1-score, and computational time. The study further explores computational complexity and the trade-off between optimisation performance and efficiency. Experimental findings demonstrate that Foxtsage achieves a 42.03% reduction in mean loss (Foxtsage: 9.508, Adam: 16.402) and a 42.19% decrease in loss standard deviation (Foxtsage: 20.86, Adam: 36.085), indicating greater consistency and robustness in optimisation. Additionally, modest improvements are observed in accuracy (0.78%), precision (0.91%), recall (1.02%), and F1-score (0.89%), showcasing better generalization capability. However, these gains come at a significant computational cost, with a 330.87% increase in time mean (Foxtsage: 39.541 sec, Adam: 9.177 sec), raising concerns about practical feasibility in time-sensitive applications. By effectively combining FOX-TSA’s global search power with SGD’s adaptive stability, Foxtsage provides a promising alternative for neural network training. While it enhances performance and robustness, its increased computational overhead presents a critical trade-off. Future work will focus on reducing computational complexity, improving scalability, and exploring its applicability in real-world deep-learning tasks.</div></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"92 ","pages":"Article 101373"},"PeriodicalIF":2.1000,"publicationDate":"2025-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Foxtsage vs. Adam: Revolution or evolution in optimization?\",\"authors\":\"Sirwan Abdolwahed Aula , Tarik Ahmed Rashid\",\"doi\":\"10.1016/j.cogsys.2025.101373\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Optimisation techniques are crucial in neural network training, influencing predictive performance, convergence efficiency, and computational feasibility. Traditional Optimisers such as Adam offer adaptive learning rates but struggle with convergence stability and hyperparameter sensitivity, whereas SGD provides stability but lacks adaptiveness. We propose Foxtsage, a novel hybrid optimisation algorithm that integrates the FOX-TSA (for global search and exploration) with SGD (for fine-tuned local exploitation) to address these limitations. The proposed Foxtsage Optimiser is benchmarked against the widely used Adam Optimiser across multiple standard datasets, including MNIST, IMDB, and CIFAR-10. Performance is evaluated based on training loss, accuracy, precision, recall, F1-score, and computational time. 
The study further explores computational complexity and the trade-off between optimisation performance and efficiency. Experimental findings demonstrate that Foxtsage achieves a 42.03% reduction in mean loss (Foxtsage: 9.508, Adam: 16.402) and a 42.19% decrease in loss standard deviation (Foxtsage: 20.86, Adam: 36.085), indicating greater consistency and robustness in optimisation. Additionally, modest improvements are observed in accuracy (0.78%), precision (0.91%), recall (1.02%), and F1-score (0.89%), showcasing better generalization capability. However, these gains come at a significant computational cost, with a 330.87% increase in time mean (Foxtsage: 39.541 sec, Adam: 9.177 sec), raising concerns about practical feasibility in time-sensitive applications. By effectively combining FOX-TSA’s global search power with SGD’s adaptive stability, Foxtsage provides a promising alternative for neural network training. While it enhances performance and robustness, its increased computational overhead presents a critical trade-off. Future work will focus on reducing computational complexity, improving scalability, and exploring its applicability in real-world deep-learning tasks.</div></div>\",\"PeriodicalId\":55242,\"journal\":{\"name\":\"Cognitive Systems Research\",\"volume\":\"92 \",\"pages\":\"Article 101373\"},\"PeriodicalIF\":2.1000,\"publicationDate\":\"2025-05-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cognitive Systems Research\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1389041725000531\",\"RegionNum\":3,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Systems Research","FirstCategoryId":"102","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1389041725000531","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Foxtsage vs. Adam: Revolution or evolution in optimization?
Optimisation techniques are crucial in neural network training, influencing predictive performance, convergence efficiency, and computational feasibility. Traditional optimisers such as Adam offer adaptive learning rates but struggle with convergence stability and hyperparameter sensitivity, whereas SGD provides stability but lacks adaptiveness. We propose Foxtsage, a novel hybrid optimisation algorithm that integrates FOX-TSA (for global search and exploration) with SGD (for fine-tuned local exploitation) to address these limitations. The proposed Foxtsage optimiser is benchmarked against the widely used Adam optimiser across multiple standard datasets, including MNIST, IMDB, and CIFAR-10. Performance is evaluated on training loss, accuracy, precision, recall, F1-score, and computational time. The study further explores computational complexity and the trade-off between optimisation performance and efficiency. Experimental findings demonstrate that Foxtsage achieves a 42.03% reduction in mean loss (Foxtsage: 9.508, Adam: 16.402) and a 42.19% decrease in loss standard deviation (Foxtsage: 20.86, Adam: 36.085), indicating greater consistency and robustness in optimisation. Additionally, modest improvements are observed in accuracy (0.78%), precision (0.91%), recall (1.02%), and F1-score (0.89%), indicating better generalisation capability. However, these gains come at a significant computational cost, with a 330.87% increase in mean training time (Foxtsage: 39.541 sec, Adam: 9.177 sec), raising concerns about practical feasibility in time-sensitive applications. By combining FOX-TSA's global search power with SGD's stable local updates, Foxtsage provides a promising alternative for neural network training. While it enhances performance and robustness, its increased computational overhead presents a critical trade-off. Future work will focus on reducing computational complexity, improving scalability, and exploring its applicability in real-world deep-learning tasks.
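The abstract describes a two-phase design: a population-based global search (FOX-TSA) locates a promising region of the weight space, after which SGD performs cheap local refinement. The exact FOX-TSA update rules are not given in the abstract, so the sketch below is only illustrative: the exploration phase uses a generic random-perturbation population search as a stand-in, and all function and parameter names (hybrid_optimise, pop_size, explore_steps, and so on) are hypothetical, not the authors' implementation.

```python
# Illustrative sketch only: the exploration phase is a generic population search
# standing in for FOX-TSA, whose update rules are not stated in the abstract.
import numpy as np

rng = np.random.default_rng(0)

def loss(w, X, y):
    """Mean squared error of a linear model; stands in for a network's training loss."""
    return np.mean((X @ w - y) ** 2)

def grad(w, X, y):
    """Gradient of the MSE loss with respect to the weights."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def hybrid_optimise(X, y, pop_size=20, explore_steps=30, sgd_steps=200, lr=0.05):
    dim = X.shape[1]
    # Phase 1 -- global exploration: evolve a population of candidate weight vectors
    # (a placeholder for the fox-inspired FOX-TSA search described in the paper).
    population = rng.normal(scale=3.0, size=(pop_size, dim))
    for _ in range(explore_steps):
        scores = np.array([loss(w, X, y) for w in population])
        best = population[scores.argmin()]
        # Pull each candidate toward the current best and add exploration noise.
        population = best + 0.5 * (population - best) + rng.normal(scale=0.3, size=population.shape)
    scores = np.array([loss(w, X, y) for w in population])
    w = population[scores.argmin()].copy()

    # Phase 2 -- local exploitation: SGD-style gradient steps (here full-batch)
    # refine the best candidate found by the global search.
    for _ in range(sgd_steps):
        w -= lr * grad(w, X, y)
    return w

# Toy regression problem to exercise the two-phase loop.
X = rng.normal(size=(256, 5))
true_w = np.array([1.5, -2.0, 0.7, 3.0, -1.1])
y = X @ true_w + 0.1 * rng.normal(size=256)

w_hat = hybrid_optimise(X, y)
print("final loss:", round(loss(w_hat, X, y), 4))
```

The structure also makes the reported cost trade-off concrete: every exploration step evaluates the loss for the whole population, so the global phase multiplies the number of loss evaluations relative to a single-trajectory optimiser such as Adam, which is consistent with the roughly fourfold increase in mean training time reported above.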
Journal description:
Cognitive Systems Research is dedicated to the study of human-level cognition. As such, it welcomes papers which advance the understanding, design and applications of cognitive and intelligent systems, both natural and artificial.
The journal brings together a broad community studying cognition in its many facets in vivo and in silico, across the developmental spectrum, focusing on individual capacities or on entire architectures. It aims to foster debate and integrate ideas, concepts, constructs, theories, models and techniques from across different disciplines and different perspectives on human-level cognition. The scope of interest includes the study of cognitive capacities and architectures - both brain-inspired and non-brain-inspired - and the application of cognitive systems to real-world problems as far as it offers insights relevant for the understanding of cognition.
Cognitive Systems Research therefore welcomes mature and cutting-edge research approaching cognition from a systems-oriented perspective, both theoretical and empirically-informed, in the form of original manuscripts, short communications, opinion articles, systematic reviews, and topical survey articles from the fields of Cognitive Science (including Philosophy of Cognitive Science), Artificial Intelligence/Computer Science, Cognitive Robotics, Developmental Science, Psychology, and Neuroscience and Neuromorphic Engineering. Empirical studies will be considered if they are supplemented by theoretical analyses and contributions to theory development and/or computational modelling studies.