Optimization Strategies for Atari Game Environments: Integrating Snake Optimization Algorithm and Energy Valley Optimization in Reinforcement Learning Models

AI · Published: 2024-07-17 · DOI: 10.3390/ai5030057
Sadeq Mohammed Kadhm Sarkhi, Hakan Koyuncu
{"title":"Optimization Strategies for Atari Game Environments: Integrating Snake Optimization Algorithm and Energy Valley Optimization in Reinforcement Learning Models","authors":"Sadeq Mohammed Kadhm Sarkhi, Hakan Koyuncu","doi":"10.3390/ai5030057","DOIUrl":null,"url":null,"abstract":"One of the biggest problems in gaming AI is related to how we can optimize and adapt a deep reinforcement learning (DRL) model, especially when it is running inside complex, dynamic environments like “PacMan”. The existing research has concentrated more or less on basic DRL approaches though the utilization of advanced optimization methods. This paper tries to fill these gaps by proposing an innovative methodology that combines DRL with high-level metaheuristic optimization methods. The work presented in this paper specifically refactors DRL models on the “PacMan” domain with Energy Serpent Optimizer (ESO) for hyperparameter search. These novel adaptations give a major performance boost to the AI agent, as these are where its adaptability, response time, and efficiency gains start actually showing in the more complex game space. This work innovatively incorporates the metaheuristic optimization algorithm into another field—DRL—for Atari gaming AI. This integration is essential for the improvement of DRL models in general and allows for more efficient and real-time game play. This work delivers a comprehensive empirical study for these algorithms that not only verifies their capabilities in practice but also sets a state of the art through the prism of AI-driven game development. More than simply improving gaming AI, the developments could eventually apply to more sophisticated gaming environments, ongoing improvement of algorithms during execution, real-time adaptation regarding learning, and likely even robotics/autonomous systems. This study further illustrates the necessity for even-handed and conscientious application of AI in gaming—specifically regarding questions of fairness and addiction.","PeriodicalId":503525,"journal":{"name":"AI","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/ai5030057","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

One of the biggest problems in gaming AI is how to optimize and adapt a deep reinforcement learning (DRL) model, especially when it runs inside complex, dynamic environments such as “PacMan”. Existing research has largely concentrated on basic DRL approaches without exploiting advanced optimization methods. This paper aims to fill that gap by proposing an innovative methodology that combines DRL with high-level metaheuristic optimization. Specifically, the work refactors DRL models in the “PacMan” domain, using the Energy Serpent Optimizer (ESO) for hyperparameter search. These adaptations give the AI agent a major performance boost: its gains in adaptability, response time, and efficiency become evident in the more complex game space. The work innovatively carries a metaheuristic optimization algorithm into another field, DRL, for Atari gaming AI. This integration is essential for improving DRL models in general and enables more efficient, real-time gameplay. The paper delivers a comprehensive empirical study of these algorithms that not only verifies their capabilities in practice but also establishes a state of the art for AI-driven game development. Beyond improving gaming AI, the developments could eventually extend to more sophisticated gaming environments, ongoing improvement of algorithms during execution, real-time adaptation of learning, and even robotics and autonomous systems. The study further underscores the need for even-handed and conscientious application of AI in gaming, specifically regarding questions of fairness and addiction.
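The abstract gives no implementation details for the ESO hyperparameter search, so the following is only a minimal sketch of the general pattern it describes: a population of hyperparameter candidates is repeatedly nudged toward the current best candidate, with added noise for exploration. The search space, the `step_toward` update rule, and the `fitness` function (a synthetic stand-in for the expensive step of training a DRL agent on Ms. Pac-Man and measuring its mean episode reward) are all hypothetical assumptions, not taken from the paper.

```python
import random

# Hypothetical search space over common DRL hyperparameters; the
# authors' actual space is not given in the abstract.
SPACE = {
    "learning_rate": (1e-5, 1e-2),
    "gamma":         (0.90, 0.999),
    "epsilon_decay": (0.990, 0.9999),
}

def random_candidate():
    """Sample one hyperparameter vector uniformly from the search space."""
    return {k: random.uniform(lo, hi) for k, (lo, hi) in SPACE.items()}

def fitness(params):
    """Stand-in for the expensive step: in the paper this would train a
    DRL agent with these hyperparameters and return its mean episode
    reward. A synthetic score is used here so the sketch runs without
    Atari or DRL dependencies."""
    return -((params["learning_rate"] - 1e-3) ** 2) * 1e6 + params["gamma"]

def step_toward(cand, leader, rate):
    """Move a candidate toward the current best (exploitation) with
    Gaussian jitter (exploration), a loose analogue of snake-style
    position updates; values are clamped back into the search space."""
    new = {}
    for k, (lo, hi) in SPACE.items():
        jitter = random.gauss(0.0, 0.1) * (hi - lo)
        moved = cand[k] + rate * (leader[k] - cand[k]) + jitter
        new[k] = min(hi, max(lo, moved))
    return new

def optimize(pop_size=8, generations=20, rate=0.5):
    """Population-based metaheuristic search over the hyperparameter space."""
    population = [random_candidate() for _ in range(pop_size)]
    best = max(population, key=fitness)
    for _ in range(generations):
        population = [step_toward(c, best, rate) for c in population]
        challenger = max(population, key=fitness)
        if fitness(challenger) > fitness(best):
            best = challenger
    return best

if __name__ == "__main__":
    print(optimize())
```

In an actual pipeline, `fitness` would be replaced by a full training-and-evaluation run of the DRL agent, which is why such searches are typically budgeted by a small number of generations and a small population.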