Enhancing AI-Bot Strength and Strategy Diversity in Adversarial Games: A Novel Deep Reinforcement Learning Framework

Authors: Chenglu Sun, Shuo Shen, Deyi Xue, Wenzhi Tao, Zixia Zhou
Journal: IEEE Transactions on Games, vol. 17, no. 2, pp. 522-535
Published: 2024-12-23
DOI: 10.1109/TG.2024.3520970
URL: https://ieeexplore.ieee.org/document/10812583/
Category: Computer Science (JCR Q3, Computer Science, Artificial Intelligence)
Citations: 0
Abstract
Deep reinforcement learning (DRL) has emerged as a leading technique for designing AI-bots in the gaming industry. However, the practical deployment of DRL-trained bots often encounters two significant challenges: improving strength and diversifying strategies to satisfy player expectations. We observe that the strength of AI-bots is intrinsically tied to the diversity of their emergent strategies. Building on this relationship, we introduce diversity is strength (DIS), a novel DRL training framework capable of concurrently training multiple types of AI-bots for adversarial games. These bots are interconnected through an elaborate history model pool (HMP) structure, which improves their strength and strategy diversity and thereby tackles the aforementioned challenges. We further devise a model evaluation and sampling scheme to form the HMP, identify superior models, and enrich the model strategies. DIS can generate diverse and reliable strategies without the need for human data. The method is validated by first-place finishes in two AI competitions based on complex adversarial games, including Google Research Football and Olympic Games. Experiments demonstrate that bots trained using DIS attain excellent performance and plentiful strategies. Specifically, diversity analysis demonstrates that the trained bots possess a wealth of strategies, and ablation studies confirm the beneficial impact of the designed modules on the training process.
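To make the history-model-pool idea concrete, the sketch below shows one plausible shape such a structure could take: past policy snapshots are kept in a bounded pool with running win-rate estimates, weak snapshots are evicted, and training opponents are sampled in proportion to estimated strength. The class name, eviction rule, and sampling scheme are illustrative assumptions; the paper's actual HMP evaluation and sampling design is more elaborate.

```python
import random


class HistoryModelPool:
    """Illustrative sketch of a history model pool (HMP).

    Snapshot identifiers are stored alongside running win-rate
    estimates; opponents are sampled with probability proportional
    to estimated strength. This is a hypothetical simplification,
    not the method described in the paper.
    """

    def __init__(self, capacity=10):
        self.capacity = capacity
        self.models = []     # snapshot identifiers (e.g. checkpoint tags)
        self.win_rates = []  # running win-rate of each snapshot vs. the pool

    def add(self, model_id, initial_win_rate=0.5):
        # Evict the weakest snapshot when the pool is full.
        if len(self.models) >= self.capacity:
            weakest = min(range(len(self.models)),
                          key=lambda i: self.win_rates[i])
            self.models.pop(weakest)
            self.win_rates.pop(weakest)
        self.models.append(model_id)
        self.win_rates.append(initial_win_rate)

    def record_result(self, model_id, won, lr=0.1):
        # Exponential moving average of match outcomes.
        i = self.models.index(model_id)
        self.win_rates[i] += lr * ((1.0 if won else 0.0) - self.win_rates[i])

    def sample_opponent(self, rng=random):
        # Strength-weighted sampling: stronger snapshots face the
        # learner more often, keeping training pressure high.
        if sum(self.win_rates) > 0:
            return rng.choices(self.models, weights=self.win_rates, k=1)[0]
        return rng.choice(self.models)
```

In a self-play loop, the learner would periodically `add` a checkpoint of itself, play matches against `sample_opponent()`, and feed outcomes back through `record_result`, so the pool's contents and weights evolve with training.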