{"title":"Neural Network-Based Information Set Weighting for Playing Reconnaissance Blind Chess","authors":"Timo Bertram;Johannes Fürnkranz;Martin Müller","doi":"10.1109/TG.2024.3425803","DOIUrl":"10.1109/TG.2024.3425803","url":null,"abstract":"In imperfect information games, the game state is generally not fully observable to players. Therefore, good gameplay requires policies that deal with the different information that is hidden from each player. To combat this, effective algorithms often reason about information sets; the sets of all possible game states that are consistent with a player's observations. While there is no way to distinguish between the states within an information set, this property does not imply that all states are equally likely to occur in play. We extend previous research on assigning weights to the states in an information set in order to facilitate better gameplay in the imperfect information game of reconnaissance blind chess (RBC). For this, we train two different neural networks, which estimate the likelihood of each state in an information set from historical game data. Experimentally, we find that a Siamese neural network is able to achieve higher accuracy and is more efficient than a classical convolutional neural network for the given domain. Finally, we evaluate an RBC-playing agent that is based on the generated weightings and compare different parameter settings that influence how strongly it should rely on them. The resulting best player is ranked 5th on the public leaderboard.","PeriodicalId":55977,"journal":{"name":"IEEE Transactions on Games","volume":"16 4","pages":"960-970"},"PeriodicalIF":1.7,"publicationDate":"2024-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10592629","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141588601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Codeless3D: Design and Usability Evaluation of a Low-Code Tool for 3D Game Generation","authors":"Christina Volioti, Vasileios Martsis, Apostolos Ampatzoglou, Euclid Keramopoulos, Alexander Chatzigeorgiou","doi":"10.1109/tg.2024.3424894","DOIUrl":"https://doi.org/10.1109/tg.2024.3424894","url":null,"abstract":"","PeriodicalId":55977,"journal":{"name":"IEEE Transactions on Games","volume":"38 1","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141574661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bidding Efficiently in Simultaneous Ascending Auctions With Budget and Eligibility Constraints Using Simultaneous Move Monte Carlo Tree Search","authors":"Alexandre Pacaud;Aurelien Bechler;Marceau Coupechoux","doi":"10.1109/TG.2024.3424246","DOIUrl":"10.1109/TG.2024.3424246","url":null,"abstract":"For decades, simultaneous ascending auction (SAA) has been the most popular mechanism used for spectrum auctions. It has recently been employed by many countries for the allocation of 5G licences. Although SAA presents relatively simple rules, it induces a complex strategic game for which the optimal bidding strategy is unknown. Considering the fact that sometimes billions of euros are at stake in an SAA, establishing an efficient bidding strategy is crucial. In this work, we model the auction as a <inline-formula><tex-math>$n$</tex-math></inline-formula>-player simultaneous move game with complete information and propose the first efficient bidding algorithm that tackles simultaneously its four major strategic issues: the <italic>exposure problem</i>, the <italic>own price effect</i>, <italic>budget constraints</i>, and the <italic>eligibility management problem</i>. Our solution, called <inline-formula><tex-math>$text{SMS}^alpha$</tex-math></inline-formula>, is based on simultaneous move Monte Carlo Tree Search and relies on a new method for the prediction of closing prices. By introducing a new reward function in <inline-formula><tex-math>$SMS^alpha$</tex-math></inline-formula>, we give the possibility to bidders to define their own level of risk-aversion. Through extensive numerical experiments on instances of realistic size, we show that <inline-formula><tex-math>$text{SMS}^alpha$</tex-math></inline-formula> largely outperforms state-of-the-art algorithms, notably by achieving higher expected utility while taking less risks.","PeriodicalId":55977,"journal":{"name":"IEEE Transactions on Games","volume":"17 1","pages":"210-223"},"PeriodicalIF":1.7,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141574665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Investigating the Effect of Emotional Matching between Game and Background Music on Game Experience in a Valence–Arousal Space","authors":"JaeYoung Moon, EunHye Cho, Yeabon Jo, KyungJoong Kim, Eunsung Song","doi":"10.1109/tg.2024.3424459","DOIUrl":"https://doi.org/10.1109/tg.2024.3424459","url":null,"abstract":"","PeriodicalId":55977,"journal":{"name":"IEEE Transactions on Games","volume":"369 1","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141574667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring Gameplay and Learning in a Narrative-Centered Digital Game for Elementary Science Education","authors":"Seung Lee;Bradford Mott;Jessica Vandenberg;Hiller A. Spires;James Lester","doi":"10.1109/TG.2024.3424689","DOIUrl":"10.1109/TG.2024.3424689","url":null,"abstract":"Recent years have seen increased exploration of the transformative potential of digital games for K-12 education. Narrative-centered digital games for learning integrate complex problem solving within compelling interactive stories. By leveraging the inherent structure of narrative and the engaging interactions afforded by commercial game engines, narrative-centered digital games for learning engage students in situated learning activities. This article presents details on the iterative design and development of a narrative-centered digital game for learning that focuses on science education for fifth-grade students. We then explore how student gameplay and learning relate by leveraging interaction log data from over 700 students playing the game. Specifically, we analyze student gameplay achievements using clustering and examine how gameplay and learning outcomes differ among the groups identified. Furthermore, we investigate if gender has an effect on student learning within the groups and what gender differences are found within the groups. The findings show that students who complete more quests and earn better in-game rewards achieve higher learning gains, and while differences exist in game playing characteristics between males and females the learning outcomes are similar.","PeriodicalId":55977,"journal":{"name":"IEEE Transactions on Games","volume":"16 4","pages":"947-959"},"PeriodicalIF":1.7,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141574662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Markov Decision Process Based Artificial Intelligence with Card-Playing Strategy and Free-Playing Right Exploration for Four-Player Card Game Big2","authors":"Lien-Wu Chen, Yiou-Rwong Lu","doi":"10.1109/tg.2024.3424431","DOIUrl":"https://doi.org/10.1109/tg.2024.3424431","url":null,"abstract":"","PeriodicalId":55977,"journal":{"name":"IEEE Transactions on Games","volume":"55 1","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141574663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DanZero+: Dominating the GuanDan Game Through Reinforcement Learning","authors":"Youpeng Zhao;Yudong Lu;Jian Zhao;Wengang Zhou;Houqiang Li","doi":"10.1109/TG.2024.3422396","DOIUrl":"10.1109/TG.2024.3422396","url":null,"abstract":"Recent advancements have propelled artificial intelligence (AI) to showcase expertise in intricate card games, such as \u0000<italic>Mahjong</i>\u0000, \u0000<italic>DouDizhu</i>\u0000, and \u0000<italic>Texas Hold'em</i>\u0000. In this work, we aim to develop an AI program for an exceptionally complex and popular card game called \u0000<italic>GuanDan</i>\u0000. This game involves four players engaging in both competitive and cooperative play throughout a long process, posing great challenges for AI due to its expansive state and action space, long episode length, and complex rules. Employing reinforcement learning techniques, specifically deep Monte Carlo, and a distributed training framework, we first put forward an AI program named DanZero. Evaluation against baseline AI programs based on heuristic rules highlights the outstanding performance of our bot. Besides, in order to further enhance the AI's capabilities, we apply proximal policy optimization to \u0000<italic>GuanDan</i>\u0000 on the basis of Danzero. To address the challenges arising from the huge action space, which will significantly impact the performance of policy-based algorithms, we adopt the pretrained model to compress the action space and integrate action features into the model to bolster its generalization capabilities. Using these techniques, we manage to obtain a new \u0000<italic>GuanDan</i>\u0000 AI program DanZero+, which achieves a superior performance compared to DanZero.","PeriodicalId":55977,"journal":{"name":"IEEE Transactions on Games","volume":"16 4","pages":"914-926"},"PeriodicalIF":1.7,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141547879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Reinforcement Learning to Generate Levels of Super Mario Bros. With Quality and Diversity","authors":"SangGyu Nam;Chu-Hsuan Hsueh;Pavinee Rerkjirattikal;Kokolo Ikeda","doi":"10.1109/TG.2024.3416472","DOIUrl":"10.1109/TG.2024.3416472","url":null,"abstract":"Procedural content generation (PCG) is essential in game development, automating content creation to meet various criteria such as playability, diversity, and quality. This article leverages reinforcement learning (RL) for PCG to generate \u0000<italic>Super Mario Bros.</i>\u0000 levels. We formulate the problem into a Markov decision process (MDP), with rewards defined using player enjoyment-based evaluation functions. Challenges in level representation and difficulty assessment are addressed by conditional generative adversarial networks and human-like artificial intelligence agents that mimic aspects of human input inaccuracies. This ensures that the generated levels are appropriately challenging from human perspectives. Furthermore, we enhance content quality through virtual simulation, which assigns rewards to intermediate actions to address a credit assignment problem. We also ensure diversity through a diversity-aware greedy policy, which chooses not-bad-but-distant actions based on \u0000<inline-formula><tex-math>$Q$</tex-math></inline-formula>\u0000-values. These processes ensure the production of diverse and high-quality \u0000<italic>Super Mario</i>\u0000 levels. Human subject evaluations revealed that levels generated from our approach exhibit natural connection, appropriate difficulty, nonmonotony, and diversity, highlighting the effectiveness of our proposed methods. The novelty of our work lies in the innovative solutions we propose to address challenges encountered in employing the PCG via RL method in \u0000<italic>Super Mario Bros.</i>\u0000, contributing to the field of PCG for game development.","PeriodicalId":55977,"journal":{"name":"IEEE Transactions on Games","volume":"16 4","pages":"807-820"},"PeriodicalIF":1.7,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141944751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Computational Intelligence Society Information","authors":"","doi":"10.1109/TG.2024.3409129","DOIUrl":"https://doi.org/10.1109/TG.2024.3409129","url":null,"abstract":"","PeriodicalId":55977,"journal":{"name":"IEEE Transactions on Games","volume":"16 2","pages":"C3-C3"},"PeriodicalIF":2.3,"publicationDate":"2024-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10559941","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141334025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}