IEEE Transactions on Games: Latest Articles

BaziGooshi: A Hybrid Model of Reinforcement Learning for Generalization in Gameplay
IF 1.7, Q4 (Computer Science)
IEEE Transactions on Games Pub Date: 2024-01-31 DOI: 10.1109/TG.2024.3355172
Sara Karimi;Sahar Asadi;Amir H. Payberah
{"title":"BaziGooshi: A Hybrid Model of Reinforcement Learning for Generalization in Gameplay","authors":"Sara Karimi;Sahar Asadi;Amir H. Payberah","doi":"10.1109/TG.2024.3355172","DOIUrl":"10.1109/TG.2024.3355172","url":null,"abstract":"While reinforcement learning (RL) is gaining popularity in gameplay, creating a generalized RL model is still challenging. This study presents \u0000<sc>BaziGooshi</small>\u0000, a generalized RL solution for games, focusing on two different types of games: 1) a puzzle game \u0000<italic>Candy Crush Friends Saga</i>\u0000 and 2) a platform game \u0000<italic>Sonic the Hedgehog Genesis</i>\u0000. \u0000<sc>BaziGooshi</small>\u0000 rewards RL agents for mastering a set of intrinsic basic skills as well as achieving the game objectives. The solution includes a hybrid model that takes advantage of a combination of several agents pretrained using intrinsic or extrinsic rewards to determine the actions. We propose an RL-based method for assigning weights to the pretrained agents. Through experiments, we show that the RL-based approach improves generalization to unseen levels, and \u0000<sc>BaziGooshi</small>\u0000 surpasses the performance of most of the defined baselines in both games. Also, we perform additional experiments to investigate further the impacts of using intrinsic rewards and the effects of using different combinations in the proposed hybrid models.","PeriodicalId":55977,"journal":{"name":"IEEE Transactions on Games","volume":"16 3","pages":"722-734"},"PeriodicalIF":1.7,"publicationDate":"2024-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139953755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
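The abstract describes a hybrid model in which several pretrained agents each propose an action distribution and per-agent weights decide how the proposals are combined. Below is a minimal Python sketch of that combination step under stated assumptions: the class names and random stand-in policies are illustrative, and the weights are fixed here whereas the paper learns them with an RL method, so this is not the authors' implementation.

```python
# Sketch: mixing pretrained agents' action distributions with per-agent
# weights. Hypothetical names; the paper's actual architecture may differ.
import numpy as np

class PretrainedAgent:
    """Stand-in for an agent pretrained with intrinsic or extrinsic rewards."""
    def __init__(self, n_actions, seed):
        self.rng = np.random.default_rng(seed)
        self.n_actions = n_actions

    def action_probs(self, obs):
        # A real agent would run its policy network on `obs`.
        logits = self.rng.normal(size=self.n_actions)
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()

class HybridPolicy:
    """Combines the agents' distributions; the weights are fixed here,
    whereas BaziGooshi assigns them with a learned RL policy."""
    def __init__(self, agents, weights):
        self.agents = agents
        self.weights = np.asarray(weights, dtype=float)
        self.weights /= self.weights.sum()

    def act(self, obs):
        mixed = sum(w * a.action_probs(obs)
                    for w, a in zip(self.weights, self.agents))
        return int(np.argmax(mixed))

agents = [PretrainedAgent(n_actions=8, seed=s) for s in range(3)]
policy = HybridPolicy(agents, weights=[0.5, 0.3, 0.2])
print(policy.act(obs=None))  # index of the highest-weighted mixed action
```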
Rulebook: An Architectural Pattern for Self-Amending Mechanics in Digital Games
IF 1.7, Q4 (Computer Science)
IEEE Transactions on Games Pub Date: 2024-01-29 DOI: 10.1109/TG.2024.3359439
Wilson Kazuo Mizutani;Fabio Kon
{"title":"Rulebook: An Architectural Pattern for Self-Amending Mechanics in Digital Games","authors":"Wilson Kazuo Mizutani;Fabio Kon","doi":"10.1109/TG.2024.3359439","DOIUrl":"10.1109/TG.2024.3359439","url":null,"abstract":"Mechanics are one of the pillars of gameplay, enabled by the underlying implementation of the game and subject to constant changes during development. In particular, self-amending mechanics adjust themselves dynamically and are a common source of coupled code. The \u0000<italic>Rulebook</i>\u0000 is an architectural pattern that generalizes how developers prevent coupled code in self-amending mechanics, based on a careful research process including a systematic literature review, semistructured interviews with professional developers, and quasi-experiments. The pattern codifies changes to the game state as “effect” objects, which it matches against a dynamic pool of rules. Each rule may amend, resolve, or chain effects. By preventing the control flow of the game from becoming coupled to the specific interactions of mechanics while also promoting an extensible and flexible structure for self-amendment, our solution reduces the time developers need to iterate on the design of mechanics. This article details the \u0000<italic>Rulebook</i>\u0000 pattern and presents a case study demonstrating its design process in three different implementations of open-source jam games. Together with the typification of self-amending mechanics, this article formalizes a novel, state-of-the-art toolset for architecting games.","PeriodicalId":55977,"journal":{"name":"IEEE Transactions on Games","volume":"16 3","pages":"711-721"},"PeriodicalIF":1.7,"publicationDate":"2024-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139953669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
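The abstract gives the pattern's core shape: changes to the game state become "effect" objects, which are matched against a runtime-extensible pool of rules, and each matching rule may amend, resolve, or chain effects. The sketch below illustrates that flow in Python; the class and method names are assumptions for illustration, not the authors' API.

```python
# Sketch of the Rulebook idea: effects flow through a dynamic rule pool.
class Effect:
    def __init__(self, kind, payload):
        self.kind = kind
        self.payload = payload

class Rulebook:
    def __init__(self):
        self.rules = []  # dynamic pool: rules can be added or removed at runtime

    def add_rule(self, matches, apply):
        self.rules.append((matches, apply))

    def resolve(self, effect, state):
        queue = [effect]
        while queue:  # rules may chain new effects onto the queue
            current = queue.pop(0)
            for matches, apply in self.rules:
                if matches(current):
                    queue.extend(apply(current, state) or [])

# Usage: an amending rule halves raw damage, then a resolving rule applies it.
book = Rulebook()
state = {"hp": 20}
book.add_rule(lambda e: e.kind == "raw_damage",
              lambda e, s: [Effect("damage", {"amount": e.payload["amount"] // 2})])

def apply_damage(e, s):  # resolving rule: mutates state, chains nothing
    s["hp"] -= e.payload["amount"]

book.add_rule(lambda e: e.kind == "damage", apply_damage)
book.resolve(Effect("raw_damage", {"amount": 8}), state)
print(state)  # {'hp': 16}
```

The point of the indirection is that the game's control flow only ever calls `resolve`; self-amending behavior lives entirely in the rule pool, which can change while the game runs.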
Countdown VR: A Serious Game in Virtual Reality to Develop Mental Computation Skills
IF 2.3, Q4 (Computer Science)
IEEE Transactions on Games Pub Date: 2024-01-23 DOI: 10.1109/tg.2024.3357452
Hubert Cecotti, Mathilde Leray, Michael Callaghan
{"title":"Countdown VR: a Serious Game in Virtual Reality to Develop Mental Computation Skills","authors":"Hubert Cecotti, Mathilde Leray, Michael Callaghan","doi":"10.1109/tg.2024.3357452","DOIUrl":"https://doi.org/10.1109/tg.2024.3357452","url":null,"abstract":"","PeriodicalId":55977,"journal":{"name":"IEEE Transactions on Games","volume":"174 1","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139953663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Systematic Review of Model-Driven Game Development Studies
IF 2.3, Q4 (Computer Science)
IEEE Transactions on Games Pub Date: 2024-01-19 DOI: 10.1109/tg.2024.3356408
Amirreza Payandeh, Mohammadreza Sharbaf, Shekoufeh Kolahdouz Rahimi
{"title":"A Systematic Review of Model-Driven Game Development Studies","authors":"Amirreza Payandeh, Mohammadreza Sharbaf, Shekoufeh Kolahdouz Rahimi","doi":"10.1109/tg.2024.3356408","DOIUrl":"https://doi.org/10.1109/tg.2024.3356408","url":null,"abstract":"","PeriodicalId":55977,"journal":{"name":"IEEE Transactions on Games","volume":"10 1","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139953936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Searching Bug Instances in Gameplay Video Repositories
IF 1.7, Q4 (Computer Science)
IEEE Transactions on Games Pub Date: 2024-01-17 DOI: 10.1109/TG.2024.3355285
Mohammad Reza Taesiri;Finlay Macklon;Sarra Habchi;Cor-Paul Bezemer
{"title":"Searching Bug Instances in Gameplay Video Repositories","authors":"Mohammad Reza Taesiri;Finlay Macklon;Sarra Habchi;Cor-Paul Bezemer","doi":"10.1109/TG.2024.3355285","DOIUrl":"10.1109/TG.2024.3355285","url":null,"abstract":"Gameplay videos offer valuable insights into player interactions and game responses, particularly data about game bugs. Despite the abundance of gameplay videos online, extracting useful information remains a challenge. This article introduces a method for searching and extracting relevant videos from extensive video repositories using English text queries. Our approach requires no external information, like video metadata; it solely depends on video content. Leveraging the zero-shot transfer capabilities of the contrastive language–image pretraining model, our approach does not require any data labeling or training. To evaluate our approach, we present the \u0000<monospace>GamePhysics</monospace>\u0000 dataset, comprising 26 954 videos from 1873 games that were collected from the \u0000<uri>GamePhysics</uri>\u0000 section on the Reddit website. Our approach shows promising results in our extensive analysis of simple and compound queries, indicating that our method is useful for detecting objects and events in gameplay videos. Moreover, we assess the effectiveness of our method by analyzing a carefully annotated dataset of 220 gameplay videos. The results of our study demonstrate the potential of our approach for applications, such as the creation of a video search tool tailored to identifying video game bugs, which could greatly benefit quality assurance teams in finding and reproducing bugs.","PeriodicalId":55977,"journal":{"name":"IEEE Transactions on Games","volume":"16 3","pages":"697-710"},"PeriodicalIF":1.7,"publicationDate":"2024-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139953662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
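The retrieval step sketched in the abstract, matching an English query against raw video content with zero-shot CLIP embeddings, reduces to cosine similarity between text and frame embeddings. The following is a minimal illustration of that general technique, not the authors' pipeline; it assumes the openai/CLIP package and frames already extracted from each video.

```python
# Sketch: rank gameplay videos against a text query via CLIP embeddings.
import torch
import clip  # openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def score_video(frame_paths, query):
    """Score a video by its best frame-to-query cosine similarity."""
    text = clip.tokenize([query]).to(device)
    images = torch.stack([preprocess(Image.open(p))
                          for p in frame_paths]).to(device)
    with torch.no_grad():
        img_emb = model.encode_image(images)
        txt_emb = model.encode_text(text)
        img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
        txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
        sims = (img_emb @ txt_emb.T).squeeze(1)
    return sims.max().item()  # a video matches if any frame matches

# Usage over a repository (frames: dict of video id -> list of frame paths):
# ranked = sorted(frames, reverse=True,
#                 key=lambda v: score_video(frames[v], "a horse in the air"))
```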
Piecing Together Performance: Collaborative, Participatory Research-Through-Design for Better Diversity in Games
IF 1.7, Q4 (Computer Science)
IEEE Transactions on Games Pub Date: 2024-01-02 DOI: 10.1109/TG.2023.3349369
Daniel L. Gardner;LouAnne Boyd;Reginald T. Gardner
{"title":"Piecing Together Performance: Collaborative, Participatory Research-Through-Design for Better Diversity in Games","authors":"Daniel L. Gardner;LouAnne Boyd;Reginald T. Gardner","doi":"10.1109/TG.2023.3349369","DOIUrl":"10.1109/TG.2023.3349369","url":null,"abstract":"Digital games are a multi-billion-dollar industry whose production and consumption extend globally. Representation in games is an increasingly important topic. As those who create and consume the medium grow ever more diverse, it is essential that player or user-experience research, usability, and any consideration of how people interface with their technology are exercised through inclusive and intersectional lenses. Previous research has identified how character configuration interfaces preface white-male defaults (Gardner and Tanenbaum 2018), (Gardner and Tanenbaum 2021), and (Mastro and Behm-Morawitz 2005). This study relies on 1-on-1 play interviews where diverse participants attempt to create “themselves” in a series of games and on group design activities to explore how participants may envision more inclusive character configuration interface design. Our interview findings describe specific points of tension in the process of creating characters in existing interfaces and the sketches participants–collaborators produced to challenge the homogeneity of current interface designs. This project amplifies the perspective of diverse participants–collaborators to provide constructive implications and a series of \u0000<italic>principles</i>\u0000 for designing more inclusive character configuration interfaces, which support more diverse stories and gameworlds by reconfiguring the constraints that shape those stories and gameworlds.","PeriodicalId":55977,"journal":{"name":"IEEE Transactions on Games","volume":"16 3","pages":"683-696"},"PeriodicalIF":1.7,"publicationDate":"2024-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139953765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Video-Based Engagement Estimation of Game Streamers: An Interpretable Multimodal Neural Network Approach
IF 2.3, Q4 (Computer Science)
IEEE Transactions on Games Pub Date: 2023-12-29 DOI: 10.1109/tg.2023.3348230
Sicheng Pan, Gary J.W. Xu, Kun Guo, Seop Hyeong Park, Hongliang Ding
{"title":"Video-Based Engagement Estimation of Game Streamers: An Interpretable Multimodal Neural Network Approach","authors":"Sicheng Pan, Gary J.W. Xu, Kun Guo, Seop Hyeong Park, Hongliang Ding","doi":"10.1109/tg.2023.3348230","DOIUrl":"https://doi.org/10.1109/tg.2023.3348230","url":null,"abstract":"In this paper, we propose a non-intrusive and nonrestrictive multimodal deep learning model for estimating the engagement levels of game streamers. We incorporate three modalities from the streamers' videos (facial, pixel, and audio information) to train the multimodal neural network. Additionally, we introduce a novel interpretation technique that directly calculates the contribution of each modality to the model's classification performance without the need to retrain single modality models. Experimental results demonstrate that our model achieves an accuracy of 77.2% on the test set, with the sound modality identified as a key modality for engagement estimation. By utilizing the proposed interpretation technique, we further analyze the modality contributions of the model in handling different categories and samples from various players. This enhances the model's interpretability and reveals its limitations, as well as future directions for improvement. The proposed approach and findings have potential applications in the fields of game streaming and audience analysis, as well as in domains related to multimodal learning and affective computing.","PeriodicalId":55977,"journal":{"name":"IEEE Transactions on Games","volume":"7 1","pages":""},"PeriodicalIF":2.3,"publicationDate":"2023-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142218211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
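The abstract states that modality contributions are computed without retraining single-modality models but does not spell out the formula, so the sketch below uses a common stand-in: zero out one modality's features at inference time and measure the accuracy drop. The toy fusion network, feature dimensions, and ablation-by-zeroing choice are all assumptions for illustration, not the paper's technique.

```python
# Sketch: estimate a modality's contribution by inference-time ablation.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Toy late-fusion classifier over facial, pixel, and audio features."""
    def __init__(self, dims, n_classes=2):
        super().__init__()
        self.order = sorted(dims)  # fixed concatenation order
        self.head = nn.Linear(sum(dims.values()), n_classes)

    def forward(self, feats):  # feats: dict of modality -> (batch, dim) tensor
        return self.head(torch.cat([feats[m] for m in self.order], dim=-1))

def modality_contribution(model, feats, labels, modality):
    """Accuracy drop when `modality` is zeroed out; no retraining needed."""
    ablated = {m: torch.zeros_like(x) if m == modality else x
               for m, x in feats.items()}
    def acc(f):
        with torch.no_grad():
            return (model(f).argmax(-1) == labels).float().mean().item()
    return acc(feats) - acc(ablated)

dims = {"face": 32, "pixel": 64, "audio": 16}
model = FusionNet(dims)
feats = {m: torch.randn(8, d) for m, d in dims.items()}
labels = torch.randint(0, 2, (8,))
print(modality_contribution(model, feats, labels, "audio"))
```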
Latent Combinational Game Design
IF 1.7, Q4 (Computer Science)
IEEE Transactions on Games Pub Date: 2023-12-25 DOI: 10.1109/TG.2023.3346331
Anurag Sarkar;Seth Cooper
{"title":"Latent Combinational Game Design","authors":"Anurag Sarkar;Seth Cooper","doi":"10.1109/TG.2023.3346331","DOIUrl":"10.1109/TG.2023.3346331","url":null,"abstract":"We present \u0000<italic>latent combinational game design</i>\u0000—an approach for generating playable games that blend a given set of games in a desired combination using deep generative latent variable models. We use Gaussian mixture variational autoencoders (GMVAEs), which model the VAE latent space via a mixture of Gaussian components. Through supervised training, each component encodes levels from one game and lets us define blended games as linear combinations of these components. This enables generating new games that blend the input games as well as controlling the relative proportions of each game in the blend. We also extend prior blending work using conditional VAEs and compare against the GMVAE and additionally introduce a hybrid conditional GMVAE architecture, which lets us generate whole blended levels and layouts. Results show that these approaches can generate playable games that blend the input games in specified combinations. We use both platformers and dungeon-based games to demonstrate our results.","PeriodicalId":55977,"journal":{"name":"IEEE Transactions on Games","volume":"16 3","pages":"659-669"},"PeriodicalIF":1.7,"publicationDate":"2023-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142218212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
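The blending mechanism in the abstract, one Gaussian component per game with blended games defined as linear combinations of components, can be sketched directly in latent space. The component means, variances, and the commented-out decoder below are untrained placeholders, not the authors' GMVAE.

```python
# Sketch: sample a latent vector for a blend of games from a mixture space.
import torch

n_games, latent_dim = 3, 16
component_mu = torch.randn(n_games, latent_dim)      # one mean per game
component_logvar = torch.zeros(n_games, latent_dim)  # unit variance here

def sample_blend(weights):
    """Latent sample for a blend, e.g. 70% game 0 and 30% game 2."""
    w = torch.tensor(weights, dtype=torch.float)
    w = w / w.sum()
    mu = w @ component_mu                        # linear combination of means
    std = torch.exp(0.5 * (w @ component_logvar))
    return mu + std * torch.randn(latent_dim)    # reparameterized sample

z = sample_blend([0.7, 0.0, 0.3])
# level = decoder(z)  # a trained decoder would map z to a playable level
```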
GCMA: An Adaptive Multiagent Reinforcement Learning Framework With Group Communication for Complex and Similar Tasks Coordination
IF 1.7, Q4 (Computer Science)
IEEE Transactions on Games Pub Date: 2023-12-25 DOI: 10.1109/TG.2023.3346394
Kexing Peng;Tinghuai Ma;Xin Yu;Huan Rong;Yurong Qian;Najla Al-Nabhan
{"title":"GCMA: An Adaptive Multiagent Reinforcement Learning Framework With Group Communication for Complex and Similar Tasks Coordination","authors":"Kexing Peng;Tinghuai Ma;Xin Yu;Huan Rong;Yurong Qian;Najla Al-Nabhan","doi":"10.1109/TG.2023.3346394","DOIUrl":"10.1109/TG.2023.3346394","url":null,"abstract":"Coordinating multiple agents with diverse tasks and changing goals without interference is a challenge. Multiagent reinforcement learning (MARL) aims to develop effective communication and joint policies using group learning. Some of the previous approaches required each agent to maintain a set of networks independently, resulting in no consideration of interactions. Joint communication work causes agents receiving information unrelated to their own tasks. Currently, agents with different task divisions are often grouped by action tendency, but this can lead to poor dynamic grouping. This article presents a two-phase solution for multiple agents, addressing these issues. The first phase develops heterogeneous agent communication joint policies using a group communication MARL framework (GCMA). The framework employs a periodic grouping strategy, reducing exploration and communication redundancy by dynamically assigning agent group hidden features through hypernetwork and graph communication. The scheme efficiently utilizes resources for adapting to multiple similar tasks. In the second phase, each agent's policy network is distilled into a generalized simple network, adapting to similar tasks with varying quantities and sizes. GCMA is tested in complex environments, such as \u0000<italic>StarCraft II</i>\u0000 and unmanned aerial vehicle (UAV) take-off, showing its well-performing for large-scale, coordinated tasks. It shows GCMA's effectiveness for solid generalization in multitask tests with simulated pedestrians.","PeriodicalId":55977,"journal":{"name":"IEEE Transactions on Games","volume":"16 3","pages":"670-682"},"PeriodicalIF":1.7,"publicationDate":"2023-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142218213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
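For the second phase described in the abstract, distilling each agent's policy network into a smaller generalized network, a standard formulation minimizes the KL divergence between teacher and student action distributions over collected observations. The network sizes, synthetic data, and loss choice below are illustrative assumptions, since the abstract does not specify them.

```python
# Sketch: distill a policy network into a smaller student via KL matching.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 8))
student = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 8))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(200):
    obs = torch.randn(128, 64)  # stand-in for observations from rollouts
    with torch.no_grad():
        target = F.softmax(teacher(obs), dim=-1)    # teacher action probs
    log_pred = F.log_softmax(student(obs), dim=-1)  # student log-probs
    loss = F.kl_div(log_pred, target, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```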
IEEE Transactions on Games Publication Information
IF 2.3, Q4 (Computer Science)
IEEE Transactions on Games Pub Date: 2023-12-15 DOI: 10.1109/TG.2023.3340069
{"title":"IEEE Transactions on Games Publication Information","authors":"","doi":"10.1109/TG.2023.3340069","DOIUrl":"https://doi.org/10.1109/TG.2023.3340069","url":null,"abstract":"","PeriodicalId":55977,"journal":{"name":"IEEE Transactions on Games","volume":"15 4","pages":"C2-C2"},"PeriodicalIF":2.3,"publicationDate":"2023-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10361584","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138678574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0