IEEE Transactions on Games: Latest Articles

Adapter-RL: Adaptation of Any Agent Using Reinforcement Learning
IF 2.8 · CAS Tier 4 · Computer Science
IEEE Transactions on Games Pub Date : 2026-03-01 Epub Date: 2026-01-12 DOI: 10.1109/TG.2025.3628606
Yizhao Jin;Gregory Slabaugh;Simon Lucas
Abstract: This study introduces Adapter-RL, a novel architecture aimed at improving the performance of existing agents in reinforcement learning tasks. The approach integrates human-knowledge-based systems with deep reinforcement learning, combining the interpretability and rule-based logic of the former with the adaptive learning capabilities of the latter. A crucial aspect of this method is the use of "adapters": concise modules integrated with a base-agent, designed to adjust the policy for specific tasks. The Adapter-RL framework comprises a base-agent responsible for initial decision-making and an adapter module that refines these decisions to meet task-specific requirements. The adapter facilitates efficient training, reduces parameter requirements, and mitigates catastrophic forgetting, enhancing overall performance and adaptability. This architecture enables agents to be fine-tuned effectively, allowing them to adapt to complex tasks with rapidly changing or uncertain conditions. The research demonstrates the efficacy of Adapter-RL through experiments in microRTS, a challenging real-time strategy game. The results show that Adapter-RL significantly accelerates the training process and outperforms base-agents across various tasks, highlighting its efficiency and robustness. In addition, the study investigates the temperature coefficient tradeoff in adapter training, finding that optimal performance is achievable within a broad range of coefficients, which underscores the stability of the method. The Adapter-RL method enables the specialization of base AI for specific characters or scenarios.
IEEE Transactions on Games, vol. 18, no. 1, pp. 173-184.
Citations: 0
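The adapter idea described in the abstract, a small trainable module layered on top of a frozen base-agent, can be sketched minimally. Everything below is a hypothetical illustration (a linear policy head standing in for a pretrained network, an additive logit adapter, zero-initialized so the adapted policy starts out identical to the base policy); the paper's actual adapter architecture may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, N_ACTIONS = 8, 4

# Frozen base-agent: a fixed linear policy head (stand-in for a pretrained network).
W_base = rng.normal(size=(OBS_DIM, N_ACTIONS))

def base_logits(obs):
    return obs @ W_base

# Adapter: a small trainable module whose output is added to the base logits.
# Zero initialization means the adapted policy begins identical to the base
# policy, so training only gradually specializes it for the new task.
W_adapter = np.zeros((OBS_DIM, N_ACTIONS))

def adapted_logits(obs):
    return base_logits(obs) + obs @ W_adapter

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

obs = rng.normal(size=(2, OBS_DIM))
# With a zero-initialized adapter, both policies agree exactly.
print(np.allclose(softmax(adapted_logits(obs)), softmax(base_logits(obs))))  # True
```

During fine-tuning only `W_adapter` would receive gradients, which is one plausible reading of how the method reduces parameter requirements and mitigates catastrophic forgetting.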
Adaptive Graph Coordination Strategy in Multiagent Reinforcement Learning
IF 2.8 · CAS Tier 4 · Computer Science
IEEE Transactions on Games Pub Date : 2026-03-01 Epub Date: 2025-11-13 DOI: 10.1109/TG.2025.3629681
Zhongwei Yu;Jingqing Ruan;Dengpeng Xing
Abstract: Many real-world applications involve a team of agents who must coordinate their policies in real time to achieve a shared goal. Previous studies mainly focus on decentralized control to maximize common rewards, with little consideration of coordination between control policies, which is critical in dynamic and complicated environments. In view of this issue, we propose a novel adaptive graph coordination strategy that factorizes the joint policy into an adaptive graph generator and a graph-based coordinated policy. We employ a difference-aware module to control when to generate graphs and an encoder–decoder module to acquire the underlying decision graph structure. Moreover, we introduce DAGness- and DAG depth-constrained optimization to adjust the graph structure and strike a balance between efficiency and performance. We also present a graph-based coordinated policy to make asynchronous decisions based on the interagent coordination dependencies implied in the generated graph. Empirical evaluations on several cooperative multiagent environments demonstrate the superiority of the proposed method, with faster convergence and more efficient coordinated policies.
IEEE Transactions on Games, vol. 18, no. 1, pp. 151-162.
Citations: 0
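The DAGness and DAG-depth constraints mentioned in the abstract imply a basic structural check on generated coordination graphs. The sketch below is illustrative only, not the paper's constrained-optimization formulation: a Kahn-style topological pass that reports the longest-path depth of a generated adjacency matrix, or detects that it is not a DAG at all.

```python
def dag_depth(adj):
    """Return the DAG depth (longest path, in edges) of a directed graph given
    as an n x n 0/1 adjacency matrix, or None if the graph contains a cycle."""
    n = len(adj)
    indeg = [sum(adj[u][v] for u in range(n)) for v in range(n)]
    level = [0] * n                      # longest path from any source node
    queue = [v for v in range(n) if indeg[v] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for v in range(n):
            if adj[u][v]:
                level[v] = max(level[v], level[u] + 1)
                indeg[v] -= 1
                if indeg[v] == 0:
                    queue.append(v)
    # If the topological pass did not reach every node, a cycle exists.
    return max(level) if seen == n else None

# Chain 0 -> 1 -> 2 plus a shortcut 0 -> 2: acyclic, depth 2.
chain = [[0, 1, 1], [0, 0, 1], [0, 0, 0]]
print(dag_depth(chain))   # 2
# Adding the edge 2 -> 0 creates a cycle.
cyclic = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
print(dag_depth(cyclic))  # None
```

In a graph generator of this kind, such a check could reject or penalize candidate graphs that violate an acyclicity or depth budget; how the paper enforces its constraints during optimization is not specified in the abstract.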
Refining Evaluation Functions for Game 2048 by Extended Temporal Difference Learning
IF 2.8 · CAS Tier 4 · Computer Science
IEEE Transactions on Games Pub Date : 2026-03-01 Epub Date: 2025-09-04 DOI: 10.1109/TG.2025.3606517
Weikai Wang;Kiminori Matsuzaki
Abstract: The game 2048 has attracted millions of people with its simple yet challenging gameplay, leading to the development of numerous computer players. Most successful computer players for 2048 use evaluation functions trained through temporal difference learning (TD learning) or its variants. While TD learning is highly effective and can improve evaluation functions quickly, the performance of these functions often plateaus after a certain number of timesteps. Therefore, it is important to refine those evaluation functions to further enhance the performance of computer players. In this article, we extend the conventional TD learning approach and propose two refinement algorithms for 2048. First, we conducted detailed experiments to refine the best open-source neural network, and achieved significant performance improvements, increasing the average score from $2.49 \times 10^{5}$ to $3.37 \times 10^{5}$ in greedy play (1-ply lookahead) and from $4.87 \times 10^{5}$ to $5.45 \times 10^{5}$ with 3-ply expectimax search. We also applied our refinement method to the state-of-the-art N-tuple network, improving the average score from $5.85 \times 10^{5}$ to $6.10 \times 10^{5}$ with 6-ply expectimax search and the tile-downgrading trick.
IEEE Transactions on Games, vol. 18, no. 1, pp. 56-65.
Citations: 0
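The TD-learning baseline that this work extends can be illustrated on a toy problem. The sketch below is a generic tabular TD(0) update on a deterministic five-state chain, not the authors' 2048-specific refinement algorithms; the state space, reward, and constants are arbitrary stand-ins.

```python
# Toy stand-in for TD learning of an evaluation function: a 5-state chain where
# the agent moves right each step and receives reward 1 on reaching the end.
N_STATES, ALPHA, GAMMA = 5, 0.1, 1.0
V = [0.0] * (N_STATES + 1)   # V[N_STATES] is the terminal state's value (0)

for episode in range(2000):
    s = 0
    while s < N_STATES:
        s_next = s + 1                          # deterministic move toward terminal
        r = 1.0 if s_next == N_STATES else 0.0
        # TD(0): pull V(s) toward the bootstrapped target r + gamma * V(s').
        V[s] += ALPHA * (r + GAMMA * V[s_next] - V[s])
        s = s_next

print([round(v, 2) for v in V[:N_STATES]])  # every state's value converges to 1.0
```

Players for 2048 typically apply updates of this shape to an afterstate evaluation function (an N-tuple network rather than a table); the article's contribution is in how training is continued once this plain scheme plateaus.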
LetheVR: A First-Person Serious Game for Empathy and Public Understanding of Dementia
IF 2.8 · CAS Tier 4 · Computer Science
IEEE Transactions on Games Pub Date : 2026-03-01 Epub Date: 2025-09-23 DOI: 10.1109/TG.2025.3613758
Cheng Nie;Ding Ding;Chenjun Wu;Sijin Chen;Zhuying Li
Abstract: Dementia, one of the leading causes of neurodegenerative mortality in older adults, remains widely misunderstood by the general public: not only stigmatized socially, but also subject to persistent misconceptions about its symptoms, progression, and lived experience. These misunderstandings hinder timely care, empathy, and social support. To address this, we introduce LetheVR, a first-person serious game designed to promote both empathic understanding of individuals living with dementia and cognitive awareness of the disease itself. Targeted at general audiences, the system adopts experiential methods embedded within a game-based structure. Unlike traditional media and static educational tools, LetheVR integrates immersive symptom simulation, narrative-driven gameplay, and guided pedagogical reflection to engage users in the lived experience of dementia. In a controlled study with 60 participants, LetheVR significantly outperformed conventional interventions in improving measured empathy levels and symptom understanding. These findings highlight the potential of virtual reality combined with serious games and experiential methods as an effective public health intervention for reshaping attitudes and correcting public misunderstandings about dementia.
IEEE Transactions on Games, vol. 18, no. 1, pp. 77-88.
Citations: 0
Reasoning Capabilities of Large Language Models in Games
IF 2.8 · CAS Tier 4 · Computer Science
IEEE Transactions on Games Pub Date : 2026-03-01 Epub Date: 2025-12-18 DOI: 10.1109/TG.2025.3645858
Christian Poglitsch;Aaron Giner;Johanna Pirker
Abstract: Artificial intelligence (AI) characters in games, particularly nonplayer characters, offer exciting new possibilities for enhancing immersion. While these characters can respond to players' questions, another challenge remains: are they capable of engaging in complex reasoning while also retaining and effectively utilizing information from past interactions? Using the memory stream of conversational agents, our aim is to explore the reasoning capabilities of various large language models, from large server-based models, such as ChatGPT-3.5, to small, local models, such as Llama 2 7B and Mistral 7B. Our objective is to evaluate AI performance in gamified environments, starting with simple reasoning tasks suited to a game context within social settings, and progressively advancing to a role-playing game that simulates a job-hiring process. The results suggest that both larger server-based models and smaller models exhibit reasoning capabilities in socially gamified scenarios. However, especially for small local models, more research is needed to enhance the quality of their performance.
IEEE Transactions on Games, vol. 18, no. 1, pp. 163-172.
Citations: 0
Grammar-Based Game Description Generation Using Large Language Models
IF 2.8 · CAS Tier 4 · Computer Science
IEEE Transactions on Games Pub Date : 2026-03-01 Epub Date: 2024-12-19 DOI: 10.1109/TG.2024.3520214
Tsunehiko Tanaka;Edgar Simo-Serra
Abstract: Game description language (GDL) provides a standardized way to express diverse games in a machine-readable format, enabling automated game simulation and evaluation. While previous research has explored game description generation using search-based methods, generating GDL descriptions from natural language remains a challenging task. This article presents a novel framework that leverages large language models (LLMs) to generate grammatically accurate game descriptions from natural language. Our approach consists of two stages: first, we gradually generate a minimal grammar based on GDL specifications; second, we iteratively improve the game description through grammar-guided generation. Our framework employs a specialized parser that identifies valid subsequences and candidate symbols from LLM responses, enabling gradual refinement of the output to ensure grammatical correctness. Experimental results demonstrate that our iterative improvement approach significantly outperforms baseline methods that directly use LLM outputs.
IEEE Transactions on Games, vol. 18, no. 1, pp. 30-43.
Citations: 0
Unveiling Virtual Economic Indicators to Improve User Acquisition, Retention, and Monetization: An Application to MMOGs
IF 2.8 · CAS Tier 4 · Computer Science
IEEE Transactions on Games Pub Date : 2026-03-01 Epub Date: 2025-10-16 DOI: 10.1109/TG.2025.3622530
Átila Moreira;Francisco Ramos;Flávia Barros;Geber Ramalho
Abstract: Games are increasingly utilizing economic incentives to enhance key performance metrics associated with the user acquisition, retention, and monetization (ARM) funnel. Moreover, certain games take this approach further by establishing virtual economies, thereby fostering a heightened sense of ownership among players (Castronova, 2008). Unfortunately, dealing with virtual economies brings new challenges for game operators, such as identifying and managing the key factors affecting the economy and, consequently, users' satisfaction. It is not simple to unveil relevant economic indicators for virtual economies since: 1) there is a great amount and diversity of potentially relevant variables and 2) current models for virtual economies do not address this issue. Drawing on (real) economics, where indicators are useful tools for managing the complexity of real economies, we propose six novel economic indicators for virtual economies. After a preliminary validation of these indicators with experts, we implemented them through a visual dashboard to assist operators in making economic decisions within a commercial massively multiplayer online game: With Your Destiny. We collected data over a one-year period, encompassing a sample of 416,000 unique players. The results indicate significant improvements in the ARM metrics, alongside enhancements in customer service and operational efficiency. Beyond the proposed indicators, a broader contribution of this work lies in reinforcing the profitable intersection between longstanding economic principles and modern game operations.
IEEE Transactions on Games, vol. 18, no. 1, pp. 101-114.
Citations: 0
IEEE Computational Intelligence Society Information
IF 2.8 · CAS Tier 4 · Computer Science
IEEE Transactions on Games Pub Date : 2026-03-01 Epub Date: 2026-03-18 DOI: 10.1109/TG.2026.3665427
IEEE Transactions on Games, vol. 18, no. 1, pp. C3-C3. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11442849
Citations: 0
IEEE Transactions on Games Publication Information
IF 2.8 · CAS Tier 4 · Computer Science
IEEE Transactions on Games Pub Date : 2026-03-01 Epub Date: 2026-03-18 DOI: 10.1109/TG.2026.3665425
IEEE Transactions on Games, vol. 18, no. 1, pp. C2-C2. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11442850
Citations: 0
Investigating Scale-Independent UCT Exploration Factor Strategies
IF 2.8 · CAS Tier 4 · Computer Science
IEEE Transactions on Games Pub Date : 2026-03-01 Epub Date: 2025-11-17 DOI: 10.1109/TG.2025.3634024
Robin Schmöcker;Christoph Schnell;Alexander Dockhorn
Abstract: The upper confidence bounds for trees (UCT) algorithm is not agnostic to the reward scale of the game it is applied to. For zero-sum games with sparse rewards in $\lbrace -1,0,1\rbrace$ at the end of the game, this is not a problem, but many games feature dense rewards with hand-picked reward scales, causing a node's Q-value to span different magnitudes across different games. In this article, we evaluate various strategies for adaptively choosing the UCT exploration constant $\lambda$, called $\lambda$-strategies, which are agnostic to the game's reward scale. These $\lambda$-strategies include those proposed in the literature as well as five new strategies. Given our experimental results, we recommend using one of our newly suggested $\lambda$-strategies, which is to choose $\lambda$ as $2 \cdot \sigma$, where $\sigma$ is the empirical standard deviation of all state–action pairs' Q-values in the search tree. This method outperforms existing $\lambda$-strategies across a wide range of tasks, both in terms of a single parameter value and in the peak performance obtained by optimizing all available parameters.
IEEE Transactions on Games, vol. 18, no. 1, pp. 211-216.
Citations: 0
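The recommended strategy, choosing the exploration factor as twice the empirical standard deviation of the tree's Q-values, is easy to sketch. The UCB formula and the numbers below are illustrative assumptions (the paper's exact UCT variant and tree bookkeeping may differ); the point shown is that child selection becomes invariant to rescaling the game's rewards.

```python
import math

def ucb_score(q_child, n_child, n_parent, lam):
    """Standard UCT selection score with an explicit exploration factor lam."""
    return q_child + lam * math.sqrt(math.log(n_parent) / n_child)

def adaptive_lambda(q_values):
    """Scale-independent exploration factor: twice the empirical standard
    deviation of all state-action Q-values currently in the search tree."""
    n = len(q_values)
    mean = sum(q_values) / n
    var = sum((q - mean) ** 2 for q in q_values) / n
    return 2.0 * math.sqrt(var)

# Same game on two reward scales: Q-values differ by a factor of 1000,
# but because lambda scales with them, the selected child is identical.
q_small = [0.10, 0.30, 0.20, 0.40]
q_big = [q * 1000 for q in q_small]
visits = [10, 5, 8, 3]
n_parent = sum(visits)

def best_child(qs):
    lam = adaptive_lambda(qs)
    scores = [ucb_score(q, n, n_parent, lam) for q, n in zip(qs, visits)]
    return max(range(len(qs)), key=scores.__getitem__)

print(best_child(q_small) == best_child(q_big))  # True: selection is scale-invariant
```

Because both the Q-values and $\sigma$ scale linearly under a reward rescaling, every UCB score is multiplied by the same constant and the argmax is unchanged, which is exactly the scale-independence property the article investigates.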