Counter-Strike Deathmatch with Large-Scale Behavioural Cloning

Tim Pearce, Jun Zhu
{"title":"基于大规模行为克隆的《反恐精英》死亡竞赛","authors":"Tim Pearce, Jun Zhu","doi":"10.1109/CoG51982.2022.9893617","DOIUrl":null,"url":null,"abstract":"This paper describes an AI agent that plays the modern first-person-shooter (FPS) video game ‘Counter-Strike; Global Offensive’ (CSGO) from pixel input. The agent, a deep neural network, matches the performance of a casual human gamer on the deathmatch game mode whilst adopting a humanlike play style. Much previous research has focused on games with convenient APIs and low-resolution graphics, allowing them to be run cheaply at scale. This is not the case for CSGO, with system requirements orders of magnitude higher than previously studied FPS games. This limits the quantity of on-policy data that can be generated, precluding pure reward-driven reinforcement learning (RL) algorithms. Our solution uses a two-stage behavioural cloning methodology; 1) Pre-train on a large dataset scraped from human play on public servers (5.5 million frames or 95 hours) where actions are labelled in an automated way. 2) Fine-tune on a small dataset of clean expert demonstrations (190 thousand frames or 3 hours). This scale is an order of magnitude larger than prior work on imitation learning in FPS games, whilst being far more data efficient than pure RL algorithms. Video introduction: https://youtu.be/rnz3lmfSHv0 Code, model & datasets: https://github.com/TeaPearce","PeriodicalId":394281,"journal":{"name":"2022 IEEE Conference on Games (CoG)","volume":"96 4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":"{\"title\":\"Counter-Strike Deathmatch with Large-Scale Behavioural Cloning\",\"authors\":\"Tim Pearce, Jun Zhu\",\"doi\":\"10.1109/CoG51982.2022.9893617\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper describes an AI agent that plays the modern first-person-shooter (FPS) video game ‘Counter-Strike; Global Offensive’ (CSGO) from pixel input. The agent, a deep neural network, matches the performance of a casual human gamer on the deathmatch game mode whilst adopting a humanlike play style. Much previous research has focused on games with convenient APIs and low-resolution graphics, allowing them to be run cheaply at scale. This is not the case for CSGO, with system requirements orders of magnitude higher than previously studied FPS games. This limits the quantity of on-policy data that can be generated, precluding pure reward-driven reinforcement learning (RL) algorithms. Our solution uses a two-stage behavioural cloning methodology; 1) Pre-train on a large dataset scraped from human play on public servers (5.5 million frames or 95 hours) where actions are labelled in an automated way. 2) Fine-tune on a small dataset of clean expert demonstrations (190 thousand frames or 3 hours). This scale is an order of magnitude larger than prior work on imitation learning in FPS games, whilst being far more data efficient than pure RL algorithms. 
Video introduction: https://youtu.be/rnz3lmfSHv0 Code, model & datasets: https://github.com/TeaPearce\",\"PeriodicalId\":394281,\"journal\":{\"name\":\"2022 IEEE Conference on Games (CoG)\",\"volume\":\"96 4 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-04-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"16\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE Conference on Games (CoG)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CoG51982.2022.9893617\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE Conference on Games (CoG)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CoG51982.2022.9893617","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 16

Abstract

This paper describes an AI agent that plays the modern first-person-shooter (FPS) video game 'Counter-Strike: Global Offensive' (CSGO) from pixel input. The agent, a deep neural network, matches the performance of a casual human gamer on the deathmatch game mode whilst adopting a humanlike play style. Much previous research has focused on games with convenient APIs and low-resolution graphics, allowing them to be run cheaply at scale. This is not the case for CSGO, whose system requirements are orders of magnitude higher than previously studied FPS games. This limits the quantity of on-policy data that can be generated, precluding pure reward-driven reinforcement learning (RL) algorithms. Our solution uses a two-stage behavioural cloning methodology: 1) Pre-train on a large dataset scraped from human play on public servers (5.5 million frames or 95 hours), where actions are labelled in an automated way. 2) Fine-tune on a small dataset of clean expert demonstrations (190 thousand frames or 3 hours). This scale is an order of magnitude larger than prior work on imitation learning in FPS games, whilst being far more data efficient than pure RL algorithms.
Video introduction: https://youtu.be/rnz3lmfSHv0
Code, model & datasets: https://github.com/TeaPearce
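The two-stage recipe in the abstract maps onto a standard supervised-imitation loop: the same training procedure is run twice, first on the large automatically labelled scraped dataset, then, typically with a lower learning rate, on the small clean expert set. The sketch below is a minimal illustration under those assumptions, not the authors' implementation (their actual code is linked above); the function name, dataset objects, discretised-action loss, and hyperparameters are all hypothetical.

```python
# Hypothetical sketch of two-stage behavioural cloning, as described in the abstract:
# (1) pre-train on a large, automatically labelled dataset scraped from public-server
# play, then (2) fine-tune on a small set of clean expert demonstrations.
# All names and hyperparameters here are illustrative, not the paper's implementation.
import torch
from torch import nn
from torch.utils.data import DataLoader, Dataset


def behavioural_cloning(model: nn.Module,
                        dataset: Dataset,
                        epochs: int,
                        lr: float,
                        batch_size: int = 64,
                        device: str = "cuda") -> nn.Module:
    """Supervised imitation: predict the recorded human action for each frame."""
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()  # assumes discretised actions (key presses, mouse bins)
    model.to(device).train()
    for _ in range(epochs):
        for frames, actions in loader:            # frames: pixel observations
            logits = model(frames.to(device))     # predicted action distribution
            loss = loss_fn(logits, actions.to(device))
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
    return model


# Stage 1: pre-train on the large scraped dataset (~5.5M frames, auto-labelled).
# Stage 2: fine-tune on the small clean expert dataset (~190k frames), lower lr.
# `scraped_dataset` and `expert_dataset` stand in for Dataset objects yielding
# (frame, action_label) pairs; `make_policy_network()` is a placeholder.
# policy = make_policy_network()
# policy = behavioural_cloning(policy, scraped_dataset, epochs=5, lr=1e-4)
# policy = behavioural_cloning(policy, expert_dataset, epochs=10, lr=1e-5)
```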