Procedural Generation of Rollercoasters
Jonathan Campbell; Clark Verbrugge
IEEE Transactions on Games, vol. 16, no. 4, pp. 882-891
DOI: 10.1109/TG.2024.3404001 | Published 2024-03-22
https://ieeexplore.ieee.org/document/10536608/
Abstract: The RollerCoaster Tycoon video game involves creating rollercoaster tracks that optimize for various game metrics while remaining feasible structures within physical and spatial bounds. Generating such tracks procedurally is therefore a challenge. In this work, we explore multiple approaches to rollercoaster track generation using Markov chains and various deep learning methods. We show that we can achieve relatively good tracks in terms of the game's measure of success, and that reinforcement learning allows for more control over the generated tracks and for different rider experiences. A focus on multiple measures allows our work to extend to other track properties drawn from real-world research. This article extends a previous publication by adding a new reward function for our reinforcement learning agent as well as further analyses of the generated tracks, including a metric measuring rider excitement over time, a revised novelty metric, and an analysis of controllability.
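To illustrate the Markov-chain approach the abstract mentions, here is a minimal sketch of a first-order Markov generator over track-piece types. The piece vocabulary and transition probabilities below are purely illustrative assumptions, not the states or learned probabilities used in the paper.

```python
import random

# Hypothetical track-piece types and hand-picked transition probabilities
# (illustrative only; the paper learns its own model from game data).
TRANSITIONS = {
    "start": {"flat": 1.0},
    "flat":  {"flat": 0.4, "climb": 0.3, "drop": 0.2, "turn": 0.1},
    "climb": {"climb": 0.3, "flat": 0.3, "drop": 0.4},
    "drop":  {"flat": 0.5, "turn": 0.3, "climb": 0.2},
    "turn":  {"flat": 0.6, "climb": 0.2, "drop": 0.2},
}

def sample_track(length, seed=None):
    """Sample a sequence of track pieces by walking the Markov chain."""
    rng = random.Random(seed)
    track, piece = [], "start"
    for _ in range(length):
        nxt = TRANSITIONS[piece]
        # Draw the next piece according to the current piece's distribution.
        piece = rng.choices(list(nxt), weights=list(nxt.values()))[0]
        track.append(piece)
    return track

track = sample_track(12, seed=42)
```

A real generator would also need to reject or repair sequences that violate physical and spatial constraints, which is part of what makes the problem hard and motivates the reinforcement-learning approach described above.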