Cindy Even, Anne-Gwenn Bosser, Cédric Buche
2018 International Conference on Cyberworlds (CW), October 2018
DOI: 10.1109/CW.2018.00027
Bot Believability Assessment: A Novel Protocol & Analysis of Judge Expertise
For video game designers, providing opponents that are both interesting and human-like clearly adds to a game's entertainment value. Developing such believable virtual players, also known as Non-Player Characters or bots, remains a challenge that has occupied the research community for many years. However, evaluation methods vary widely, which can make systems difficult to compare. The BotPrize competition has provided highly regarded assessment methods for comparing bots' believability in a first-person shooter game: human judges assess virtual agents competing for the title of most believable bot. In this paper, we describe a system that allows us to partly automate such a competition, a novel evaluation protocol based on an early version of the BotPrize, and an analysis of the data we collected on human judges during a national event. We observed that the best judges were those who play video games most often, especially games involving combat, and who are used to playing against virtual players, strangers, and physically present players. This result is a starting point for the design of a new, generic, and rigorous protocol for evaluating bots' believability in first-person shooter games.