Bot Believability Assessment: A Novel Protocol & Analysis of Judge Expertise

Cindy Even, Anne-Gwenn Bosser, Cédric Buche
{"title":"Bot Believability Assessment: A Novel Protocol & Analysis of Judge Expertise","authors":"Cindy Even, Anne-Gwenn Bosser, Cédric Buche","doi":"10.1109/CW.2018.00027","DOIUrl":null,"url":null,"abstract":"For video game designers, being able to provide both interesting and human-like opponents is a definite benefit to the game's entertainment value. The development of such believable virtual players also known as Non-Player Characters or bots remains a challenge which has kept the research community busy for many years. However, evaluation methods vary widely which can make systems difficult to compare. The BotPrize competition has provided some highly regarded assessment methods for comparing bots' believability in a first person shooter game. It involves humans judging virtual agents competing for the most believable bot title. In this paper, we describe a system allowing us to partly automate such a competition, a novel evaluation protocol based on an early version of the BotPrize, and an analysis of the data we collected regarding human judges during a national event. We observed that the best judges were those who play video games the most often, especially games involving combat, and are used to playing against virtual players, strangers and physically present players. 
This result is a starting point for the design of a new generic and rigorous protocol for the evaluation of bots' believability in first person shooter games.","PeriodicalId":388539,"journal":{"name":"2018 International Conference on Cyberworlds (CW)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 International Conference on Cyberworlds (CW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CW.2018.00027","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

For video game designers, being able to provide both interesting and human-like opponents is a definite benefit to a game's entertainment value. The development of such believable virtual players, also known as Non-Player Characters or bots, remains a challenge that has kept the research community busy for many years. However, evaluation methods vary widely, which can make systems difficult to compare. The BotPrize competition has provided some highly regarded methods for assessing and comparing bots' believability in a first-person shooter game: it involves human judges assessing virtual agents competing for the title of most believable bot. In this paper, we describe a system allowing us to partly automate such a competition, a novel evaluation protocol based on an early version of the BotPrize, and an analysis of the data we collected on human judges during a national event. We observed that the best judges were those who play video games most often, especially games involving combat, and who are used to playing against virtual players, strangers, and physically present players. This result is a starting point for the design of a new generic and rigorous protocol for evaluating bots' believability in first-person shooter games.
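To illustrate the kind of data a judging protocol like this yields, the sketch below computes a simple per-judge accuracy score: the fraction of players each judge correctly identified as human or bot. The records, field names, and scoring rule here are hypothetical simplifications for illustration, not the metric or data from the paper.

```python
from collections import defaultdict

def judge_accuracy(records):
    """Return, per judge, the fraction of correct human/bot identifications.

    records: iterable of (judge, is_bot, judged_as_bot) tuples, where
    is_bot is the ground truth and judged_as_bot is the judge's verdict.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for judge, is_bot, judged_as_bot in records:
        total[judge] += 1
        if is_bot == judged_as_bot:
            correct[judge] += 1
    return {judge: correct[judge] / total[judge] for judge in total}

# Hypothetical judging records: judge_A misidentifies a human as a bot,
# judge_B gets both calls right.
records = [
    ("judge_A", True, True),
    ("judge_A", False, True),
    ("judge_B", True, True),
    ("judge_B", False, False),
]
print(judge_accuracy(records))  # judge_A: 0.5, judge_B: 1.0
```

A score like this could then be correlated with the gaming-habit questionnaire responses the paper describes, to test which habits predict good judging.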