MMBench: The Match Making Benchmark

Yongsheng Liu, Yanxing Qi, Jiangwei Zhang, Connie Kou, Qiaolin Chen
Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining (WSDM '23), published 2023-02-27. DOI: 10.1145/3539597.3573023
Citations: 0

Abstract

Video gaming has gained huge popularity over the last few decades; by some estimates there are about 2.9 billion gamers globally. Among all genres, competitive games are among the most popular. Matchmaking is a core problem for competitive games: it determines player satisfaction and hence influences a game's success. Most matchmaking systems group queuing players into opposing teams of similar skill levels. The key challenge is to rate players' skills accurately based on their match performances. Considerable effort has gone into developing such rating systems, e.g., Elo and Glicko. However, games with different gameplay may have different game modes, which can require extensive effort to customize a rating system. Even though there are many rating systems and various customization strategies, there is a clear lack of a systematic framework with which different rating systems can be analysed and compared against each other. Such a framework could help game developers identify the bottlenecks of their matchmaking systems and improve their performance. To bridge this gap, we present MMBench, the first benchmark framework for evaluating rating systems. It serves as a fair means of comparison and enables a deeper understanding of different rating systems. In this paper, we show how MMBench benchmarks the three major rating systems, Elo, Glicko, and TrueSkill, in the battle modes of 1 vs 1, n vs n, battle royale, and teamed battle royale, over both real and synthetic datasets.
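For context on the rating systems being benchmarked, the Elo update for a 1 vs 1 match can be sketched as follows. This is a generic illustration, not code from MMBench; the K-factor of 32 and the 400-point logistic scale are conventional defaults, assumed here for concreteness.

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """Return both players' updated ratings after one match.

    score_a is 1.0 for an A win, 0.5 for a draw, 0.0 for an A loss.
    """
    e_a = expected_score(r_a, r_b)
    new_a = r_a + k * (score_a - e_a)
    new_b = r_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b

# Example: two equal-rated players, A wins.
print(elo_update(1500.0, 1500.0, 1.0))  # -> (1516.0, 1484.0)
```

Glicko and TrueSkill extend this idea by also tracking rating uncertainty, which is what makes a systematic side-by-side comparison across game modes non-trivial.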