Meterstick: Benchmarking Performance Variability in Cloud and Self-hosted Minecraft-like Games
Jerrit Eickhoff, Jesse Donkervliet, A. Iosup
2022 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS)
DOI: 10.1145/3578244.3583724
Citations: 1
Abstract
One of the most popular types of online games is the Minecraft-like Game (MLG), in which players can terraform the environment. MLGs currently support their many players by replicating isolated instances with limited scalability. We posit that performance variability is a key cause of the lack of scalability in MLGs and design the first benchmark that focuses on MLG performance variability, identifying specialized workloads, metrics, and processes. We conduct real-world benchmarking of MLGs, both cloud-based and self-hosted. We find that environment-based workloads and cloud deployment are significant sources of performance variability: peak latency degrades sharply to 20.7 times the arithmetic mean, and exceeds the performance requirements by a factor of 7.4.
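The variability metric the abstract reports can be illustrated with a minimal sketch: the ratio of peak latency to the arithmetic mean over a series of server tick durations, and the ratio of peak latency to a fixed performance requirement. The sample values and the 50 ms tick budget (the common 20 ticks-per-second target in Minecraft-like games) are illustrative assumptions, not data from the paper.

```python
# Hypothetical sketch: quantify performance variability as the abstract
# describes it, from a series of server tick durations (values made up).
tick_durations_ms = [48, 51, 47, 50, 49, 52, 46, 50, 1035, 49]

mean_ms = sum(tick_durations_ms) / len(tick_durations_ms)
peak_ms = max(tick_durations_ms)

# Peak-to-mean ratio: how sharply peak latency degrades relative to the
# arithmetic mean (the abstract reports up to 20.7x for real MLGs).
peak_to_mean = peak_ms / mean_ms

# Compare the peak against a performance requirement, here assumed to be
# the 50 ms tick budget needed to sustain 20 ticks per second.
TICK_BUDGET_MS = 50
requirement_ratio = peak_ms / TICK_BUDGET_MS

print(f"mean={mean_ms:.1f} ms, peak={peak_ms} ms, "
      f"peak/mean={peak_to_mean:.1f}x, peak/budget={requirement_ratio:.1f}x")
```

A single long tick dominates the peak while barely moving the mean, which is why a benchmark focused on variability must report peak-based metrics rather than averages alone.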