{"title":"用于多层应用程序性能预测的M3(度量-度量-模型)工具链","authors":"Devidas Gawali, V. Apte","doi":"10.1145/2945408.2945414","DOIUrl":null,"url":null,"abstract":"Performance prediction of multi-tier applications is a critical step in the life-cycle of an application. However, the target hardware platform on which performance prediction is re- quired is often different from the testbed one on which the application performance can be measured, and is usually un- available for deployment and load testing of the application. In this paper, we present M3 , our Measure-Measure-Model method, which uses a pipeline of three tools to solve this problem. The tool-chain starts with AutoPerf, which mea- sures the CPU service demands of the application on the testbed. CloneGen then takes this and the number and size of network calls as input and generates a clone, whose CPU service demand matches the application’s. This clone is then deployed on the target, instead of the original application, since its code is simple, does not need a full database, and is thus easier to install. AutoPerf is used again to measure CPU service demand of the clone on the target, under light load generation. Finally, this service demand is fed into PerfCenter which is a multi-tier application performance modeling tool, which can then predict the application per- formance on the target under any workload. We validated the predictions made using the M3 tool-chain against direct measurement made on two applications - DellDVD and RU- BiS, on various combinations of testbed and target platforms (Intel and AMD servers) and found that in almost all cases, prediction error was less than 20%.","PeriodicalId":240965,"journal":{"name":"Proceedings of the 2nd International Workshop on Quality-Aware DevOps","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"The M3 (measure-measure-model) tool-chain for performance prediction of multi-tier applications\",\"authors\":\"Devidas Gawali, V. Apte\",\"doi\":\"10.1145/2945408.2945414\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Performance prediction of multi-tier applications is a critical step in the life-cycle of an application. However, the target hardware platform on which performance prediction is re- quired is often different from the testbed one on which the application performance can be measured, and is usually un- available for deployment and load testing of the application. In this paper, we present M3 , our Measure-Measure-Model method, which uses a pipeline of three tools to solve this problem. The tool-chain starts with AutoPerf, which mea- sures the CPU service demands of the application on the testbed. CloneGen then takes this and the number and size of network calls as input and generates a clone, whose CPU service demand matches the application’s. This clone is then deployed on the target, instead of the original application, since its code is simple, does not need a full database, and is thus easier to install. AutoPerf is used again to measure CPU service demand of the clone on the target, under light load generation. Finally, this service demand is fed into PerfCenter which is a multi-tier application performance modeling tool, which can then predict the application per- formance on the target under any workload. 
We validated the predictions made using the M3 tool-chain against direct measurement made on two applications - DellDVD and RU- BiS, on various combinations of testbed and target platforms (Intel and AMD servers) and found that in almost all cases, prediction error was less than 20%.\",\"PeriodicalId\":240965,\"journal\":{\"name\":\"Proceedings of the 2nd International Workshop on Quality-Aware DevOps\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-07-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2nd International Workshop on Quality-Aware DevOps\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/2945408.2945414\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2nd International Workshop on Quality-Aware DevOps","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2945408.2945414","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The M3 (measure-measure-model) tool-chain for performance prediction of multi-tier applications
Performance prediction of multi-tier applications is a critical step in the life-cycle of an application. However, the target hardware platform on which performance prediction is required is often different from the testbed on which the application's performance can be measured, and is usually unavailable for deployment and load testing of the application. In this paper, we present M3, our Measure-Measure-Model method, which uses a pipeline of three tools to solve this problem. The tool-chain starts with AutoPerf, which measures the CPU service demands of the application on the testbed. CloneGen then takes these demands, along with the number and size of network calls, as input and generates a clone whose CPU service demand matches the application's. This clone, rather than the original application, is then deployed on the target, since its code is simple, needs no full database, and is thus easier to install. AutoPerf is used again to measure the CPU service demand of the clone on the target under light load. Finally, this service demand is fed into PerfCenter, a multi-tier application performance modeling tool, which then predicts the application's performance on the target under any workload. We validated the predictions made with the M3 tool-chain against direct measurements on two applications, DellDVD and RUBiS, over various combinations of testbed and target platforms (Intel and AMD servers), and found that in almost all cases the prediction error was below 20%.
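To make the measure-measure-model flow concrete, the following is a minimal orchestration sketch in Python. All function names (measure_service_demand, generate_clone, predict_performance) are hypothetical wrappers standing in for AutoPerf, CloneGen, and PerfCenter; they are not the tools' actual interfaces, only an illustration of how the three stages feed into one another.

```python
# Illustrative sketch of the M3 (Measure-Measure-Model) pipeline.
# All functions below are hypothetical placeholders for the three tools
# described in the abstract; they do not reflect the tools' real APIs.

from dataclasses import dataclass


@dataclass
class ServiceDemand:
    cpu_seconds_per_request: float  # CPU service demand per request


def measure_service_demand(app, platform) -> ServiceDemand:
    """AutoPerf-style step: measure CPU service demand of `app` on `platform`."""
    ...


def generate_clone(demand: ServiceDemand, num_network_calls: int, call_size_bytes: int):
    """CloneGen-style step: synthesize a lightweight clone whose CPU service
    demand and network behaviour match the original application's."""
    ...


def predict_performance(demand: ServiceDemand, workload):
    """PerfCenter-style step: model the multi-tier application and predict its
    performance on the target under the given workload."""
    ...


def m3_pipeline(app, testbed, target, workload, num_calls, call_size):
    # Measure: CPU service demand of the real application on the testbed.
    testbed_demand = measure_service_demand(app, testbed)
    # Generate a simple clone that mimics the demand but needs no full database,
    # and is therefore easy to deploy on the target.
    clone = generate_clone(testbed_demand, num_calls, call_size)
    # Measure: CPU service demand of the clone on the target, under light load.
    target_demand = measure_service_demand(clone, target)
    # Model: predict application performance on the target for any workload.
    return predict_performance(target_demand, workload)
```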