Ada Casanovas, Javier Alonso, J. Torres, A. Andrzejak
2009 Fifth International Conference on Autonomic and Autonomous Systems, 2009-04-20. DOI: 10.1109/ICAS.2009.53
Work in Progress: Building a Distributed Generic Stress Tool for Server Performance and Behavior Analysis
One of the primary tools for performance analysis of multi-tier systems is the standardized benchmark. Benchmarks are used to evaluate system behavior under different circumstances and to assess whether a system can handle real workloads in a production environment. They are also helpful for diagnosing situations in which a system performs unacceptably or even crashes: system administrators and developers use them to reproduce and analyze the circumstances that provoke errors or performance degradation. However, standardized benchmarks are usually constrained to simulating a set of pre-fixed workload distributions. We present a benchmarking framework that overcomes this limitation by generating realistic workloads from pre-recorded system traces. This distributed tool allows more realistic testing scenarios and thus exposes the behavior and limits of the tested system in greater detail. A further advantage of our framework is its flexibility: for example, it can be used to extend standardized benchmarks such as TPC-W, allowing them to incorporate workload distributions derived from real workloads.
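The core idea of trace-driven load generation can be illustrated with a minimal sketch. The abstract does not describe the framework's implementation, so everything below is an assumption: a hypothetical `replay_trace` helper that takes a pre-recorded trace of `(timestamp, request)` pairs and re-issues the requests while preserving the original inter-arrival times, optionally accelerated by a `speedup` factor for stress testing.

```python
import time
from typing import Callable, List, Tuple


def replay_trace(trace: List[Tuple[float, str]],
                 send: Callable[[str], None],
                 speedup: float = 1.0) -> None:
    """Replay a pre-recorded trace, preserving inter-arrival times.

    trace   -- list of (timestamp_seconds, request) pairs, sorted by time
    send    -- callback that issues one request against the system under test
    speedup -- >1.0 compresses the timeline to increase load intensity

    Hypothetical illustration of trace-driven workload generation; the
    paper's actual framework is distributed and not specified here.
    """
    if not trace:
        return
    prev_ts = trace[0][0]
    for ts, request in trace:
        # Sleep for the (scaled) gap between this request and the previous one,
        # so the replayed workload follows the recorded arrival distribution.
        delay = (ts - prev_ts) / speedup
        if delay > 0:
            time.sleep(delay)
        prev_ts = ts
        send(request)
```

A distributed deployment would run many such replayers in parallel, each fed a shard of the recorded trace; the single-process sketch only shows why replaying real traces reproduces arrival patterns that fixed statistical distributions cannot.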