{"title":"Service Demand Modeling and Prediction with Single-user Performance Tests","authors":"A. Kattepur, M. Nambiar","doi":"10.1145/2998476.2998483","DOIUrl":null,"url":null,"abstract":"Performance load tests of online transaction processing (OLTP) applications are expensive in terms of manpower, time and costs. Alternative performance modeling and prediction tools are required to generate accurate outputs with minimal input sample points. Service Demands (time needed to serve 1 request at queuing stations) are typically needed as inputs by most performance models. However, as service demands vary as a function of workload, models that input singular service demands produce erroneous predictions. The alternative, which is to collect service demands at varying workloads, require time and resource intensive load tests to estimate multiple sample points -- this defeats the purpose of performance modeling for industrial use. In this paper, we propose a service demand model as a function of concurrency that can be estimated with a single-user performance test. Further, we analyze multiple CPU performance metrics (cache hits/misses, branch prediction, context switches and so on) using Principal Component Analysis (PCA) to extract a regression function of service demand with increasing workloads. We use the service demand models as input to performance prediction algorithms such as Mean Value Analysis (MVA), to accurately predict throughput at varying workloads. This service demand prediction model uses CPU hardware counters, which is used in conjunction with a modified version of MVA with single-user service demand inputs. The predicted throughput values are within 9% deviation with measurements procured for a variety of application/hardware configurations. Such a service demand model is a step towards reducing reliance on conventional load testing for performance assurance.","PeriodicalId":171399,"journal":{"name":"Proceedings of the 9th Annual ACM India Conference","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 9th Annual ACM India Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2998476.2998483","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1
Abstract
Performance load tests of online transaction processing (OLTP) applications are expensive in terms of manpower, time and cost. Alternative performance modeling and prediction tools are required to generate accurate outputs from minimal input sample points. Service demands (the time needed to serve one request at a queuing station) are typically required as inputs by most performance models. However, because service demands vary as a function of workload, models that take a single service demand value as input produce erroneous predictions. The alternative, collecting service demands at varying workloads, requires time- and resource-intensive load tests to estimate multiple sample points -- this defeats the purpose of performance modeling for industrial use. In this paper, we propose a service demand model, expressed as a function of concurrency, that can be estimated with a single-user performance test. Further, we analyze multiple CPU performance metrics (cache hits/misses, branch prediction, context switches and so on) using Principal Component Analysis (PCA) to extract a regression function of service demand with increasing workload. We use these service demand models as input to performance prediction algorithms such as Mean Value Analysis (MVA) to accurately predict throughput at varying workloads. The service demand prediction model is built from CPU hardware counters and is used in conjunction with a modified version of MVA that takes single-user service demand inputs. The predicted throughput values are within 9% of measurements obtained for a variety of application/hardware configurations. Such a service demand model is a step towards reducing reliance on conventional load testing for performance assurance.
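The abstract describes feeding a concurrency-dependent service demand model into Mean Value Analysis to predict throughput. The sketch below is a minimal Python illustration of that idea, not the authors' implementation: it runs exact MVA for a closed queueing network while letting each station's service demand grow with the number of users. The linear growth function and all parameter names (base_demands, alphas, think_time) are assumptions standing in for the counter-based regression described in the paper.

```python
# Minimal sketch: exact MVA for a closed network with service demands that
# depend on the concurrency level N. The linear model
#   D_k(N) = D_k(1) * (1 + alpha_k * (N - 1))
# is a hypothetical placeholder for the regression the paper derives from
# CPU hardware counters via PCA.

def mva_with_load_dependent_demands(base_demands, alphas, think_time, max_users):
    """Return predicted throughput X(N) for N = 1..max_users.

    base_demands : single-user service demands D_k(1) per station (seconds)
    alphas       : hypothetical per-station growth coefficients
    think_time   : client think time Z (seconds)
    max_users    : highest concurrency level to evaluate
    """
    K = len(base_demands)
    queue_len = [0.0] * K          # Q_k(0) = 0 for all stations
    throughputs = []
    for n in range(1, max_users + 1):
        # Service demand at this concurrency level (assumed linear growth).
        demands = [d * (1.0 + a * (n - 1)) for d, a in zip(base_demands, alphas)]
        # Residence time per station: R_k = D_k * (1 + Q_k(n-1)).
        residence = [d * (1.0 + q) for d, q in zip(demands, queue_len)]
        # System throughput: X(n) = n / (Z + sum_k R_k).
        x = n / (think_time + sum(residence))
        # Queue lengths for the next iteration: Q_k(n) = X(n) * R_k.
        queue_len = [x * r for r in residence]
        throughputs.append(x)
    return throughputs


if __name__ == "__main__":
    # Example: two stations (e.g. CPU and disk) with single-user demands of
    # 5 ms and 2 ms, a small assumed growth per added user, and 1 s think time.
    xs = mva_with_load_dependent_demands([0.005, 0.002], [0.01, 0.0], 1.0, 50)
    print(xs[0], xs[-1])
```

With alphas set to zero this reduces to standard MVA with constant service demands; the nonzero coefficients show how a workload-dependent demand model changes the predicted throughput curve at higher concurrency.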