Experience and Guidelines for Sorting Algorithm Choices and Their Energy Efficiency
Maximilian Meissner, Supriya Kamthania, Nishant Rawtani, James Bucek, K. Lange, Samuel Kounev
DOI: https://doi.org/10.1145/3491204.3527468
In: Companion of the 2022 ACM/SPEC International Conference on Performance Engineering (2022-07-14)

Abstract: Energy efficiency has become a major concern in the IT sector, as the energy demand of data centers is projected to reach 1 PWh per year by 2030. While hardware designers improve the energy efficiency of their products, software developers often do not consider, or are unaware of, the impact their design choices have on the energy consumed by their applications. Energy-efficiency improvements can, to a certain extent, be achieved through compiler optimizations; nonetheless, software developers should still make reasonable design choices to improve energy efficiency further. In this paper, we present the energy efficiency of common sorting algorithms under different pre-sorting conditions. Previous work in this field considered only randomized data; we expand on it by measuring the sorting algorithms' energy efficiency when the data is already 20% or 50% pre-sorted. Our experience is a case study intended to demonstrate the effect that simple design choices, such as the selection of an algorithm and its implementation, can have on energy efficiency. It is aimed at industry practitioners, offering guidelines to help them select a more energy-efficient algorithm for the problem at hand. Our results can also serve as an incentive to make energy efficiency a non-functional requirement in tenders, and as motivation for researchers to include energy efficiency as an additional quality criterion when studying the properties of algorithms.
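The paper's measurement setup is not reproduced here, but the notion of "partially pre-sorted" input can be illustrated with a minimal Python sketch (hypothetical, not from the paper): generate inputs where a chosen fraction of elements stay in sorted position, then time a sort on each. Wall-clock time is used only as a rough proxy; the paper measures energy, which this sketch does not.

```python
import random
import time

def partially_sorted(n, sorted_fraction, seed=0):
    """Return a permutation of 0..n-1 where roughly `sorted_fraction`
    of the elements remain in their sorted position and the rest are
    shuffled among themselves."""
    rng = random.Random(seed)
    data = list(range(n))
    unsorted_idx = rng.sample(range(n), int(n * (1 - sorted_fraction)))
    values = [data[i] for i in unsorted_idx]
    rng.shuffle(values)
    for i, v in zip(unsorted_idx, values):
        data[i] = v
    return data

def time_sort(sort_fn, data, repeats=5):
    """Best-of-N timing of `sort_fn` on a fresh copy of `data`."""
    best = float("inf")
    for _ in range(repeats):
        copy = list(data)
        t0 = time.perf_counter()
        sort_fn(copy)
        best = min(best, time.perf_counter() - t0)
    return best

for frac in (0.0, 0.2, 0.5):
    data = partially_sorted(50_000, frac)
    t = time_sort(lambda xs: xs.sort(), data)
    print(f"{int(frac * 100):>3}% pre-sorted: {t:.4f}s")
```

Adaptive algorithms such as Timsort (Python's built-in sort) typically benefit from pre-sorted runs, which is exactly the kind of input-dependent behavior the paper's guidelines address.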
B-MEG: Bottlenecked-Microservices Extraction Using Graph Neural Networks
Gagan Somashekar, Anurag Dutt, R. Vaddavalli, Sai Bhargav Varanasi, Anshul Gandhi
DOI: https://doi.org/10.1145/3491204.3527494
In: Companion of the 2022 ACM/SPEC International Conference on Performance Engineering (2022-07-14)

Abstract: The microservices architecture enables independent development and maintenance of application components through its fine-grained and modular design. This has enabled rapid adoption of the microservices architecture for building latency-sensitive online applications. In such online applications, it is critical to detect and mitigate sources of performance degradation (bottlenecks). However, the modular design of the microservices architecture leads to a large graph of interacting microservices whose influence on each other is non-trivial. In this preliminary work, we explore the effectiveness of Graph Neural Network models in detecting bottlenecks. Preliminary analysis shows that our framework, B-MEG, produces promising results, especially for applications with complex call graphs. B-MEG shows up to 15% and 14% improvements in accuracy and precision, respectively, and close to a 10× increase in recall for detecting bottlenecks compared to the technique used in existing work on bottleneck detection in microservices.
Benchmarking Runtime Scripting Performance in Wasmer
Devon Hockley, C. Williamson
DOI: https://doi.org/10.1145/3491204.3527477
In: Companion of the 2022 ACM/SPEC International Conference on Performance Engineering (2022-07-14)

Abstract: In this paper, we explore the use of Wasmer and WebAssembly (WASM) as a sandboxed environment for general-purpose runtime scripting. Our work differs from prior research focusing on browser-based performance or SPEC benchmarks. In particular, we use micro-benchmarks and a macro-benchmark (both written in Rust) to compare execution times between WASM and native mode. We first measure which elements of script execution have the largest performance impact, using simple micro-benchmarks. Then we consider a Web proxy caching simulator, with different cache replacement policies, as a macro-benchmark. Using this simulator, we demonstrate a 5-10× performance penalty for WASM compared to native execution.
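The paper's benchmarks are written in Rust and target Wasmer; neither is reproduced here. As a hypothetical illustration of the kind of workload the macro-benchmark exercises, here is a minimal Python sketch of a cache simulator with an LRU replacement policy (all names are this sketch's own):

```python
from collections import OrderedDict

def simulate_lru(requests, capacity):
    """Replay a request trace against an LRU cache of the given
    capacity and return the hit ratio."""
    cache = OrderedDict()
    hits = 0
    for key in requests:
        if key in cache:
            hits += 1
            cache.move_to_end(key)  # mark as most recently used
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict least recently used
            cache[key] = None
    return hits / len(requests)

trace = [1, 2, 3, 1, 2, 4, 1, 5, 2, 1]
print(simulate_lru(trace, capacity=3))  # → 0.4
```

Such a simulator is dominated by hash lookups and ordered-container updates, which is one reason it makes a useful macro-benchmark for comparing sandboxed and native execution.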
SPEC Efficiency Benchmark Development: How to Contribute to the Future of Energy Conservation
Maximilian Meissner, K. Lange, J. Arnold, Sanjay Sharma, Roger Tipley, Nishant Rawtani, D. Reiner, Mike Petrich, Aaron Cragin
DOI: https://doi.org/10.1145/3491204.3527492
In: Companion of the 2022 ACM/SPEC International Conference on Performance Engineering (2022-07-14)

Abstract: A driving force behind the improvement of server efficiency in recent years is the use of SPEC benchmarks. They are used in mandatory government regulations, the ISO/IEC 21836:2020 standard, and product marketing, giving server manufacturers and buyers significant incentive to improve energy efficiency. To produce relevant results, benchmarks need to take into account future trends in hardware and software development, such as the introduction of new accelerators and workloads. To keep pace with the fast-moving IT landscape, SPEC plans to introduce a workload bounty program to encourage researchers to develop novel workloads. Submitted workloads will be considered for inclusion in future SPEC Efficiency benchmarks and rewarded. In this paper, we outline the process of energy-efficiency benchmark development. SPEC ensures the development of high-quality benchmarks for government regulations through its extensive experience and collaboration with stakeholders from industry, academia, and governments. One of the tools that emerged from this process is the Chauffeur Worklet Development Kit (WDK), which researchers can use to develop next-generation workloads that enhance the real-world relevance of future SPEC benchmarks, a critical element for the benchmarks to contribute to future energy conservation.
Measuring Baseline Overheads in Different Orchestration Mechanisms for Large FaaS Workflows
George Kousiouris, Chris Giannakos, K. Tserpes, Teta Stamati
DOI: https://doi.org/10.1145/3491204.3527467
In: Companion of the 2022 ACM/SPEC International Conference on Performance Engineering (2022-07-14)

Abstract: Serverless environments have attracted significant attention in recent years as a result of their agility in execution as well as their inherent scaling capabilities as a cloud-native execution model. While extensive analysis has been performed on various critical performance aspects of these environments, such as cold-start times, the aspect of workflow orchestration delays has been neglected. Given that the paradigm has matured in recent years and application complexity has started to rise from a few functions to more complex application structures, the delay incurred in orchestrating these functions may become severe. In this work, OpenWhisk, one of the main open-source FaaS platforms, is used to measure and investigate its orchestration delays for the platform's main sequence operator. These are compared to the delays incurred when orchestrating functions through two alternative means, including the execution of orchestrator logic functions in supporting runtimes based on Node-RED. The delays introduced by each orchestration mode are measured and modeled, and boundary points for selecting between the modes are presented, based on the number and expected duration of the functions that constitute the workflow. Indicatively, in certain cases the orchestration overhead ranges from 0.29% to 235% of the useful computation time of the workflow functions. The results can extend simulation and estimation mechanisms with information on orchestration overheads.
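The overhead figures quoted above relate orchestration delay to useful computation time. A minimal Python sketch (hypothetical, not the paper's tooling) of that calculation for a sequential workflow:

```python
def orchestration_overhead(makespan_s, function_times_s):
    """Orchestration overhead as a percentage of the useful (function)
    computation time in a sequential workflow: everything in the
    end-to-end makespan that is not function execution is attributed
    to the orchestrator."""
    compute = sum(function_times_s)
    return 100.0 * (makespan_s - compute) / compute

# A 4-function sequence: each function runs 0.5 s,
# measured end-to-end makespan is 2.4 s -> ~20% overhead.
print(orchestration_overhead(2.4, [0.5, 0.5, 0.5, 0.5]))
```

With many short functions, the per-transition orchestration delay dominates the useful work, which is consistent with overheads reaching well above 100% for fast functions.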
TaskFlow
L. Versluis, A. Iosup
DOI: https://doi.org/10.1145/3491204.3527466
In: Companion of the 2022 ACM/SPEC International Conference on Performance Engineering (2022-07-14)

Abstract: Datacenters need to become more power efficient for political and climate reasons. In this work, we introduce an idea for the community to explore further. We embed the idea in TaskFlow: a makespan-conservative, energy-aware task placement policy for workflow scheduling. Using rough static estimates and simulation, we obtain energy savings of 4.24-47.00% and 0.1-13.6%, respectively. We also present some pitfalls that should be investigated further, notably starvation of large tasks when using TaskFlow.
FaaSET
R. Cordingly, W. Lloyd
DOI: https://doi.org/10.1145/3491204.3527464
In: Companion of the 2022 ACM/SPEC International Conference on Performance Engineering (2022-07-14)

Abstract: Function-as-a-Service platforms require developers to use many different tools and services for function development, packaging, deployment, debugging, testing, orchestration of experiments, and analysis of results. Diverse toolchains are necessary due to the differences in how each platform is designed, the technologies they support, and the APIs they provide, leading to usability challenges for developers. To combine support for all of these tasks and tools into a unified workspace, we created the FaaS Experiment Toolkit (FaaSET). At the core of FaaSET is a Jupyter notebook development environment that enables developers to write functions, deploy them across multiple platforms, invoke and test them, automate experiments, and perform data analysis, all in a single environment.
Automated Triage of Performance Change Points Using Time Series Analysis and Machine Learning: Data Challenge Paper
A. Bauer, Martin Straesser, Lukas Beierlieb, Maximilian Meissner, Samuel Kounev
DOI: https://doi.org/10.1145/3491204.3527486
In: Companion of the 2022 ACM/SPEC International Conference on Performance Engineering (2022-07-14)

Abstract: Performance regression testing is a foundation of modern DevOps processes and pipelines. Thus, the detection of change points, i.e., updates or commits that cause a significant change in the performance of the software, is of special importance. Typically, validating potential change points relies on humans, which is a considerable bottleneck and costs time and effort. This work proposes a solution for classifying and detecting change points automatically. On the performance test data set provided by MongoDB, our approach classifies potential change points with an AUC of 95.8% and an accuracy of 94.3%, whereas the detection and classification of change points based on the previous and current commits exhibits an AUC of 92.0% and an accuracy of 84.3%. In both cases, our approach can save time-consuming and costly human work.
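The paper's machine-learning pipeline is not shown here, but the underlying notion of a change point, a commit after which a performance metric shifts to a new level, can be illustrated with a simple mean-shift heuristic in Python (a hypothetical sketch, far simpler than the paper's approach):

```python
from statistics import mean, stdev

def detect_change_points(series, window=5, threshold=3.0):
    """Flag indices where the mean of the next `window` samples deviates
    from the mean of the previous `window` samples by more than
    `threshold` standard deviations of the preceding window."""
    flagged = []
    for i in range(window, len(series) - window + 1):
        before = series[i - window:i]
        after = series[i:i + window]
        spread = max(stdev(before), 1e-9)  # guard against zero spread
        if abs(mean(after) - mean(before)) > threshold * spread:
            flagged.append(i)
    return flagged

# Per-commit latencies (ms) with a regression introduced at index 6:
latencies = [100, 101, 99, 100, 102, 100, 150, 151, 149, 150, 152, 150]
print(detect_change_points(latencies, window=5))  # → [5, 6]
```

A sliding window straddling the shift can fire at adjacent indices, as it does here; this kind of ambiguity is one reason validating candidate change points usually needs the further triage the paper automates.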
Design-time Performability Optimization of Runtime Adaptation Strategies
Martina Rapp, Max Scheerer, Ralf H. Reussner
DOI: https://doi.org/10.1145/3491204.3527471
In: Companion of the 2022 ACM/SPEC International Conference on Performance Engineering (2022-07-14)

Abstract: Self-Adaptive Systems (SASs) adapt themselves to environmental changes at runtime to maintain Quality of Service (QoS) goals. Designing and optimizing the adaptation strategy of an SAS with regard to its impact on quality properties is a challenging problem. Usually, the design space of adaptation strategies is too large to be explored manually and hence requires automated support to find optimal strategies. Most approaches address this problem with optimization at runtime, which requires that the system already be implemented. However, one expects design-time-optimized adaptation strategies to maintain QoS goals more effectively than purely runtime-optimized strategies. Formal guarantees also benefit from explicitly designed and analyzed strategies. We claim that design-time analysis and optimization of adaptation strategies particularly improve quality properties such as performability. To address the research gap between runtime optimization and the ability to make statements about the achieved quality, we envision an approach that builds upon the concept of Model-Based Quality Analysis (MBQA). Many approaches in MBQA address single aspects such as formal languages for adaptation strategies, architectural description languages, or QoS prediction. However, they lack integration, which leads, for example, to prediction approaches that assume rather static systems. In this paper, we envision a unified approach that considers several sub-approaches as building blocks for performability-based optimization of adaptation strategies at design time.
How is Transient Behavior Addressed in Practice?: Insights from a Series of Expert Interviews
S. Beck, Sebastian Frank, Alireza Hakamian, André van Hoorn
DOI: https://doi.org/10.1145/3491204.3527483
In: Companion of the 2022 ACM/SPEC International Conference on Performance Engineering (2022-07-14)

Abstract: Transient behavior occurs when a running software system changes from one steady state to another. In microservice systems, such disruptions can be caused, for example, by continuous deployment, self-adaptation, and various failures. Although transient behavior could be captured in non-functional requirements, little is known about how this is handled in practice. Our objective was to study how architects and engineers approach runtime disruptions, which challenges they face, whether or not they specify transient behavior, and how currently employed tools and methods can be improved. To this end, we conducted semi-structured interviews with five experienced practitioners from major companies in Germany. We found that a major challenge in industry is a lack of awareness of transient behavior among software stakeholders. Consequently, they often do not consider specifying it in non-functional requirements. Additionally, better tooling is needed to reduce the effort of analyzing transient behavior. We present two prototypes that we developed in response to these findings to improve the current situation. Beyond that, the insights we present can serve as pointers to interesting research directions for other researchers.