{"title":"Performance Antipattern Detection through fUML Model Library","authors":"Davide Arcelli, L. Berardinelli, Catia Trubiani","doi":"10.1145/2693561.2693565","DOIUrl":"https://doi.org/10.1145/2693561.2693565","url":null,"abstract":"Identifying performance problems is critical in the software design, mostly because the results of performance analysis (i.e., mean values, variances, and probability distributions) are difficult to be interpreted for providing feedback to software designers. Performance antipatterns support the interpretation of performance analysis results and help to fill the gap between numbers and design alternatives.\u0000 In this paper, we present a model-driven framework that enables an early detection of performance antipatterns, i.e., without generating performance models. Specific design features (e.g., the number of sent messages) are monitored while simulating the specified software model, in order to point out the model elements that most likely contribute for performance flaws. To this end, we propose to use fUML models instrumented with a reusable library that provides data structures (as Classes) and algorithms (as Activities) to detect performance antipatterns while simulating the fUML model itself. A case study is provided to show our framework at work, its current capabilities and future challenges.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116230972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Software Performance Engineering Then and Now: A Position Paper","authors":"C. U. Smith","doi":"10.1145/2693561.2693567","DOIUrl":"https://doi.org/10.1145/2693561.2693567","url":null,"abstract":"Software Performance Engineering (SPE) is about developing software systems that meet performance requirements. It is a proactive approach that uses quantitative techniques to predict the performance of software early in design to identify viable options and eliminate unsatisfactory ones before implementation begins. Despite its effectiveness, performance problems continue to occur. This position paper examines the evolution of SPE. It often helps to re-examine history to see if it yields insights into the future. It concludes with some thoughts about future directions.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121573734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Challenges in Integrating the Analysis of Multiple Non-Functional Properties in Model-Driven Software Engineering","authors":"D. Petriu","doi":"10.1145/2693561.2693566","DOIUrl":"https://doi.org/10.1145/2693561.2693566","url":null,"abstract":"This vision paper discusses the challenges of integrating the analysis of multiple Non-Functional Properties (NFP) in the model-driven software engineering process, where formal analysis models are generated by model transformations from annotated software models. The paper proposes an integration approach based on an ecosystem of inter-related heterogeneous modeling artifacts intended to support consistent co-evolution of the software and analysis models, cross-model traceability, incremental propagation of changes across models and (semi)automated software process steps. Another goal is to investigate new metaheuristics approaches for reducing the size of the design space to be explored in the search for a design solution that will meet all the non-functional requirements.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121418196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integrating Formal Timing Analysis in the Real-Time Software Development Process","authors":"R. Henia, L. Rioux, Nicolas Sordon, G. Garcia, Marco Panunzio","doi":"10.1145/2693561.2693562","DOIUrl":"https://doi.org/10.1145/2693561.2693562","url":null,"abstract":"When designing complex real-time software, it is very difficult to predict how design decisions may impact the system timing behavior. Usually, the industrial practices rely on the subjective judgment of experienced software architects and developers. This is however risky since eventual timing errors are only detected after implementation and integration, when the software execution can be tested on system level, under realistic conditions. At this stage, timing errors may be very costly and time consuming to correct. Therefore, to overcome this problem we need an efficient, reliable and automated timing estimation method applicable already at early design stages and continuing throughout the whole development cycle. Formal timing analysis appears at first sight to be the adequate candidate for this purpose. However, its use in the industry is conditioned by a smooth and seamless integration in the software development process. This is not an easy task due to the semantic mismatches between the design and analysis models but also due to the missing link between the analysis and the testing phase after code implementation. 
In this paper, we present a timing analysis framework we developed in the context of the industrial design of satellite on-board software, allowing an early integration and full automation of formal timing verification activities in the development process of real-time embedded software, as a mean to decrease the design time and reduce the risks of costly timing failures.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117345485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Autoperf: Workflow Support for Performance Experiments","authors":"Xiaoguang Dai, B. Norris, A. Malony","doi":"10.1145/2693561.2693569","DOIUrl":"https://doi.org/10.1145/2693561.2693569","url":null,"abstract":"Many excellent open-source and commercial tools enable the detailed measurement of the performance attributes of applications. However, the process of collecting measurement data and analyzing it remains effort-intensive because of differences in tool interfaces and architectures. Furthermore, insufficient standards and automation may result in losing information about experiments, which may in turn lead to misinterpretation of the data and analysis results. Autoperf aims to support the entire workflow in performance measurement and analysis in a uniform and portable fashion, enabling both better productivity through automation of data collection and analysis and experiment reproducibility.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121649187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Runtime Performance Challenges in Big Data Systems","authors":"John Klein, I. Gorton","doi":"10.1145/2693561.2693563","DOIUrl":"https://doi.org/10.1145/2693561.2693563","url":null,"abstract":"Big data systems are becoming pervasive. They are distributed systems that include redundant processing nodes, replicated storage, and frequently execute on a shared 'cloud' infrastructure. For these systems, design-time predictions are insufficient to assure runtime performance in production. This is due to the scale of the deployed system, the continually evolving workloads, and the unpredictable quality of service of the shared infrastructure. Consequently, a solution for addressing performance requirements needs sophisticated runtime observability and measurement. Observability gives real-time insights into a system's health and status, both at the system and application level, and provides historical data repositories for forensic analysis, capacity planning, and predictive analytics. Due to the scale and heterogeneity of big data systems, significant challenges exist in the design, customization and operations of observability capabilities. These challenges include economical creation and insertion of monitors into hundreds or thousands of computation and data nodes, efficient, low overhead collection and storage of measurements (which is itself a big data problem), and application-aware aggregation and visualization. 
In this paper we propose a reference architecture to address these challenges, which uses a model-driven engineering toolkit to generate architecture-aware monitors and application-specific visualizations.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"316 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116597131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards a DevOps Approach for Software Quality Engineering","authors":"Juan F. Pérez, Weikun Wang, G. Casale","doi":"10.1145/2693561.2693564","DOIUrl":"https://doi.org/10.1145/2693561.2693564","url":null,"abstract":"DevOps is a novel trend in software engineering that aims at bridging the gap between development and operations, putting in particular the developer in greater control of deployment and application runtime. Here we consider the problem of designing a tool capable of providing feedback to the developer on the performance, reliability, and in general quality characteristics of the application at runtime. This raises a number of questions related to what measurement information should be carried back from runtime to design-time and what degrees of freedom should be provided to the developer in the evaluation of performance data. To answer these questions, we describe the design of a filling-the-gap (FG) tool, a software system capable of automatically analyzing performance data either directly or through statistical inference. A natural application of the FG tool is the continuous training of stochastic performance models, such as layered queueing networks, that can inform developers on how to refactor the software architecture.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124648382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Beyond Simulation: Composing Scalability, Elasticity, and Efficiency Analyses from Preexisting Analysis Results","authors":"Sebastian Lehrig, Steffen Becker","doi":"10.1145/2693561.2693568","DOIUrl":"https://doi.org/10.1145/2693561.2693568","url":null,"abstract":"In cloud computing, typical requirements of Software-as-a-Service (SaaS) applications target scalability, elasticity, and efficiency. To analyze such properties, software engineers need efficient specifications that acknowledge for uncertainties within the underlying cloud computing environment. However, existing analysis specifications are based on simulating the system as a whole, which is inefficient and requires full knowledge of the underlying environment.\u0000 To cope with this problem, we envision to structure systems in independent operations, each annotated with novel scalability, elasticity, and efficiency attributes from preexisting analyses, e.g., conducted by engineers that had sufficient knowledge of the environment. Such attributes enable highly efficient compositional analyses of the system as a whole. In this vision paper, we describe our initial ideas for our new composition approach based on a simple running example.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114887502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptivity metric and performance for restart strategies in web services reliable messaging","authors":"P. Reinecke, K. Wolter","doi":"10.1145/1383559.1383585","DOIUrl":"https://doi.org/10.1145/1383559.1383585","url":null,"abstract":"Adaptivity, the ability of a system to adapt itself to its environment, is a key property of autonomous systems. In his paper we propose a benefit-based framework for the efinition of metrics to measure adaptivity. We demonstrate the application of the framework in a case study of the adaptivity of restart strategies for Web Services Reliable Messaging (WSRM). Using the framework, we define two adaptivity metrics for a fault-injection-driven evaluation of the adaptivity of three restart strategies in aWSRM implementation. The adaptivity measurements are complemented by a thorough discussion of the performance of the restart strategies.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115681570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Coupled model transformations","authors":"Steffen Becker","doi":"10.1145/1383559.1383573","DOIUrl":"https://doi.org/10.1145/1383559.1383573","url":null,"abstract":"Model-driven performance prediction methods use abstract design models to predict the performance of the modelled system during early development stages. However, performance is an attribute of the running system and not its model. The system contains many implementation details not part of its model but still affecting the performance at run-time. Existing approaches neglect details of the implementation due to the abstraction underlying the design model. Completion components [26] deal with this problem, however, they have to be added manually to the prediction model. In this work, we assume that the system's implementation is generated by a chain of model transformations. In this scenario, the transformation rules determine the transformation result. By analysing these transformation rules, a second transformation can be derived which automatically adds details to the prediction model according to the encoded rules. We call this transformation a coupled transformation as it is coupled to an corresponding model-to-code transformation. It uses the knowledge on the output of the model-to-code transformation to increase performance prediction accuracy. The introduced coupled transformations method is validated in a case study in which a parametrised transformation maps abstract component connectors to realisations in different RPC calls. 
In this study, the corresponding coupled transformation captures the RPC's details with a prediction error of less than 5%.","PeriodicalId":235512,"journal":{"name":"Workshop on Software and Performance","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129813298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}