An empirical study on the performance overhead of code instrumentation in containerised microservices

IF 4.1 · Zone 2, Computer Science · Q1, Computer Science, Software Engineering
Yasmeen Hammad, Amro Al-Said Ahmad, Peter Andras
{"title":"An empirical study on the performance overhead of code instrumentation in containerised microservices","authors":"Yasmeen Hammad ,&nbsp;Amro Al-Said Ahmad ,&nbsp;Peter Andras","doi":"10.1016/j.jss.2025.112573","DOIUrl":null,"url":null,"abstract":"<div><div>Code instrumentation is vital for analysing software behaviour and facilitating cloud computing observability and monitoring, especially in microservices and containers. Despite its benefits, instrumentation introduces complexity and performance overhead, which may inadvertently slow down systems and cause unexpected or erratic behaviour. In this study, we examine the effect of automated code instrumentation on the performance of containerised microservices by comparing instrumented systems against a baseline without instrumentation. Our experimental framework is based on key performance metrics, including response time, latency, throughput, and error percentage. It is executed using a rigorous methodology with a warm-up strategy to mitigate cold-start effects. Over 5000 experiments were conducted on 70 microservice APIs drawn from two open-source applications hosted on AWS and Azure to compare the results with baseline data. The experimental analysis comprises three stages: a pilot study on AWS, a case study on AWS and Azure, and an outlier analysis of the experimental results. Overall throughput decreased by up to 8.40 %, with some individual cases experiencing up to a 30 % reduction compared to the baseline, and response time and latency dropped by 20–49 %. Moreover, the results show more outlier cases in instrumentation results than in the baseline. Additionally, the results reveal more outlier cases in the instrumentation results compared to the baseline. The instrumentation has led to unexpected or erratic behaviour, as indicated by higher variations in response time, latency, and throughput values, along with increased error rates and occasional outlier values that were not observed in the non-instrumented run. This indicates that the performance differences we observed are attributable to overhead introduced by instrumentation, rather than inherent inefficiencies within the APIs themselves. Furthermore, statistical analysis utilised the Wilcoxon Signed-Rank test and mean ratios, with multiple approaches validating significant performance differences between instrumented and baseline conditions for both cloud services. A significance analysis using Cohen’s d indicates that the throughput and response time reductions in both platforms are not only statistically significant but also suggest considerable operational impact. These findings offer insights into automated code instrumentation's performance and impact on containerised microservices. 
It highlights the need to develop better and less impactful instrumentation techniques, and possibly towards the development of a new approach for large-scale software development and deployment in cloud environments that facilitates efficient instrumentation by design.</div></div>","PeriodicalId":51099,"journal":{"name":"Journal of Systems and Software","volume":"230 ","pages":"Article 112573"},"PeriodicalIF":4.1000,"publicationDate":"2025-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Systems and Software","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0164121225002420","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0

Abstract

Code instrumentation is vital for analysing software behaviour and facilitating cloud computing observability and monitoring, especially in microservices and containers. Despite its benefits, instrumentation introduces complexity and performance overhead, which may inadvertently slow down systems and cause unexpected or erratic behaviour. In this study, we examine the effect of automated code instrumentation on the performance of containerised microservices by comparing instrumented systems against a baseline without instrumentation. Our experimental framework is based on key performance metrics, including response time, latency, throughput, and error percentage, and is executed using a rigorous methodology with a warm-up strategy to mitigate cold-start effects. Over 5000 experiments were conducted on 70 microservice APIs drawn from two open-source applications hosted on AWS and Azure, and the results were compared with baseline data. The experimental analysis comprises three stages: a pilot study on AWS, a case study on AWS and Azure, and an outlier analysis of the experimental results. Overall throughput decreased by up to 8.40 %, with some individual cases experiencing up to a 30 % reduction compared to the baseline, and response time and latency dropped by 20–49 %. Moreover, the results show more outlier cases in the instrumentation results than in the baseline. The instrumentation led to unexpected or erratic behaviour, as indicated by higher variation in response time, latency, and throughput values, along with increased error rates and occasional outlier values that were not observed in the non-instrumented runs. This indicates that the performance differences we observed are attributable to overhead introduced by instrumentation, rather than inherent inefficiencies within the APIs themselves. Furthermore, the statistical analysis utilised the Wilcoxon Signed-Rank test and mean ratios, with multiple approaches validating significant performance differences between instrumented and baseline conditions for both cloud services. A significance analysis using Cohen's d indicates that the throughput and response time reductions on both platforms are not only statistically significant but also suggest considerable operational impact. These findings offer insights into the performance impact of automated code instrumentation on containerised microservices. They highlight the need for better and less impactful instrumentation techniques, and possibly for a new approach to large-scale software development and deployment in cloud environments that facilitates efficient instrumentation by design.
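The abstract does not reproduce the measurement harness itself; the sketch below only illustrates how paired response-time samples with a warm-up phase could be collected for a single API, in the spirit of the described methodology. The endpoint URL, request counts, and warm-up length are hypothetical and not taken from the paper.

```python
import time
import requests

API_URL = "http://example.com/api/orders"  # hypothetical endpoint, not from the paper
WARMUP_REQUESTS = 50                       # discarded to mitigate cold-start effects
MEASURED_REQUESTS = 500                    # illustrative sample size per condition

def collect_response_times(url: str) -> list[float]:
    """Send requests sequentially and return response times (ms) for the measured phase only."""
    samples = []
    for i in range(WARMUP_REQUESTS + MEASURED_REQUESTS):
        start = time.perf_counter()
        resp = requests.get(url, timeout=10)
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        if i >= WARMUP_REQUESTS and resp.ok:   # skip warm-up samples and failed calls
            samples.append(elapsed_ms)
    return samples

# In the study design, the two conditions are separate deployments of the same API:
baseline = collect_response_times(API_URL)       # run against the non-instrumented deployment
instrumented = collect_response_times(API_URL)   # re-run after enabling automated instrumentation
```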
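The statistical comparison named in the abstract (Wilcoxon Signed-Rank test, mean ratios, and Cohen's d) could be applied to such paired per-API measurements along the following lines. This is a minimal sketch using scipy and numpy, not the authors' analysis script, and the throughput values in the example are invented for illustration only.

```python
import numpy as np
from scipy.stats import wilcoxon

def compare_conditions(baseline: np.ndarray, instrumented: np.ndarray) -> dict:
    """Paired comparison of one metric (e.g. per-API mean throughput) across conditions."""
    diff = instrumented - baseline
    stat, p_value = wilcoxon(instrumented, baseline)     # non-parametric paired test
    mean_ratio = instrumented.mean() / baseline.mean()   # e.g. 0.916 would mean an 8.4 % throughput drop
    cohens_d = diff.mean() / diff.std(ddof=1)            # paired-samples effect size (d_z)
    return {"wilcoxon_p": float(p_value), "mean_ratio": mean_ratio, "cohens_d": cohens_d}

# Hypothetical per-API throughput means (requests/s); values are illustrative only.
baseline_tp = np.array([210.0, 185.0, 330.0, 120.0, 95.0])
instrumented_tp = np.array([192.0, 171.0, 305.0, 111.0, 90.0])
print(compare_conditions(baseline_tp, instrumented_tp))
```

A negative Cohen's d here indicates lower values under instrumentation; interpreting whether that magnitude matters operationally is the kind of significance analysis the abstract refers to.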
Source Journal
Journal of Systems and Software
Category: Engineering & Technology – Computer Science: Theory & Methods
CiteScore: 8.60
Self-citation rate: 5.70 %
Articles published: 193
Review time: 16 weeks
Journal description: The Journal of Systems and Software publishes papers covering all aspects of software engineering and related hardware-software-systems issues. All articles should include a validation of the idea presented, e.g. through case studies, experiments, or systematic comparisons with other approaches already in practice. Topics of interest include, but are not limited to:
•Methods and tools for, and empirical studies on, software requirements, design, architecture, verification and validation, maintenance and evolution
•Agile, model-driven, service-oriented, open source and global software development
•Approaches for mobile, multiprocessing, real-time, distributed, cloud-based, dependable and virtualized systems
•Human factors and management concerns of software development
•Data management and big data issues of software systems
•Metrics and evaluation, data mining of software development resources
•Business and economic aspects of software development processes
The journal welcomes state-of-the-art surveys and reports of practical experience for all of these topics.