Proceedings of the Seventh International Workshop on Serverless Computing (WoSC7) 2021: Latest Publications

Is Function-as-a-Service a Good Fit for Latency-Critical Services?
Haoran Qiu, Saurabh Jha, Subho Sankar Banerjee, Archit Patke, Chen Wang, H. Franke, Z. Kalbarczyk, R. Iyer
DOI: 10.1145/3493651.3493666 | Published: 2021-12-06
Abstract: Function-as-a-Service (FaaS) is becoming an increasingly popular cloud-deployment paradigm for serverless computing that frees application developers from managing the infrastructure. At the same time, it allows cloud providers to assert control in workload consolidation, i.e., co-locating multiple containers on the same server, thereby achieving higher server utilization, often at the cost of higher end-to-end function request latency. Interestingly, a key aspect of serverless latency management has not been well studied: the trade-off between application developers' latency goals and the FaaS providers' utilization goals. This paper presents a multi-faceted, measurement-driven study of latency variation in serverless platforms that elucidates this trade-off space. We obtained production measurements by executing FaaS benchmarks on IBM Cloud and a private cloud to study the impact of workload consolidation, queuing delay, and cold starts on the end-to-end function request latency. We draw several conclusions from the characterization results. For example, increasing a container's allocated memory limit from 128 MB to 256 MB reduces the tail latency by 2× but has 1.75× higher power consumption and 59% lower CPU utilization.
Citations: 5
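To make the kind of measurement the study describes concrete, the sketch below times repeated synchronous invocations of an HTTP-triggered function and reports median and tail latency. The endpoint URL and request count are placeholders, not taken from the paper.

```python
# Hedged sketch: measure end-to-end request latency of a deployed FaaS
# function and report median and tail percentiles. FUNCTION_URL is a
# placeholder for any HTTP-triggered function endpoint.
import time
import statistics
import urllib.request

FUNCTION_URL = "https://example-faas-endpoint.invoke/my-function"  # placeholder
N_REQUESTS = 200

def invoke_once(url: str) -> float:
    """Return end-to-end latency of one synchronous invocation, in ms."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000.0

latencies = sorted(invoke_once(FUNCTION_URL) for _ in range(N_REQUESTS))
p50 = statistics.median(latencies)
p99 = latencies[int(0.99 * (len(latencies) - 1))]
print(f"p50 = {p50:.1f} ms, p99 = {p99:.1f} ms (tail/median = {p99 / p50:.1f}x)")
```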
SLA for Sequential Serverless Chains: A Machine Learning Approach
Mohamed Elsakhawy, M. Bauer
DOI: 10.1145/3493651.3493671 | Published: 2021-12-06
Abstract: Despite its vast potential, a challenge facing serverless computing's wide-scale adoption is the lack of Service Level Agreements (SLAs) for serverless platforms. This challenge is compounded when composition technologies are employed to construct large applications using chains of functions. Because a chain's performance depends on every function that forms it, a single function's sub-optimal performance can degrade the performance of the entire chain. This paper sheds light on this problem and provides a categorical classification of the factors that impact a serverless function's execution performance. We discuss the challenge of SLAs for serverless chains and present the results of leveraging FaaS2F, our proposed serverless SLA framework, to define SLAs for fixed-size and variable-size sequential serverless chains. The validation results demonstrate accuracy exceeding 79% in detecting sub-optimal executions.
Citations: 0
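The abstract does not spell out FaaS2F's internals, so the following is only a generic sketch of the underlying idea: learn a per-stage latency envelope for a fixed-size chain from past executions, then flag a run as sub-optimal when any stage breaks its envelope. All numbers and names are invented for illustration.

```python
# Hedged sketch (not FaaS2F itself): learn a simple per-stage latency
# envelope from past executions of a fixed-size chain, then flag a new
# execution as sub-optimal if any stage exceeds its envelope.
import statistics
from typing import List

def learn_envelope(history: List[List[float]], k: float = 3.0) -> List[float]:
    """history[i] holds observed latencies (ms) of stage i across past runs.
    Returns a per-stage threshold of mean + k * stdev."""
    return [statistics.mean(s) + k * statistics.pstdev(s) for s in history]

def is_suboptimal(execution: List[float], thresholds: List[float]) -> bool:
    """An execution is flagged if any stage latency breaks its threshold."""
    return any(lat > thr for lat, thr in zip(execution, thresholds))

# Example with a 3-function chain and made-up latencies.
history = [[12.0, 13.1, 11.8, 12.5], [40.2, 41.0, 39.7, 40.5], [8.1, 8.3, 7.9, 8.0]]
thresholds = learn_envelope(history)
print(is_suboptimal([12.3, 55.0, 8.1], thresholds))  # True: stage 2 is slow
```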
Implications of Alternative Serverless Application Control Flow Methods
Sterling D. Quinn, R. Cordingly, W. Lloyd
DOI: 10.1145/3493651.3493668 | Published: 2021-12-06
Abstract: Function-as-a-Service (FaaS) is a popular delivery model of serverless computing where developers upload code to be executed in the cloud as short-running stateless functions. Using smaller functions to decompose processing of larger tasks or workflows introduces the question of how to instrument application control flow to orchestrate an overall task or workflow. In this paper, we examine the implications of using different methods to orchestrate the control flow of a serverless data processing pipeline composed as a set of independent FaaS functions. We performed experiments on the AWS Lambda FaaS platform and compared how four different patterns of control flow impact the cost and performance of the pipeline. We investigate control flow using client orchestration, microservice controllers, event-based triggers, and state machines. Overall, we found that asynchronous methods led to lower orchestration costs, and that event-based orchestration incurred a performance penalty.
Citations: 7
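As an illustration of one of the four patterns compared (client orchestration), the sketch below has the client invoke each pipeline stage synchronously with boto3 and forward the output to the next stage; the function names are placeholders, not those used in the experiments.

```python
# Hedged sketch of the "client orchestration" pattern: the client calls
# each pipeline stage synchronously and forwards the result. Function
# names are placeholders.
import json
import boto3

lambda_client = boto3.client("lambda")
PIPELINE = ["extract-stage", "transform-stage", "load-stage"]  # placeholder names

def run_pipeline(initial_payload: dict) -> dict:
    payload = initial_payload
    for function_name in PIPELINE:
        response = lambda_client.invoke(
            FunctionName=function_name,
            InvocationType="RequestResponse",  # synchronous call
            Payload=json.dumps(payload).encode(),
        )
        payload = json.loads(response["Payload"].read())
    return payload
```

A state-machine variant of the same chain would instead be expressed as an AWS Step Functions definition, handing orchestration to the platform rather than the client.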
Beyond @CloudFunction: Powerful Code Annotations to Capture Serverless Runtime Patterns
Raffael Klingler, N. Trifunovic, Josef Spillner
DOI: 10.1145/3493651.3493669 | Published: 2021-12-06
Abstract: Simplicity in elastically scalable application development is a key concern addressed by the serverless computing paradigm, in particular the code-level Function-as-a-Service (FaaS). Various FaaSification frameworks have demonstrated that marking code methods to streamline their offloading as cloud functions offers a simple bridge to software engineering habits. As application complexity increases, more complex runtime patterns with background activities, such as keeping containerised cloud functions warm to ensure the absence of cold starts, usually require giving up on simplicity and instead investing effort into orchestrating infrastructure. By bringing infrastructure-as-code concepts into the function source via powerful code annotations, typical orchestration patterns can be simplified again. We evaluate this idea and demonstrate its practical feasibility with FaaS Fusion, an annotations library and transpiler framework for JavaScript.
Citations: 3
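FaaS Fusion itself targets JavaScript; for consistency with the other sketches here, the following is a hypothetical Python analogue of the annotation idea: a decorator that records deployment and keep-warm metadata alongside the function, which a (not shown) transpiler or deployer could turn into infrastructure code. The decorator name and its fields are invented.

```python
# Hedged, hypothetical analogue of annotation-driven deployment: a decorator
# records deployment and keep-warm metadata next to the code; a separate
# tool would read this registry to generate the corresponding infrastructure.
import functools

FUNCTION_REGISTRY = {}

def cloud_function(memory_mb=256, keep_warm_interval_s=None):
    """Mark a plain function as a cloud function and record runtime hints."""
    def decorator(fn):
        FUNCTION_REGISTRY[fn.__name__] = {
            "memory_mb": memory_mb,
            "keep_warm_interval_s": keep_warm_interval_s,  # periodic ping to avoid cold starts
        }
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@cloud_function(memory_mb=512, keep_warm_interval_s=300)
def resize_image(event):
    return {"status": "resized", "key": event.get("key")}

print(FUNCTION_REGISTRY["resize_image"])
```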
BIAS Autoscaler: Leveraging Burstable Instances for Cost-Effective Autoscaling on Cloud Systems
Jaime Dantas, Hamzeh Khazaei, Marin Litoiu
DOI: 10.1145/3493651.3493667 | Published: 2021-12-06
Abstract: Burstable instances have recently been introduced by cloud providers as a cost-efficient alternative for customers that do not require powerful machines for running their workloads. Unlike conventional instances, the CPU capacity of burstable instances is rate limited, but they can be boosted to their full capacity for short periods when needed. Currently, the majority of cloud providers offer this option as a cheaper solution for their clients. However, little research has been done on the practical usage of these CPU-limited instances. In this paper, we present a novel autoscaling solution that uses burstable instances along with regular instances to handle the queueing arising from traffic and flash crowds. We design BIAS Autoscaler, a state-of-the-art framework that leverages burstable and regular instances for cost-efficient autoscaling, and evaluate it on the Google Cloud Platform. We apply our framework to a real-world microservice workload and conduct extensive experimental evaluations using Google Compute Engine. Experimental results show that BIAS Autoscaler can reduce the overall cost by up to 25% and increase resource efficiency by 42% while maintaining the same service quality observed when using conventional instances only.
Citations: 2
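The abstract does not give the scaling algorithm, so the sketch below only illustrates the general idea of preferring burstable capacity for short bursts and falling back to regular instances for sustained load; all thresholds and data fields are invented.

```python
# Hedged sketch of the general idea (not the BIAS algorithm itself):
# absorb short bursts with cheaper burstable instances while they still
# have CPU credits, and add regular instances only for sustained load.
from dataclasses import dataclass

@dataclass
class BurstableInstance:
    cpu_credits: float  # remaining credits; 0 means the instance is rate-limited

def scaling_decision(queue_length: int, burstables: list, per_instance_capacity: int) -> str:
    needed = max(0, queue_length // per_instance_capacity)
    usable_burst = sum(1 for b in burstables if b.cpu_credits > 0)
    if needed == 0:
        return "no scaling needed"
    if needed <= usable_burst:
        return f"burst on {needed} burstable instance(s)"
    return f"burst on {usable_burst} burstable, add {needed - usable_burst} regular instance(s)"

fleet = [BurstableInstance(cpu_credits=12.0), BurstableInstance(cpu_credits=0.0)]
print(scaling_decision(queue_length=250, burstables=fleet, per_instance_capacity=100))
```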
SFL: A Compiler for Generating Stateful AWS Lambda Serverless Applications
Lukas Brand, Markus U. Mock
DOI: 10.1145/3493651.3493670 | Published: 2021-12-06
Abstract: Over the past couple of years, serverless computing has become a popular way of structuring and deploying applications in the cloud. However, several practical and research challenges remain. In this paper, we provide a first step toward addressing two open issues. We developed a simple extension language (SFL) and a compiler to enable software developers to write entire serverless applications as one piece. The compiler generates the necessary orchestration code that automatically binds several functions together. In addition, the SFL tools allow programmers to write stateful serverless functions, with the compiler generating the supporting cloud infrastructure for storing and accessing application state. We evaluate our system using simple benchmark programs, comparing the resulting performance to Azure Durable Functions, which directly supports statefulness. The execution times of our unoptimized code are only slightly worse than what we measure on the Azure platform, while overall execution times are considerably better, owing to better scheduling by AWS Lambda compared to Azure Durable Functions.
Citations: 3
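The SFL-generated code is not shown in the abstract; the sketch below illustrates the kind of supporting pattern such a compiler could emit for state handling on AWS Lambda, persisting state in DynamoDB between invocations. The table and key names are placeholders.

```python
# Hedged sketch of the pattern a stateful-function compiler could emit:
# load state from a key-value store at the start of an invocation and
# write it back at the end. Table and key names are placeholders.
import boto3

table = boto3.resource("dynamodb").Table("sfl-app-state")  # placeholder table

def handler(event, context):
    key = {"app_id": event["app_id"]}
    item = table.get_item(Key=key).get("Item", {})
    counter = int(item.get("counter", 0)) + 1          # the "state" carried across invocations
    table.put_item(Item={**key, "counter": counter})   # persist for the next invocation
    return {"counter": counter}
```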
Towards Demystifying Intra-Function Parallelism in Serverless Computing
M. Kiener, Mohak Chadha, M. Gerndt
DOI: 10.1145/3493651.3493672 | Published: 2021-10-22
Abstract: Serverless computing offers a pay-per-use model with high elasticity and automatic scaling for a wide range of applications. Since cloud providers abstract most of the underlying infrastructure, these services work similarly to black boxes. As a result, users can influence the resources allocated to their functions, but might not be aware that they have to parallelize them to profit from the additionally allocated virtual CPUs (vCPUs). In this paper, we analyze the impact of parallelization within a single function and container instance for AWS Lambda, Google Cloud Functions (GCF), and Google Cloud Run (GCR). We focus on compute-intensive workloads since they benefit greatly from parallelization. Furthermore, we investigate the correlation between the number of allocated CPU cores and vCPUs in serverless environments. Our results show that the number of cores available to a function/container instance does not always equal the number of allocated vCPUs. By parallelizing serverless workloads, we observed cost savings of up to 81% for AWS Lambda, 49% for GCF, and 69.8% for GCR.
Citations: 11
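A minimal sketch of intra-function parallelism of the kind the paper measures: split a compute-heavy task across the cores visible inside one function instance. On AWS Lambda, multiprocessing.Pool is commonly reported to be unavailable (no /dev/shm), so the sketch uses Process and Pipe; the workload itself is a placeholder.

```python
# Hedged sketch of intra-function parallelism inside one function instance.
import os
from multiprocessing import Process, Pipe

def busy_work(chunk, conn):
    conn.send(sum(i * i for i in chunk))  # placeholder compute-intensive task
    conn.close()

def handler(event, context):
    n_workers = os.cpu_count() or 1  # cores actually visible may differ from billed vCPUs
    chunks = [range(i, 2_000_000, n_workers) for i in range(n_workers)]
    parent_conns, procs = [], []
    for chunk in chunks:
        parent, child = Pipe()
        p = Process(target=busy_work, args=(chunk, child))
        p.start()
        parent_conns.append(parent)
        procs.append(p)
    results = [conn.recv() for conn in parent_conns]
    for p in procs:
        p.join()
    return {"total": sum(results), "workers": n_workers}
```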