{"title":"功能即服务的模拟和基准测试管道","authors":"Johannes Manner","doi":"10.20378/irb-52238","DOIUrl":null,"url":null,"abstract":"Cloud computing started with the promise of delivering computing resources elastically at scale, pay per use and on demand self-service to name a few capabilities. In early 2016, Amazon Web Services (AWS) launched a new product called AWS Lambda which started the so called serverless hype and established a new cloud delivery model, namely Function as a Service (FaaS). FaaS offerings keep the promise of delivering computing resources on demand. They dynamically scale up and down function instances and introduce the most fine-grained billing model across all as-a-service offerings by accounting on a milliseconds basis. Despite this flexibility and the possibility to concentrate on the business functionality, a FaaS user loses operational control. Only a few configuration options remain to tune the functions. The first pay-as-you-go billing model raises new questions for performance-cost trade-offs. In order to choose a suitable configuration dependent on the use case and get a solid understanding of performance impact of FaaS platforms, SeMoDe implements a benchmarking and simulation pipeline. It calibrates a physical developer machine, simulates the function in different settings which are comparable to those of cloud offerings and enables a decision guidance to choose an appropriate configuration when deploying it. 
Based on a Structured Literature Review (SLR) to show the benchmarking and simulation efforts, I suggest a checklist for conducting fair, repeatable and meaningful benchmarks with a focus on documenting the experiments.","PeriodicalId":146702,"journal":{"name":"Bamberger Beiträge zur Wirtschaftsinformatik und Angewandten Informatik","volume":"28 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"SeMoDe – Simulation and Benchmarking Pipeline for Function as a Service\",\"authors\":\"Johannes Manner\",\"doi\":\"10.20378/irb-52238\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Cloud computing started with the promise of delivering computing resources elastically at scale, pay per use and on demand self-service to name a few capabilities. In early 2016, Amazon Web Services (AWS) launched a new product called AWS Lambda which started the so called serverless hype and established a new cloud delivery model, namely Function as a Service (FaaS). FaaS offerings keep the promise of delivering computing resources on demand. They dynamically scale up and down function instances and introduce the most fine-grained billing model across all as-a-service offerings by accounting on a milliseconds basis. Despite this flexibility and the possibility to concentrate on the business functionality, a FaaS user loses operational control. Only a few configuration options remain to tune the functions. The first pay-as-you-go billing model raises new questions for performance-cost trade-offs. In order to choose a suitable configuration dependent on the use case and get a solid understanding of performance impact of FaaS platforms, SeMoDe implements a benchmarking and simulation pipeline. 
It calibrates a physical developer machine, simulates the function in different settings which are comparable to those of cloud offerings and enables a decision guidance to choose an appropriate configuration when deploying it. Based on a Structured Literature Review (SLR) to show the benchmarking and simulation efforts, I suggest a checklist for conducting fair, repeatable and meaningful benchmarks with a focus on documenting the experiments.\",\"PeriodicalId\":146702,\"journal\":{\"name\":\"Bamberger Beiträge zur Wirtschaftsinformatik und Angewandten Informatik\",\"volume\":\"28 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Bamberger Beiträge zur Wirtschaftsinformatik und Angewandten Informatik\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.20378/irb-52238\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Bamberger Beiträge zur Wirtschaftsinformatik und Angewandten Informatik","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.20378/irb-52238","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
SeMoDe – Simulation and Benchmarking Pipeline for Function as a Service
Cloud computing started with the promise of delivering computing resources elastically at scale, with pay-per-use billing and on-demand self-service, to name a few capabilities. In 2014, Amazon Web Services (AWS) launched a new product called AWS Lambda, which started the so-called serverless hype and established a new cloud delivery model, namely Function as a Service (FaaS). FaaS offerings keep the promise of delivering computing resources on demand: they dynamically scale function instances up and down and introduce the most fine-grained billing model across all as-a-service offerings by accounting on a millisecond basis. Despite this flexibility and the possibility to concentrate on business functionality, a FaaS user loses operational control; only a few configuration options remain to tune a function. This pay-as-you-go billing model raises new questions about performance-cost trade-offs. In order to choose a suitable configuration for a given use case and to gain a solid understanding of the performance impact of FaaS platforms, SeMoDe implements a benchmarking and simulation pipeline. It calibrates a physical developer machine, simulates the function in settings comparable to those of cloud offerings, and provides decision guidance for choosing an appropriate configuration at deployment time. Based on a Structured Literature Review (SLR) of existing benchmarking and simulation efforts, I suggest a checklist for conducting fair, repeatable, and meaningful benchmarks, with a focus on documenting the experiments.
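The performance-cost trade-off mentioned above can be made concrete with a small, hypothetical calculation. This sketch is not part of SeMoDe; it only illustrates AWS-Lambda-style billing, where cost is charged per GB-second of allocated memory and metered duration. The price constant and the per-configuration durations are assumptions for illustration: on platforms where CPU share grows with the memory setting, a larger (more expensive per second) configuration can finish so much faster that it costs about the same, or less, per invocation.

```python
# Illustrative sketch (not SeMoDe code): compare FaaS memory configurations by
# per-invocation cost under a Lambda-style billing model.
# Assumption: cost = allocated memory (GB) * billed duration (s) * price per GB-second.

PRICE_PER_GB_SECOND = 0.0000166667  # assumed AWS Lambda x86 list price, USD

def invocation_cost(memory_mb: int, duration_ms: float) -> float:
    """Cost of a single invocation in USD for a given memory setting and runtime."""
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000)
    return gb_seconds * PRICE_PER_GB_SECOND

# Hypothetical measurements for one CPU-bound function: duration shrinks as the
# memory setting (and with it the CPU share) grows.
measured_durations_ms = {128: 2400.0, 256: 1200.0, 512: 610.0, 1024: 330.0}

for mem, dur in sorted(measured_durations_ms.items()):
    print(f"{mem:>5} MB  {dur:>7.1f} ms  ${invocation_cost(mem, dur):.10f}")
```

With these made-up numbers, the 1024 MB setting runs roughly seven times faster than the 128 MB setting at only a moderately higher cost per invocation, which is exactly the kind of trade-off a benchmarking and simulation pipeline helps to quantify before deployment.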