{"title":"Assessing QoS consistency in cloud-based software-as-a-service deployments","authors":"Robert O'Dywer, S. Neville","doi":"10.1109/PACRIM.2017.8121889","DOIUrl":null,"url":null,"abstract":"Cloud-deployed Software-as-a-Service (SaaS) solutions have become a common global software deployment regime. For SaaS providers success is increasingly tied to social media feedback, customer reviews, and referrals. As such, ensuring sufficiently few users experience low (or poor) quality of service (QoS) levels has become an important concern. Cloud-based SaaS QoS is primarily driven by: i) the incoming workload's pace and complexity, ii) the SaaS system's design and implementation, and iii) the cloud platform's own induced QoS variabilities. Of these, SaaS software engineers generally have the least control over (iii), making it important to properly understand and quantify. This work empirically assesses (iii) by applying statistically rigorous QoS testing to an industry-held cloud-deployed SaaS system. Identical SaaS system instances are instantiated into the same commercial cloud platform and exercised via identical synthetic in-coming workloads. The resulting run-time QoS statistical distributions of each SaaS instance are then pairwise compared via distribution-free goodness-of-fit tests. A high degree of (iii) induced statistical dissimilarity is observed, suggesting significant care is required when seeking to make QoS envelope predictions from per-instance observed SaaS QoS results. This also suggests deeper more formal efforts may be required to better understand and characterize the cloud-induced SaaS QoS consistency issues that arise within modern SaaS deployments.","PeriodicalId":308087,"journal":{"name":"2017 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM)","volume":"97 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PACRIM.2017.8121889","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Cloud-deployed Software-as-a-Service (SaaS) solutions have become a common global software deployment regime. For SaaS providers, success is increasingly tied to social media feedback, customer reviews, and referrals. As such, ensuring that sufficiently few users experience low (or poor) quality of service (QoS) has become an important concern. Cloud-based SaaS QoS is primarily driven by: i) the incoming workload's pace and complexity, ii) the SaaS system's design and implementation, and iii) the QoS variabilities induced by the cloud platform itself. Of these three drivers, SaaS software engineers generally have the least control over (iii), making it important to understand and quantify properly. This work empirically assesses (iii) by applying statistically rigorous QoS testing to an industry-held, cloud-deployed SaaS system. Identical SaaS system instances are instantiated on the same commercial cloud platform and exercised via identical synthetic incoming workloads. The resulting run-time QoS statistical distributions of the SaaS instances are then pairwise compared via distribution-free goodness-of-fit tests. A high degree of (iii)-induced statistical dissimilarity is observed, indicating that significant care is required when seeking to make QoS envelope predictions from per-instance observed SaaS QoS results. This also suggests that deeper, more formal efforts may be required to better understand and characterize the cloud-induced SaaS QoS consistency issues that arise within modern SaaS deployments.
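The abstract does not name the specific distribution-free goodness-of-fit test used. As a minimal sketch of the pairwise-comparison methodology it describes, assuming a two-sample Kolmogorov-Smirnov test (chosen here because it makes no assumption about the form of the underlying QoS distribution) and hypothetical per-instance response-time samples:

```python
import itertools
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical per-instance response-time samples (ms), standing in for
# measurements collected while identical SaaS instances process identical
# synthetic incoming workloads.
rng = np.random.default_rng(0)
instances = {
    "instance_a": rng.lognormal(mean=3.0, sigma=0.4, size=5000),
    "instance_b": rng.lognormal(mean=3.0, sigma=0.4, size=5000),
    "instance_c": rng.lognormal(mean=3.1, sigma=0.5, size=5000),
}

# Pairwise distribution-free goodness-of-fit comparison: the two-sample
# Kolmogorov-Smirnov test asks whether two samples plausibly come from
# the same (unspecified) distribution.
alpha = 0.01
for (name_x, x), (name_y, y) in itertools.combinations(instances.items(), 2):
    stat, p = ks_2samp(x, y)
    verdict = "dissimilar" if p < alpha else "indistinguishable"
    print(f"{name_x} vs {name_y}: KS={stat:.4f}, p={p:.3g} -> {verdict}")
```

Under the paper's finding, nominally identical instances would frequently land in the "dissimilar" bucket, which is what cautions against extrapolating a QoS envelope from any single instance's observations.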