{"title":"云数据中心虚拟机实时分配建模与仿真","authors":"S. Jason","doi":"10.35940/ijeat.e4182.0612523","DOIUrl":null,"url":null,"abstract":"For dynamic resource scheduling in cloud data centers, a novel lightweight simulation system is proposed; two existing simulation systems at the application level for cloud computing are reviewed; and results gained using the suggested simulation system are examined and discussed. The usage of resources and energy efficiency in cloud data centers can be improved by load balancing and the consolidation of virtual machines. An aspect of dynamic virtual machine consolidation that directly affects resource usage and the quality of service the system is delivering is the timing of when it is ideal to reallocate Virtual Machines from an overloaded host [1]. Because server overloads result in a lack of resources and a decline in application performance, they have an impact on quality of service. In order to determine the best answer, existing approaches to the problem of host overload detection typically rely on statistical analysis inspired by nature. These strategies' drawbacks include the fact that they provide less-than-ideal outcomes and prevent the explicit articulation of a Quality-of-Service target. By optimizing the mean inter-migration time under the defined Quality of Service target ideally, we present a novel method for detecting host overload for any stationary workload that is known and a particular state configuration [2]. We demonstrate that our technique exceeds the best benchmark algorithm and offers over 88%of the performance of the ideal offline algorithm through simulations with real-world workload traces from more than a thousand Virtual Machines.","PeriodicalId":13981,"journal":{"name":"International Journal of Engineering and Advanced Technology","volume":"10 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Modeling and Simulation of Real-Time Virtual Machine Allocation in a Cloud Data Center\",\"authors\":\"S. Jason\",\"doi\":\"10.35940/ijeat.e4182.0612523\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"For dynamic resource scheduling in cloud data centers, a novel lightweight simulation system is proposed; two existing simulation systems at the application level for cloud computing are reviewed; and results gained using the suggested simulation system are examined and discussed. The usage of resources and energy efficiency in cloud data centers can be improved by load balancing and the consolidation of virtual machines. An aspect of dynamic virtual machine consolidation that directly affects resource usage and the quality of service the system is delivering is the timing of when it is ideal to reallocate Virtual Machines from an overloaded host [1]. Because server overloads result in a lack of resources and a decline in application performance, they have an impact on quality of service. In order to determine the best answer, existing approaches to the problem of host overload detection typically rely on statistical analysis inspired by nature. These strategies' drawbacks include the fact that they provide less-than-ideal outcomes and prevent the explicit articulation of a Quality-of-Service target. 
By optimizing the mean inter-migration time under the defined Quality of Service target ideally, we present a novel method for detecting host overload for any stationary workload that is known and a particular state configuration [2]. We demonstrate that our technique exceeds the best benchmark algorithm and offers over 88%of the performance of the ideal offline algorithm through simulations with real-world workload traces from more than a thousand Virtual Machines.\",\"PeriodicalId\":13981,\"journal\":{\"name\":\"International Journal of Engineering and Advanced Technology\",\"volume\":\"10 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-06-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Engineering and Advanced Technology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.35940/ijeat.e4182.0612523\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Engineering and Advanced Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.35940/ijeat.e4182.0612523","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Modeling and Simulation of Real-Time Virtual Machine Allocation in a Cloud Data Center
For dynamic resource scheduling in cloud data centers, a novel lightweight simulation system is proposed; two existing application-level simulation systems for cloud computing are reviewed; and results obtained with the proposed simulation system are examined and discussed. Resource usage and energy efficiency in cloud data centers can be improved through load balancing and the consolidation of virtual machines. One aspect of dynamic virtual machine consolidation that directly affects resource usage and the quality of service the system delivers is deciding when it is best to reallocate virtual machines away from an overloaded host [1]. Server overloads cause resource shortages and degrade application performance, and thus reduce quality of service. Existing approaches to the host overload detection problem typically rely on statistical analysis inspired by nature to find a good solution; their drawbacks are that they yield sub-optimal results and do not allow a Quality-of-Service goal to be specified explicitly. We present a novel method that, for any known stationary workload and a given state configuration, detects host overload by optimally maximizing the mean inter-migration time under the specified Quality-of-Service goal [2]. Through simulations with real-world workload traces from more than a thousand virtual machines, we show that our technique outperforms the best benchmark algorithm and delivers over 88% of the performance of the ideal offline algorithm.
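To make the mechanics concrete, the sketch below simulates the kind of experiment the abstract refers to: hosts running virtual machines under a fluctuating workload, a host overload detector, reallocation of virtual machines away from overloaded hosts, and the mean inter-migration time metric. This is a minimal, hypothetical illustration, not the proposed simulation system or the optimal policy of [2]; the class names, the static 0.8 utilization threshold, the largest-VM-first migration rule, and the synthetic workload model are all assumptions introduced here.

```python
"""Illustrative sketch only: a static-threshold overload detector and a
largest-VM-first migration rule stand in for the paper's actual, optimal
QoS-aware policy. All names and parameters here are assumptions."""

import random
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Vm:
    vm_id: int
    cpu_demand: float  # fraction of one host's CPU capacity


@dataclass
class Host:
    host_id: int
    capacity: float = 1.0
    vms: List[Vm] = field(default_factory=list)

    def utilization(self) -> float:
        return sum(vm.cpu_demand for vm in self.vms) / self.capacity


def is_overloaded(host: Host, threshold: float = 0.8) -> bool:
    # Simple static-threshold detector (assumption); the paper's method
    # instead derives a policy that maximizes mean inter-migration time
    # under an explicit Quality-of-Service goal.
    return host.utilization() > threshold


def migrate_largest_vm(source: Host, hosts: List[Host]) -> Optional[Host]:
    # Move the largest VM from an overloaded host to the least-loaded host.
    if not source.vms or len(hosts) < 2:
        return None
    vm = max(source.vms, key=lambda v: v.cpu_demand)
    target = min((h for h in hosts if h is not source),
                 key=lambda h: h.utilization())
    source.vms.remove(vm)
    target.vms.append(vm)
    return target


def simulate(num_hosts: int = 4, vms_per_host: int = 5,
             steps: int = 1000, seed: int = 1) -> None:
    random.seed(seed)
    hosts = [Host(h) for h in range(num_hosts)]
    vm_id = 0
    for host in hosts:
        for _ in range(vms_per_host):
            host.vms.append(Vm(vm_id, random.uniform(0.05, 0.15)))
            vm_id += 1

    migration_steps: List[int] = []
    overload_observations = 0
    for t in range(steps):
        # Synthetic workload: each VM's demand drifts around a stationary
        # mean (an assumption, not a real workload trace).
        for host in hosts:
            for vm in host.vms:
                vm.cpu_demand = min(0.3, max(0.01, vm.cpu_demand + random.gauss(0.0, 0.01)))
        for host in hosts:
            if is_overloaded(host):
                overload_observations += 1
                migrate_largest_vm(host, hosts)
                migration_steps.append(t)

    gaps = [b - a for a, b in zip(migration_steps, migration_steps[1:])]
    mean_inter_migration = sum(gaps) / len(gaps) if gaps else float("inf")
    print(f"migrations: {len(migration_steps)}")
    print(f"mean inter-migration time (steps): {mean_inter_migration:.2f}")
    print(f"host-overload observations: {overload_observations}")


if __name__ == "__main__":
    simulate()
```

The fixed threshold above is only a stand-in that keeps the simulation loop and the metric easy to follow; in the setting described in the abstract, the detection policy itself is chosen to maximize the mean inter-migration time under the stated Quality-of-Service goal.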