Hanzhe Li, Sherry X Wang, Fu Shang, Kaiyi Niu, Runze Song
{"title":"大型语言模型在云计算中的应用:使用真实世界数据的实证研究","authors":"Hanzhe Li, Sherry X Wang, Fu Shang, Kaiyi Niu, Runze Song","doi":"10.55524/ijircst.2024.12.4.10","DOIUrl":null,"url":null,"abstract":"This study investigates the integration of Large Language Models (LLMs) in cloud computing, focusing on their impact on resource allocation and management. The research employs Bayesian inference and Markov Decision Processes (MDPs) to enhance predictive accuracy and decision-making efficiency. Over a month, data collected from AWS, GCP, Azure, IBM, and Oracle reveals significant improvements in CPU utilization, memory usage, network latency, and storage performance. LLMs demonstrated superior performance compared to traditional models, optimizing task scheduling and reducing idle times. Bayesian inference refined resource predictions, while MDPs provided a structured approach to dynamic optimization, resulting in lower latency and better system efficiency. The findings suggest that integrating LLMs can transform cloud service management, offering enhanced performance, reliability, and cost savings. Future research should explore long-term trends, security implications, and the ethical aspects of AI deployment in cloud environments.","PeriodicalId":218345,"journal":{"name":"International Journal of Innovative Research in Computer Science and Technology","volume":"25 60","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Applications of Large Language Models in Cloud Computing: An Empirical Study Using Real-world Data\",\"authors\":\"Hanzhe Li, Sherry X Wang, Fu Shang, Kaiyi Niu, Runze Song\",\"doi\":\"10.55524/ijircst.2024.12.4.10\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This study investigates the integration of Large Language Models (LLMs) in cloud computing, focusing on their impact on resource allocation and management. The research employs Bayesian inference and Markov Decision Processes (MDPs) to enhance predictive accuracy and decision-making efficiency. Over a month, data collected from AWS, GCP, Azure, IBM, and Oracle reveals significant improvements in CPU utilization, memory usage, network latency, and storage performance. LLMs demonstrated superior performance compared to traditional models, optimizing task scheduling and reducing idle times. Bayesian inference refined resource predictions, while MDPs provided a structured approach to dynamic optimization, resulting in lower latency and better system efficiency. The findings suggest that integrating LLMs can transform cloud service management, offering enhanced performance, reliability, and cost savings. 
Future research should explore long-term trends, security implications, and the ethical aspects of AI deployment in cloud environments.\",\"PeriodicalId\":218345,\"journal\":{\"name\":\"International Journal of Innovative Research in Computer Science and Technology\",\"volume\":\"25 60\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Innovative Research in Computer Science and Technology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.55524/ijircst.2024.12.4.10\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Innovative Research in Computer Science and Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.55524/ijircst.2024.12.4.10","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Applications of Large Language Models in Cloud Computing: An Empirical Study Using Real-world Data
This study investigates the integration of Large Language Models (LLMs) into cloud computing, focusing on their impact on resource allocation and management. The research employs Bayesian inference and Markov Decision Processes (MDPs) to enhance predictive accuracy and decision-making efficiency. Data collected over a one-month period from AWS, GCP, Azure, IBM, and Oracle reveal significant improvements in CPU utilization, memory usage, network latency, and storage performance. LLMs demonstrated superior performance compared to traditional models, optimizing task scheduling and reducing idle time. Bayesian inference refined resource predictions, while MDPs provided a structured approach to dynamic optimization, resulting in lower latency and better system efficiency. The findings suggest that integrating LLMs can transform cloud service management, offering enhanced performance, reliability, and cost savings. Future research should explore long-term trends, security implications, and the ethical aspects of AI deployment in cloud environments.
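To make the two techniques named in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of how they could fit together: a conjugate Normal-Normal Bayesian update refines a CPU-demand estimate from noisy telemetry, and a small MDP over discretized load levels is solved by value iteration to choose a scaling action. All state and action names, transition probabilities, rewards, and telemetry values are illustrative assumptions, not figures from the paper.

```python
import numpy as np

# --- Bayesian inference: Normal-Normal conjugate update of mean CPU demand ---
def bayes_update(prior_mean, prior_var, obs, obs_var):
    """Posterior over mean CPU utilization after observing new samples
    (known observation variance; standard conjugate update)."""
    obs = np.asarray(obs, dtype=float)
    n = obs.size
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs.sum() / obs_var)
    return post_mean, post_var

# Hypothetical telemetry: prior belief of 60% mean CPU, noisy hourly samples.
post_mean, post_var = bayes_update(prior_mean=60.0, prior_var=25.0,
                                   obs=[72.0, 68.0, 75.0], obs_var=16.0)

# --- MDP: states = load levels, actions = scaling decisions ---
states = ["low", "medium", "high"]           # discretized load levels (assumed)
actions = ["scale_down", "hold", "scale_up"]

# P[a][s, s']: assumed transition probabilities between load levels.
P = {
    "scale_down": np.array([[0.9, 0.1, 0.0],
                            [0.5, 0.4, 0.1],
                            [0.2, 0.5, 0.3]]),
    "hold":       np.array([[0.7, 0.3, 0.0],
                            [0.2, 0.6, 0.2],
                            [0.0, 0.3, 0.7]]),
    "scale_up":   np.array([[0.6, 0.4, 0.0],
                            [0.1, 0.6, 0.3],
                            [0.0, 0.2, 0.8]]),
}

# R[a][s]: assumed reward trading off provisioning cost vs. latency penalty.
R = {
    "scale_down": np.array([ 1.0, -1.0, -5.0]),
    "hold":       np.array([ 0.5,  1.0, -2.0]),
    "scale_up":   np.array([-1.0,  0.5,  2.0]),
}

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """Standard value iteration; returns state values and a greedy policy."""
    V = np.zeros(len(states))
    while True:
        # Q has shape |A| x |S|: expected return of each action in each state.
        Q = np.array([R[a] + gamma * P[a] @ V for a in actions])
        V_new = Q.max(axis=0)
        if np.abs(V_new - V).max() < tol:
            break
        V = V_new
    policy = {s: actions[i] for s, i in zip(states, Q.argmax(axis=0))}
    return V, policy

V, policy = value_iteration(P, R)
print(f"posterior mean CPU: {post_mean:.1f}%")
print("greedy scaling policy:", policy)
```

In a full system along the lines the abstract describes, the Bayesian posterior would feed the MDP (for example, by mapping the posterior demand estimate to a load-level state or by re-estimating the transition matrices online), closing the loop between prediction and scaling decisions.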