{"title":"Machine Learning, Deep Learning-Based Optimization in Multilayered Cloud","authors":"Punit Gupta, Mayank Kumar Goyal","doi":"10.1201/9781003185376-2","DOIUrl":null,"url":null,"abstract":"The ongoing COVID-19 pandemic has resulted in the loss of lives and economic losses. In this scenario, social distancing is the only way to protect ourselves. In such a scenario, to boost the economy, a large number of industries and businesses have shifted their system to cloud, for example education, shipping, training and many more globally. To support this transition cloud services are the only solution to provide reliable and secure services to the user to sustain their business. Due to this, the load on the existing cloud infrastructure has drastically increased. So it is the responsibility of the cloud to manage the load on the existing infrastructure to maintain reliability and provide high-quality services to the user. Task allocation in the cloud is one of the key features to optimize the performance of cloud infrastructure. In this work, we have proposed a prediction-based technique using a pre-trained neural network to find a reliable resource for a task based on previous training and the history of cloud and its performance to optimize the performance in overloaded and underloaded situations. The main aim of this work is to reduce faults and provide high performance by reducing scheduling time, execution time, average start time, average finish time and network load. The proposed model uses the Big Bang-Big Crunch algorithm to generate huge datasets for training our neural model. The accuracy of the BB-BC ANN model is improved with 98% accuracy. © 2022 selection and editorial matter, Punit Gupta, Mayank Kumar Goyal, Sudeshna Chakraborty, Ahmed A Elngar;individual chapters, the contributors.","PeriodicalId":331208,"journal":{"name":"Machine Learning and Optimization Models for Optimization in Cloud","volume":"86 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Machine Learning and Optimization Models for Optimization in Cloud","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1201/9781003185376-2","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}