{"title":"Q-learning with function Approximator for clustering based Optimal resource Allocation in fog environment","authors":"Chanchal Ahlawat, R. Krishnamurthi","doi":"10.1145/3549206.3549230","DOIUrl":null,"url":null,"abstract":"Fog computing is a new paradigm for delivering services close to the user. The exponential growth of IoT devices and big data complicates fog resource distribution. Inefficient resource allocation can result in resource scarcity and the inability to finish a task assignment on time. As a result, correct allocation is required to improve the efficiency of fog resources. Resource allocation is a difficult task with heterogeneous constraint resources. As fog computing deals with real-time data, therefore, needs resource allocation in real-time that increases the necessity of having appropriate and optimal resource allocation in real-time. Therefore, this paper targets optimal resource allocation. To address the resource allocation problem, Q-learning with function Approximator for clustering based Optimal resource Allocation (QL(FA)-CORA) model is designed, considering the problem as a decision making problem, reinforcement learning method is used to solve it. Problem formulation is done using the Markov decision process. Clustering is done to reduce the service time. Proposed an optimal resource allocation using the QL function approximator (ORA- QLFA) algorithm. to enhance the efficiency and performance of the proposed fog environment. Simulations are done to evaluate the validation of the proposed algorithm. Also, comparisons are made with linear Q networks using different parameters such as expected discounted return, maximum steps taken by the fog resource controller, etc. 
Simulation results show the proposed algorithm performs better in all the cases and converged to optimal results after a few iterations rather than a linear Q network.","PeriodicalId":199675,"journal":{"name":"Proceedings of the 2022 Fourteenth International Conference on Contemporary Computing","volume":"3 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2022 Fourteenth International Conference on Contemporary Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3549206.3549230","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Fog computing is a new paradigm for delivering services close to the user. The exponential growth of IoT devices and big data complicates the distribution of fog resources. Inefficient resource allocation can lead to resource scarcity and the inability to finish assigned tasks on time, so correct allocation is required to improve the efficiency of fog resources. Resource allocation is a difficult task when resources are heterogeneous and constrained. Moreover, because fog computing deals with real-time data, resources must also be allocated in real time, which increases the need for appropriate and optimal real-time allocation. This paper therefore targets optimal resource allocation. To address the problem, a Q-learning with Function Approximator for Clustering-based Optimal Resource Allocation (QL(FA)-CORA) model is designed: the allocation task is treated as a decision-making problem and solved with reinforcement learning. The problem is formulated as a Markov decision process, and clustering is applied to reduce service time. An Optimal Resource Allocation using the Q-Learning Function Approximator (ORA-QLFA) algorithm is proposed to enhance the efficiency and performance of the proposed fog environment. Simulations are conducted to validate the proposed algorithm, and comparisons are made with a linear Q-network on parameters such as expected discounted return and the maximum number of steps taken by the fog resource controller. Simulation results show that the proposed algorithm performs better in all cases and converges to optimal results after fewer iterations than the linear Q-network.