Title: Optimizing Node-Level Data Access Time Using Cluster-Based Deep Reinforcement Learning Models
Authors: Peerzada Hamid Ahmad, Munishwar Rai
Journal: Concurrency and Computation: Practice and Experience, vol. 37, no. 21-22
DOI: 10.1002/cpe.70232 (https://onlinelibrary.wiley.com/doi/10.1002/cpe.70232)
Published: 2025-08-04
Citations: 0
Abstract
Distributed systems require efficient node-level data access to function optimally. Current techniques often suffer from delays and inaccurate information retrieval, necessitating novel strategies to minimize access time. Although clustering extends the lifetime of Wireless Sensor Networks (WSNs) and saves energy, energy-efficient communication has not been thoroughly investigated in existing WSNs. This study proposes a Cluster-based Deep Reinforcement Learning (CDRL) approach to improve data access speed at the node level. By grouping nodes according to connection structure and data-access patterns, the proposed CDRL model makes data organization and retrieval more efficient. In the CDRL approach, neighboring nodes within a cluster select a suitable Cluster Head (CH) by monitoring environmental factors such as power consumption and proximity to the Base Station (BS). Each neighboring node joins the cluster that minimizes its energy usage and maximizes network lifetime. The CDRL method computes node weights from mobility and residual battery power, and the node with the highest weight becomes the primary CH. When the primary CH's battery depletes below a threshold, a secondary cluster head takes over. This scheme reduces cluster-management overhead and distributes battery drain across nodes, extending network lifetime. The CH with the highest reward is selected to transmit data. The results indicate that combining reinforcement learning with cluster-based strategies significantly improves the responsiveness and efficiency of data handling in decentralized networks. Energy savings of 7.41%, 2.79%, 3.27%, and 4.03% are attained for deployments of 100, 200, 300, and 400 nodes, respectively. The study shows that the CDRL method significantly reduces data access times and routes packets faster than competing methods.
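The weight-based cluster-head election described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the weight formula (a convex combination of residual battery and low mobility), the 0.7/0.3 coefficients, the 20% battery floor, and all names are assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    battery: float   # residual energy, normalized to 0.0-1.0
    mobility: float  # movement level, 0.0 (static) to 1.0 (highly mobile)

    def weight(self) -> float:
        # Favor nodes with more remaining battery and less movement,
        # mirroring the abstract's weight criteria (illustrative coefficients).
        return 0.7 * self.battery + 0.3 * (1.0 - self.mobility)

def elect_cluster_heads(cluster: list) -> tuple:
    """Return (primary CH, secondary CH) ranked by descending weight."""
    ranked = sorted(cluster, key=lambda n: n.weight(), reverse=True)
    return ranked[0], ranked[1]

def active_cluster_head(primary: Node, secondary: Node,
                        battery_floor: float = 0.2) -> Node:
    # Hand over to the secondary CH once the primary's battery depletes
    # past the floor, spreading energy use across the cluster.
    return primary if primary.battery > battery_floor else secondary

cluster = [Node(1, 0.9, 0.1), Node(2, 0.5, 0.4), Node(3, 0.15, 0.05)]
primary, secondary = elect_cluster_heads(cluster)
print(active_cluster_head(primary, secondary).node_id)
```

Rotating the CH role this way is what distributes battery drain: no single node pays the relay cost until it dies, which is the mechanism behind the network-lifetime gains the abstract reports.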
About the journal:
Concurrency and Computation: Practice and Experience (CCPE) publishes high-quality, original research papers, and authoritative research review papers, in the overlapping fields of:
Parallel and distributed computing;
High-performance computing;
Computational and data science;
Artificial intelligence and machine learning;
Big data applications, algorithms, and systems;
Network science;
Ontologies and semantics;
Security and privacy;
Cloud/edge/fog computing;
Green computing; and
Quantum computing.