{"title":"Enhancing Edge Environment Scalability: Leveraging Kubernetes for Container Orchestration and Optimization","authors":"K. Aruna, Pradeep Gurunathan","doi":"10.1002/cpe.8303","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>Kubernetes, an open-source container orchestration platform, offers a comprehensive suite of features for managing containerized applications effectively. These features encompass horizontal scaling, per-node-pool cluster scaling, and automated resource request adjustments. This research harnesses these capabilities to address the limitations experienced by fog servers in edge environments, particularly those arising from restricted network connectivity and scalability challenges. The primary focus of this paper is Kubernetes' role in enhancing scalability by providing a robust framework for managing containerized applications. The proposed approach creates a predefined number of pods and containers within a Kubernetes cluster, specifically designed to handle incoming requests efficiently while optimizing CPU and memory usage. The method implements a microservice architecture for the web tier, with separate pods for the front end, back end, and database, ensuring a modular and scalable design. All pods communicate and integrate through REST APIs, facilitating seamless interaction and data exchange between the services. When handling web requests, the approach enables and controls both internal and external networks, ensuring secure and efficient communication. The analysis then examines the CPU and memory utilization of the pods, as well as node bandwidth, to provide a comprehensive evaluation of container scalability and performance within the Kubernetes cluster.
These findings effectively demonstrate Kubernetes' capability in managing container scalability and optimizing resource utilization, highlighting its efficiency and robustness in a microservice environment.</p>\n </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"36 28","pages":""},"PeriodicalIF":1.5000,"publicationDate":"2024-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Concurrency and Computation-Practice & Experience","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/cpe.8303","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0
Abstract
Kubernetes, an open-source container orchestration platform, offers a comprehensive suite of features for managing containerized applications effectively. These features encompass horizontal scaling, per-node-pool cluster scaling, and automated resource request adjustments. This research harnesses these capabilities to address the limitations experienced by fog servers in edge environments, particularly those arising from restricted network connectivity and scalability challenges. The primary focus of this paper is Kubernetes' role in enhancing scalability by providing a robust framework for managing containerized applications. The proposed approach creates a predefined number of pods and containers within a Kubernetes cluster, specifically designed to handle incoming requests efficiently while optimizing CPU and memory usage. The method implements a microservice architecture for the web tier, with separate pods for the front end, back end, and database, ensuring a modular and scalable design. All pods communicate and integrate through REST APIs, facilitating seamless interaction and data exchange between the services. When handling web requests, the approach enables and controls both internal and external networks, ensuring secure and efficient communication. The analysis then examines the CPU and memory utilization of the pods, as well as node bandwidth, to provide a comprehensive evaluation of container scalability and performance within the Kubernetes cluster. These findings demonstrate Kubernetes' capability to manage container scalability and optimize resource utilization, highlighting its efficiency and robustness in a microservice environment.
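As an illustration of the kind of configuration the abstract describes, a front-end Deployment with explicit CPU and memory requests, paired with a HorizontalPodAutoscaler for horizontal scaling, might look like the sketch below. This is not taken from the paper: all names, images, replica counts, and resource figures are illustrative assumptions.

```yaml
# Hypothetical front-end tier of the web-tier microservice architecture.
# Names, image, and resource figures are assumptions, not values from the paper.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2                          # predefined initial pod count
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: example/frontend:1.0  # placeholder image
          ports:
            - containerPort: 80
          resources:
            requests:                  # used by the scheduler for placement
              cpu: "250m"
              memory: "128Mi"
            limits:                    # hard per-container caps
              cpu: "500m"
              memory: "256Mi"
---
# Horizontal scaling driven by observed CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Pod- and node-level CPU and memory utilization of the kind the analysis examines can be observed with `kubectl top pods` and `kubectl top nodes` (both require the metrics-server add-on to be installed in the cluster).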
Journal overview:
Concurrency and Computation: Practice and Experience (CCPE) publishes high-quality, original research papers, and authoritative research review papers, in the overlapping fields of:
Parallel and distributed computing;
High-performance computing;
Computational and data science;
Artificial intelligence and machine learning;
Big data applications, algorithms, and systems;
Network science;
Ontologies and semantics;
Security and privacy;
Cloud/edge/fog computing;
Green computing; and
Quantum computing.