Title: Online QoS/QoE-Driven SFC Orchestration Leveraging a DRL Approach in SDN/NFV Enabled Networks
Authors: Mohamed Escheikh, Wiem Taktak
Journal: Wireless Personal Communications, vol. 198, no. 1 (JCR Q3, Telecommunications; Impact Factor 1.9; CAS Region 4, Computer Science)
DOI: 10.1007/s11277-024-11389-5 (https://doi.org/10.1007/s11277-024-11389-5)
Published: 2024-07-30 (Journal Article)
Citations: 0
Abstract
The proliferation of highly heterogeneous smart devices and the emergence of a wide range of diverse applications in 5G mobile network ecosystems raise a new set of challenges related to agile and automated service orchestration and management. Fully leveraging key enabling technologies such as software-defined networking, network function virtualization, and machine learning in such environments is of paramount importance for addressing service function chaining (SFC) orchestration according to user requirements and network constraints. To meet these challenges, we propose in this paper a deep reinforcement learning (DRL) approach to the online quality of experience (QoE)/quality of service (QoS) aware SFC orchestration problem. The objective is to achieve intelligent, elastic, and automated deployment of virtual network functions, optimizing QoE while respecting QoS constraints. We implement the DRL approach through the Double Deep Q-Network (DDQN) algorithm. We conduct experimental simulations to study agent behavior over a learning phase followed by a testing and evaluation phase for two physical substrate network scales. The testing phase is defined as the last 100 runs of the learning phase, during which the agent reaches, on average, the QoE threshold score (\(QoE_{Th-Sc}\)). In a first set of experiments, we highlight the impact of tuning the hyper-parameters (learning rate (LR) and batch size (BS)) on solving the sequential decision problem related to SFC orchestration for a given \(QoE_{Th-Sc}\). This investigation leads us to choose the most suitable pair (LR, BS) yielding acceptable learning quality. In a second set of experiments, we examine the DRL agent's capacity to enhance learning quality while meeting a performance-convergence compromise; this is achieved by progressively increasing \(QoE_{Th-Sc}\).
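The abstract names Double Deep Q-Network as the learning algorithm. The following is a minimal sketch of the DDQN target computation only; the state/action spaces, reward, network architecture, and discount factor below are illustrative assumptions (a linear stand-in network, not the paper's model), shown solely to clarify how DDQN decouples action selection (online network) from action evaluation (target network).

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 4, 3  # assumed toy dimensions, not from the paper
gamma = 0.95                # assumed discount factor

# Two parameter sets: the online network selects the greedy action,
# the target network evaluates it -- the decoupling that distinguishes
# Double DQN from vanilla DQN and reduces overestimation bias.
W_online = rng.normal(size=(n_actions, n_states))
W_target = rng.normal(size=(n_actions, n_states))

def q_values(W, s):
    """Q(s, .) for a linear stand-in network: one value per action."""
    return W @ s

def double_dqn_target(reward, next_state, done):
    """DDQN target: r + gamma * Q_target(s', argmax_a Q_online(s', a))."""
    if done:
        return reward
    a_star = int(np.argmax(q_values(W_online, next_state)))   # online selects
    return reward + gamma * q_values(W_target, next_state)[a_star]  # target evaluates

s_next = rng.normal(size=n_states)
y = double_dqn_target(reward=1.0, next_state=s_next, done=False)
```

In the paper's setting, `y` would serve as the regression target for the online network on sampled transitions, with the batch size (BS) governing how many such targets are averaged per gradient step and the learning rate (LR) governing the step size, which is why the (LR, BS) pair dominates learning quality in the first set of experiments.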
Journal description:
The Journal on Mobile Communication and Computing ...
Publishes tutorial, survey, and original research papers addressing mobile communications and computing;
Investigates theoretical, engineering, and experimental aspects of radio communications, voice, data, images, and multimedia;
Explores propagation, system models, speech and image coding, multiple access techniques, protocols, performance evaluation, radio local area networks, and networking and architectures, etc.;
98% of authors who answered a survey reported that they would definitely publish or probably publish in the journal again.
Wireless Personal Communications is an archival, peer-reviewed, scientific and technical journal addressing mobile communications and computing. It investigates theoretical, engineering, and experimental aspects of radio communications, voice, data, images, and multimedia. A partial list of topics included in the journal is: propagation, system models, speech and image coding, multiple access techniques, protocols, performance evaluation, radio local area networks, and networking and architectures.
In addition to the above-mentioned areas, the journal also accepts papers that deal with interdisciplinary aspects of wireless communications along with: big data and analytics, business and economy, society, and the environment.
The journal features five principal types of papers: full technical papers, short papers, technical aspects of policy and standardization, letters offering new research thoughts and experimental ideas, and invited papers on important and emerging topics authored by renowned experts.