Optimizing Immersive Services With Parallel In-Network Rendering and Deep RL

Manel Gherari;Adyson Maia;Mouhamad Dieye;Halima Elbiaze;Yacine Ghamri-Doudane;Roch H. Glitho
DOI: 10.1109/TMLCN.2026.3666742
Journal: IEEE Transactions on Machine Learning in Communications and Networking, vol. 4, pp. 491-513
Publication date: 2026-02-20
URL: https://ieeexplore.ieee.org/document/11402906/
Citations: 0

Abstract

This paper addresses the challenge of delivering low-latency, scalable immersive experiences by exploiting a hybrid continuum of cloud, edge, and In-Network Computing (INC) resources. Delivering such experiences requires transferring a large number of digital assets of different sizes, many of them large, static scene elements corresponding to service-specific and user-specific components. We argue in this paper that such elements could be separated within an in-network rendering farm while dynamically caching popular assets and synchronizing rapidly changing, user-centric data at INC, edge, or cloud nodes. Still, all of these need to be orchestrated efficiently. To efficiently orchestrate these heterogeneous resources, we formulate in this paper a multi-objective optimization problem—maximizing resource efficiency, minimizing end-to-end latency, and maximizing user request acceptance. This optimization problem is then solved via a deep reinforcement learning (DRL) framework that adaptively assigns functions across all layers in real time. Our proposed popularity-based replication and pre-caching further reduce latency for the most frequently accessed assets, while we offload lightweight rendering operations directly onto programmable switches to cut down on round-trip delays. Extensive simulations, benchmarked against multiple baselines, demonstrate that our approach consistently maintains sub-20 ms end-to-end delays and achieves superior resource utilization efficiency under dynamic workloads. These results validate the potential of integrating INC into the Compute Continuum together with DRL-driven orchestration, which jointly allow meeting the stringent Quality of Service (QoS) and Quality of Experience (QoE) requirements of next-generation immersive applications.
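The abstract names three objectives—resource efficiency, end-to-end latency, and request acceptance—that the DRL orchestrator optimizes jointly. A common way to solve such a multi-objective problem with reinforcement learning is to scalarize it into a weighted reward. The sketch below is a minimal illustration of that idea only; the weight names, values, and normalization are assumptions for illustration, not the paper's actual formulation:

```python
# Hypothetical scalarized reward for a DRL orchestrator balancing the three
# objectives named in the abstract. Weights and the latency normalization
# are illustrative assumptions, not taken from the paper.
def reward(resource_efficiency, e2e_latency_ms, accepted,
           w_eff=0.4, w_lat=0.4, w_acc=0.2, latency_budget_ms=20.0):
    """Combine the three objectives into one scalar reward.

    resource_efficiency: fraction of provisioned capacity doing useful work (0..1)
    e2e_latency_ms:      measured end-to-end delay for this request
    accepted:            1 if the user request was admitted, else 0
    """
    # Normalize latency against the 20 ms target reported in the abstract;
    # requests over budget contribute zero latency reward.
    latency_score = max(0.0, 1.0 - e2e_latency_ms / latency_budget_ms)
    return w_eff * resource_efficiency + w_lat * latency_score + w_acc * accepted

# A request served in 10 ms at 80% efficiency outscores a rejected,
# over-budget one at the same efficiency.
good = reward(0.8, 10.0, 1)   # 0.4*0.8 + 0.4*0.5 + 0.2*1 = 0.72
bad = reward(0.8, 25.0, 0)    # 0.4*0.8 + 0.4*0.0 + 0.2*0 = 0.32
```

Scalarization lets a standard DRL agent maximize one number per step; the weight choice encodes the operator's trade-off between utilization, latency, and admission rate.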