Toward a Reinforcement Learning Approach for Balancing Risk and Cost in Cloud-Based IoT-Driven Business Processes

IF 1.5 · CAS Quartile 4 (Computer Science) · JCR Q3 (COMPUTER SCIENCE, SOFTWARE ENGINEERING)
Amina Ahmed Nacer, Mohammed Riyadh Abdmeziem
{"title":"在基于云的物联网驱动的业务流程中平衡风险和成本的强化学习方法","authors":"Amina Ahmed Nacer,&nbsp;Mohammed Riyadh Abdmeziem","doi":"10.1002/cpe.70212","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>Secure and cost-efficient deployment of Internet of Things (IoT)-driven business processes (BPs) in multicloud environments is a complex task, particularly under dynamic workloads and strict risk or budgetary constraints. We address this challenge by leveraging deep reinforcement learning (DRL) to determine optimal task-to-cloud allocations that comply with user-defined cost or risk thresholds. Our approach introduces a <i>Q</i>-learning model that integrates a confidentiality risk metric and a cost evaluation function into its learning process. Unlike traditional heuristics, the DRL agent adapts to evolving constraints through trial-and-error interaction with its environment. Experimental results on a diverse set of deployment configurations show that our approach achieves 25%–30% reductions in both risk and cost compared to heuristic baselines, while satisfying thresholds in 75% of cases. It also demonstrates strong adaptability to dynamic changes, including task additions and resource fluctuations. These findings highlight the potential of reinforcement learning for reliable, constraint-aware deployment in cloud-based IoT systems.</p>\n </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 21-22","pages":""},"PeriodicalIF":1.5000,"publicationDate":"2025-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Toward a Reinforcement Learning Approach for Balancing Risk and Cost in Cloud-Based IoT-Driven Business Processes\",\"authors\":\"Amina Ahmed Nacer,&nbsp;Mohammed Riyadh Abdmeziem\",\"doi\":\"10.1002/cpe.70212\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n <p>Secure and cost-efficient deployment of Internet of Things (IoT)-driven business processes (BPs) in multicloud environments is a complex task, particularly under dynamic workloads and strict risk or budgetary constraints. We address this challenge by leveraging deep reinforcement learning (DRL) to determine optimal task-to-cloud allocations that comply with user-defined cost or risk thresholds. Our approach introduces a <i>Q</i>-learning model that integrates a confidentiality risk metric and a cost evaluation function into its learning process. Unlike traditional heuristics, the DRL agent adapts to evolving constraints through trial-and-error interaction with its environment. Experimental results on a diverse set of deployment configurations show that our approach achieves 25%–30% reductions in both risk and cost compared to heuristic baselines, while satisfying thresholds in 75% of cases. It also demonstrates strong adaptability to dynamic changes, including task additions and resource fluctuations. 
These findings highlight the potential of reinforcement learning for reliable, constraint-aware deployment in cloud-based IoT systems.</p>\\n </div>\",\"PeriodicalId\":55214,\"journal\":{\"name\":\"Concurrency and Computation-Practice & Experience\",\"volume\":\"37 21-22\",\"pages\":\"\"},\"PeriodicalIF\":1.5000,\"publicationDate\":\"2025-07-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Concurrency and Computation-Practice & Experience\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/cpe.70212\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, SOFTWARE ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Concurrency and Computation-Practice & Experience","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/cpe.70212","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0

Abstract


Secure and cost-efficient deployment of Internet of Things (IoT)-driven business processes (BPs) in multicloud environments is a complex task, particularly under dynamic workloads and strict risk or budgetary constraints. We address this challenge by leveraging deep reinforcement learning (DRL) to determine optimal task-to-cloud allocations that comply with user-defined cost or risk thresholds. Our approach introduces a Q-learning model that integrates a confidentiality risk metric and a cost evaluation function into its learning process. Unlike traditional heuristics, the DRL agent adapts to evolving constraints through trial-and-error interaction with its environment. Experimental results on a diverse set of deployment configurations show that our approach achieves 25%–30% reductions in both risk and cost compared to heuristic baselines, while satisfying thresholds in 75% of cases. It also demonstrates strong adaptability to dynamic changes, including task additions and resource fluctuations. These findings highlight the potential of reinforcement learning for reliable, constraint-aware deployment in cloud-based IoT systems.
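The abstract describes a Q-learning agent that folds a confidentiality risk metric and a cost evaluation function into its learning process, with user-defined risk and cost thresholds. As a rough illustration of that idea, here is a minimal sketch of threshold-aware Q-learning for task-to-cloud allocation. Everything concrete in it is an assumption: the per-cloud risk/cost tables, the threshold values, the penalty weight, and the epsilon schedule are invented placeholders, not the paper's actual model or metrics.

```python
# Minimal, illustrative Q-learning sketch for task-to-cloud allocation.
# All numbers below (risk/cost tables, thresholds, penalty, schedule) are
# hypothetical placeholders, not values from the paper.
import random
from collections import defaultdict

RISK = [[0.8, 0.3, 0.5], [0.2, 0.6, 0.4], [0.7, 0.2, 0.3]]  # RISK[task][cloud]
COST = [[0.2, 0.7, 0.5], [0.6, 0.3, 0.4], [0.3, 0.8, 0.6]]  # COST[task][cloud]
N_TASKS, N_CLOUDS = len(RISK), len(RISK[0])
RISK_MAX, COST_MAX = 1.5, 1.5  # user-defined thresholds (assumed values)

def run_episode(q, eps, alpha=0.1, gamma=0.9):
    """Assign tasks in order; state = (task index, rounded cumulative risk/cost)."""
    total_risk = total_cost = 0.0
    for t in range(N_TASKS):
        state = (t, round(total_risk, 1), round(total_cost, 1))
        if random.random() < eps:                      # explore
            a = random.randrange(N_CLOUDS)
        else:                                          # exploit
            a = max(range(N_CLOUDS), key=lambda c: q[state][c])
        total_risk += RISK[t][a]
        total_cost += COST[t][a]
        # Reward: prefer low risk+cost; heavy penalty once a threshold is crossed.
        r = -(RISK[t][a] + COST[t][a])
        if total_risk > RISK_MAX or total_cost > COST_MAX:
            r -= 5.0
        next_state = (t + 1, round(total_risk, 1), round(total_cost, 1))
        best_next = max(q[next_state]) if t + 1 < N_TASKS else 0.0
        q[state][a] += alpha * (r + gamma * best_next - q[state][a])
    return total_risk, total_cost

q = defaultdict(lambda: [0.0] * N_CLOUDS)
for ep in range(5000):
    run_episode(q, eps=max(0.05, 1.0 - ep / 2500))  # decaying exploration
print(run_episode(q, eps=0.0))  # greedy rollout after training
```

In this toy setup, the greedy rollout after training tends to select low-risk, low-cost clouds per task while keeping cumulative totals under both thresholds, which mirrors the constraint-aware behavior the abstract reports; the paper's actual state encoding, confidentiality risk metric, and reward shaping may differ substantially.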

Source Journal

Concurrency and Computation-Practice & Experience (Engineering & Technology – Computer Science: Theory & Methods)
CiteScore: 5.00
Self-citation rate: 10.00%
Annual publications: 664
Review time: 9.6 months

Journal introduction: Concurrency and Computation: Practice and Experience (CCPE) publishes high-quality, original research papers, and authoritative research review papers, in the overlapping fields of: Parallel and distributed computing; High-performance computing; Computational and data science; Artificial intelligence and machine learning; Big data applications, algorithms, and systems; Network science; Ontologies and semantics; Security and privacy; Cloud/edge/fog computing; Green computing; and Quantum computing.