Toward a Reinforcement Learning Approach for Balancing Risk and Cost in Cloud-Based IoT-Driven Business Processes

Amina Ahmed Nacer, Mohammed Riyadh Abdmeziem

Concurrency and Computation: Practice and Experience, vol. 37, no. 21-22, published 2025-07-31. DOI: 10.1002/cpe.70212

Abstract: Secure and cost-efficient deployment of Internet of Things (IoT)-driven business processes (BPs) in multicloud environments is a complex task, particularly under dynamic workloads and strict risk or budgetary constraints. We address this challenge by leveraging deep reinforcement learning (DRL) to determine optimal task-to-cloud allocations that comply with user-defined cost or risk thresholds. Our approach introduces a Q-learning model that integrates a confidentiality risk metric and a cost evaluation function into its learning process. Unlike traditional heuristics, the DRL agent adapts to evolving constraints through trial-and-error interaction with its environment. Experimental results on a diverse set of deployment configurations show that our approach achieves 25%–30% reductions in both risk and cost compared to heuristic baselines, while satisfying thresholds in 75% of cases. It also demonstrates strong adaptability to dynamic changes, including task additions and resource fluctuations. These findings highlight the potential of reinforcement learning for reliable, constraint-aware deployment in cloud-based IoT systems.
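The abstract describes a Q-learning agent that assigns BP tasks to clouds while folding a confidentiality risk metric and a cost function into its reward. The paper's actual model is not reproduced here; the following is a minimal, hypothetical sketch of that idea with invented toy data (`RISK`, `COST`, `RISK_THRESHOLD`, and all hyperparameters are illustrative assumptions), showing how a threshold-violation penalty can steer a tabular Q-learner toward allocations that balance both objectives.

```python
import random

# Hypothetical illustration, not the paper's implementation: tabular
# Q-learning over (task, cloud) pairs. The reward penalizes both
# confidentiality risk and cost, with an extra penalty when the
# cumulative risk exceeds a user-defined threshold.

N_TASKS, N_CLOUDS = 4, 3
random.seed(0)
# Toy per-(task, cloud) risk and cost scores in [0, 1] (invented data).
RISK = {(t, c): random.uniform(0, 1) for t in range(N_TASKS) for c in range(N_CLOUDS)}
COST = {(t, c): random.uniform(0, 1) for t in range(N_TASKS) for c in range(N_CLOUDS)}
RISK_THRESHOLD = 2.0            # user-defined budget on cumulative risk
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

# Q-table: expected return of placing task t on cloud c.
Q = {(t, c): 0.0 for t in range(N_TASKS) for c in range(N_CLOUDS)}

def reward(task, cloud, cum_risk):
    # Minimize risk + cost; heavily penalize threshold violations.
    r = -(RISK[(task, cloud)] + COST[(task, cloud)])
    if cum_risk + RISK[(task, cloud)] > RISK_THRESHOLD:
        r -= 5.0
    return r

def run_episode():
    cum_risk = 0.0
    for t in range(N_TASKS):
        # Epsilon-greedy choice among clouds for the current task.
        if random.random() < EPS:
            c = random.randrange(N_CLOUDS)
        else:
            c = max(range(N_CLOUDS), key=lambda a: Q[(t, a)])
        r = reward(t, c, cum_risk)
        # Bootstrapped target: next state is the next task in the BP,
        # terminal after the last task.
        nxt = max(Q[(t + 1, a)] for a in range(N_CLOUDS)) if t + 1 < N_TASKS else 0.0
        Q[(t, c)] += ALPHA * (r + GAMMA * nxt - Q[(t, c)])
        cum_risk += RISK[(t, c)]

for _ in range(2000):
    run_episode()

# Greedy task-to-cloud allocation learned by the agent.
allocation = {t: max(range(N_CLOUDS), key=lambda a: Q[(t, a)]) for t in range(N_TASKS)}
print(allocation)
```

The same structure extends naturally to the dynamic settings the abstract mentions (task additions, resource fluctuations) by growing the state space and retraining, though the paper's DRL agent presumably handles this with a learned function approximator rather than a table.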
Journal Introduction:
Concurrency and Computation: Practice and Experience (CCPE) publishes high-quality, original research papers, and authoritative research review papers, in the overlapping fields of:
Parallel and distributed computing;
High-performance computing;
Computational and data science;
Artificial intelligence and machine learning;
Big data applications, algorithms, and systems;
Network science;
Ontologies and semantics;
Security and privacy;
Cloud/edge/fog computing;
Green computing; and
Quantum computing.