Deep Reinforcement Learning Based on Search Space Independent Operators for Black-Box Continuous Optimization

IF 19.2 · CAS Tier 1 (Computer Science) · JCR Q1 (Automation & Control Systems)
IEEE/CAA Journal of Automatica Sinica · Vol. 13, No. 4, pp. 913-925 · Pub Date: 2026-04-01 · Epub Date: 2026-04-30 · DOI: 10.1109/JAS.2025.125444
Ye Tian; Yisai Liu; Shangshang Yang; Xingyi Zhang
Citations: 0

Abstract

Deep reinforcement learning (DRL) has demonstrated exceptional capabilities in combinatorial optimization, where it automatically devises policies for solution construction and optimizer refinement. DRL is particularly adept at generating training samples by itself, which gives it the flexibility to solve a variety of combinatorial optimization problems without supervision. However, because DRL takes actions according to states extracted from problem-specific information, it cannot be directly applied to black-box continuous optimization, where no such explicit information is available. To address this issue, this paper proposes a DRL method for black-box continuous optimization based on search-space-independent operators. It conceptualizes the optimization process driven by such operators as a Markov decision process, wherein actions are defined as operators and states are extracted from the solutions the operators generate. In contrast to other DRL-assisted metaheuristics, the proposed method does not rely on any existing metaheuristic; instead, it creates entirely new operators, allowing it to surpass the performance boundaries of existing metaheuristics. Compared with state-of-the-art metaheuristics and DRL methods, the proposed method converges significantly faster on challenging continuous optimization problems.
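The core framing in the abstract, casting operator-driven search as a Markov decision process whose actions are search-space-independent operators and whose states are features of the solutions those operators produce, can be illustrated with a minimal sketch. Everything below is an illustrative assumption rather than the paper's actual architecture: the three hand-written operators, the four-value discretized state, and the tabular Q-learning agent merely stand in for the learned operators and deep policy described in the paper.

```python
import numpy as np

def sphere(x):
    """Black-box objective (illustrative): only function values are observed."""
    return float(np.sum(x ** 2))

# Hypothetical search-space-independent operators: each maps the current
# population to candidate solutions using only solution vectors, never
# any structural knowledge of the problem.
def op_gaussian(pop, best):
    return pop + 0.1 * np.random.randn(*pop.shape)

def op_toward_best(pop, best):
    return pop + 0.5 * np.random.rand(*pop.shape) * (best - pop)

def op_recombine(pop, best):
    partners = pop[np.random.permutation(len(pop))]
    return 0.5 * (pop + partners)

OPERATORS = [op_gaussian, op_toward_best, op_recombine]  # the MDP's actions

def state_of(fit_old, fit_new):
    """Coarse state extracted from generated solutions: did the operator
    improve the best / the mean fitness? (4 discrete states)."""
    improved_best = int(fit_new.min() < fit_old.min())
    improved_mean = int(fit_new.mean() < fit_old.mean())
    return improved_best * 2 + improved_mean

# Tabular Q-learning stands in for the deep policy of the actual method.
Q = np.zeros((4, len(OPERATORS)))
alpha, gamma, eps = 0.1, 0.9, 0.2

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(20, 10))        # 20 solutions, 10 dimensions
fit = np.array([sphere(x) for x in pop])
state = 0
for step in range(200):
    # epsilon-greedy choice among operators
    a = rng.integers(len(OPERATORS)) if rng.random() < eps else int(Q[state].argmax())
    best = pop[fit.argmin()]
    cand = OPERATORS[a](pop, best)
    cand_fit = np.array([sphere(x) for x in cand])
    # environmental selection: keep the elementwise winners
    keep = cand_fit < fit
    new_pop = np.where(keep[:, None], cand, pop)
    new_fit = np.where(keep, cand_fit, fit)
    reward = fit.min() - new_fit.min()          # reward = improvement of best fitness
    next_state = state_of(fit, new_fit)
    Q[state, a] += alpha * (reward + gamma * Q[next_state].max() - Q[state, a])
    pop, fit, state = new_pop, new_fit, next_state

print("best objective found:", fit.min())
```

The sketch only mirrors the MDP framing; in the paper itself the operators are newly created rather than hand-written, and both state extraction and the operator-selection policy are learned by deep networks.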
Source Journal

IEEE/CAA Journal of Automatica Sinica (Engineering: Control and Systems Engineering)
CiteScore: 23.50
Self-citation rate: 11.00%
Annual article output: 880
Journal description: The IEEE/CAA Journal of Automatica Sinica is a reputable journal that publishes high-quality papers in English on original theoretical/experimental research and development in the field of automation. The journal covers a wide range of topics, including automatic control, artificial intelligence and intelligent control, systems theory and engineering, pattern recognition and intelligent systems, automation engineering and applications, information processing and information systems, network-based automation, robotics, sensing and measurement, and navigation, guidance, and control. The journal is abstracted/indexed in several prominent databases, including SCIE (Science Citation Index Expanded), EI (Engineering Index), Inspec, Scopus, SCImago, DBLP, CNKI (China National Knowledge Infrastructure), CSCD (Chinese Science Citation Database), and IEEE Xplore.