Offline-to-online reinforcement learning with efficient unconstrained fine-tuning

Impact Factor: 6.3 · CAS Tier 1 (Computer Science) · JCR Q1, Computer Science, Artificial Intelligence
Jun Zheng, Runda Jia, Shaoning Liu, Ranmeng Lin, Dakuo He, Fuli Wang
{"title":"Offline-to-online reinforcement learning with efficient unconstrained fine-tuning","authors":"Jun Zheng ,&nbsp;Runda Jia ,&nbsp;Shaoning Liu ,&nbsp;Ranmeng Lin ,&nbsp;Dakuo He ,&nbsp;Fuli Wang","doi":"10.1016/j.neunet.2025.108120","DOIUrl":null,"url":null,"abstract":"<div><div>Offline reinforcement learning provides the capability to learn a policy only from pre-collected datasets, but its performance is often limited by the quality of the offline dataset and the coverage of the state-action space. Offline-to-online reinforcement learning is promising to address these limitations and achieve high sample efficiency by integrating the advantages of both offline and online learning paradigms. However, existing methods typically struggle to adapt to online learning and improve the performance of pre-trained policies due to the distributional shift and conservative training. To address these issues, we propose an efficient unconstrained fine-tuning framework that removes conservative constraints on the policy during fine-tuning, allowing thorough exploration of state-action pairs not covered by the offline data. This framework leverages three key techniques: dynamics representation learning, layer normalization, and increasing the update frequency of the value network to improve sample efficiency and mitigate value function estimation bias caused by the distributional shift. Dynamics representation learning accelerates fine-tuning by capturing meaningful features, layer normalization bounds <span><math><mi>Q</mi></math></span>-value to suppress catastrophic value function divergence, and increasing the update frequency of the value network enhances the sample efficiency and reduces value function estimation bias. Extensive experiments on the D4RL benchmark demonstrate that our algorithm outperforms state-of-the-art offline-to-online reinforcement learning algorithms across various tasks with minimal online interactions.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"194 ","pages":"Article 108120"},"PeriodicalIF":6.3000,"publicationDate":"2025-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608025010007","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

Offline reinforcement learning makes it possible to learn a policy solely from pre-collected datasets, but its performance is often limited by the quality of the offline dataset and its coverage of the state-action space. Offline-to-online reinforcement learning promises to address these limitations and achieve high sample efficiency by integrating the advantages of both learning paradigms. However, existing methods typically struggle to adapt to online learning and to improve on pre-trained policies because of distributional shift and conservative training. To address these issues, we propose an efficient unconstrained fine-tuning framework that removes conservative constraints on the policy during fine-tuning, allowing thorough exploration of state-action pairs not covered by the offline data. The framework leverages three key techniques to improve sample efficiency and mitigate the value-function estimation bias caused by distributional shift: dynamics representation learning, layer normalization, and an increased update frequency for the value network. Dynamics representation learning accelerates fine-tuning by capturing meaningful features, layer normalization bounds the Q-values to suppress catastrophic value-function divergence, and the higher value-network update frequency improves sample efficiency and reduces value-function estimation bias. Extensive experiments on the D4RL benchmark demonstrate that our algorithm outperforms state-of-the-art offline-to-online reinforcement learning algorithms across various tasks with minimal online interaction.
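
The abstract names two implementation-level ingredients that are easy to illustrate in isolation: layer normalization inside the critic to keep Q-value estimates bounded, and a value-network update frequency greater than one gradient step per collected transition. The sketch below is a minimal, hedged illustration of those two ideas in PyTorch; it is not the authors' code, and the `buffer.sample` helper, the `policy` callable, and the `utd_ratio` parameter are hypothetical names introduced only for the example.

```python
# Minimal sketch (not the paper's implementation): a layer-normalized critic
# and a fine-tuning step that updates the value network several times per
# environment transition (update-to-data ratio > 1).
import torch
import torch.nn as nn
import torch.nn.functional as F


class QNetwork(nn.Module):
    """Critic with LayerNorm after each hidden layer to keep Q-values bounded."""

    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.LayerNorm(hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.LayerNorm(hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))


def fine_tune_step(q, q_target, policy, optimizer, buffer, utd_ratio=4,
                   gamma=0.99, tau=0.005, batch_size=256):
    """One environment step's worth of training: the critic is updated
    `utd_ratio` times per collected transition to raise sample efficiency.
    `buffer.sample` and `policy` are assumed helpers, not part of the paper."""
    for _ in range(utd_ratio):
        obs, act, rew, next_obs, done = buffer.sample(batch_size)
        with torch.no_grad():
            next_act = policy(next_obs)
            target = rew + gamma * (1.0 - done) * q_target(next_obs, next_act).squeeze(-1)
        q_loss = F.mse_loss(q(obs, act).squeeze(-1), target)
        optimizer.zero_grad()
        q_loss.backward()
        optimizer.step()
        # Polyak-average the target critic toward the online critic.
        with torch.no_grad():
            for p, p_t in zip(q.parameters(), q_target.parameters()):
                p_t.mul_(1.0 - tau).add_(tau * p)
```

In this sketch the normalization bounds the scale of hidden activations, which in practice tempers Q-value explosion when out-of-distribution actions are queried, while the `utd_ratio` loop is one simple way to realize "increasing the update frequency of the value network"; the paper's actual architecture, dynamics representation learning component, and hyperparameters may differ.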
Source journal: Neural Networks (Engineering/Technology – Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Articles published per year: 425
Review time: 67 days
About the journal: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. The journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussion between biology and technology, it aims to encourage the development of biologically inspired artificial intelligence.