Offline-to-online reinforcement learning with efficient unconstrained fine-tuning
Jun Zheng, Runda Jia, Shaoning Liu, Ranmeng Lin, Dakuo He, Fuli Wang
Neural Networks, Volume 194, Article 108120 (published 2025-09-17). DOI: 10.1016/j.neunet.2025.108120
Citations: 0
Abstract
Offline reinforcement learning makes it possible to learn a policy solely from pre-collected datasets, but its performance is often limited by the quality of the offline dataset and its coverage of the state-action space. Offline-to-online reinforcement learning promises to address these limitations and achieve high sample efficiency by combining the advantages of both learning paradigms. However, existing methods typically struggle to adapt to online learning and to improve pre-trained policies because of distributional shift and overly conservative training. To address these issues, we propose an efficient unconstrained fine-tuning framework that removes conservative constraints on the policy during fine-tuning, allowing thorough exploration of state-action pairs not covered by the offline data. The framework leverages three key techniques: dynamics representation learning, layer normalization, and an increased update frequency for the value network, which together improve sample efficiency and mitigate value-function estimation bias caused by distributional shift. Dynamics representation learning accelerates fine-tuning by capturing meaningful features, layer normalization bounds the Q-values to suppress catastrophic value-function divergence, and the increased update frequency of the value network enhances sample efficiency and reduces value-function estimation bias. Extensive experiments on the D4RL benchmark demonstrate that our algorithm outperforms state-of-the-art offline-to-online reinforcement learning algorithms across various tasks with minimal online interaction.
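To make two of the abstract's techniques concrete, below is a minimal, illustrative sketch (not the authors' implementation) of a Q-network with layer normalization after each hidden layer, and a fine-tuning loop that updates the value network more often than the policy (a higher update-to-data ratio). All class names, function arguments, and hyperparameter values here are assumptions for illustration only.

```python
# Illustrative sketch only; not the paper's code. Assumes a PyTorch-style setup.
import torch
import torch.nn as nn


class LayerNormCritic(nn.Module):
    """Q(s, a) network with LayerNorm after each hidden layer, which helps keep
    Q-value estimates bounded and suppresses catastrophic value divergence."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.LayerNorm(hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.LayerNorm(hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))


UTD_RATIO = 4  # hypothetical value: critic updates per environment interaction


def fine_tune_step(transition, replay_buffer, critic_update, policy_update):
    """One online fine-tuning step: several value-network updates per collected
    transition, and a single unconstrained policy update.
    `replay_buffer`, `critic_update`, and `policy_update` are hypothetical callables."""
    replay_buffer.add(transition)
    for _ in range(UTD_RATIO):
        critic_update(replay_buffer.sample())   # higher update frequency for the critic
    policy_update(replay_buffer.sample())       # policy trained without conservative constraints
```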
Journal description:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.