{"title":"Offline reward shaping with scaling human preference feedback for deep reinforcement learning","authors":"Jinfeng Li, Biao Luo, Xiaodong Xu, Tingwen Huang","doi":"10.1016/j.neunet.2024.106848","DOIUrl":null,"url":null,"abstract":"<div><div>Designing reward functions that fully align with human intent is often challenging. Preference-based Reinforcement Learning (PbRL) provides a framework where humans can select preferred segments through pairwise comparisons of behavior trajectory segments, facilitating reward function learning. However, existing methods collect non-dynamic preferences and struggle to provide accurate information about preference intensity. We propose scaling preference (SP) feedback method and qualitative and quantitative scaling preference (Q2SP) feedback method, which allow humans to express the true degree of preference between trajectories, thus helping reward learn more accurate human preferences from offline data. Our key insight is that more detailed feedback facilitates the learning of reward functions that better align with human intent. Experiments demonstrate that, across a range of control and robotic benchmark tasks, our methods are highly competitive compared to baselines and state of the art approaches.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"181 ","pages":"Article 106848"},"PeriodicalIF":6.0000,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S089360802400772X","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Designing reward functions that fully align with human intent is often challenging. Preference-based Reinforcement Learning (PbRL) provides a framework in which humans select preferred segments through pairwise comparisons of behavior trajectory segments, facilitating reward function learning. However, existing methods collect non-dynamic preferences and struggle to provide accurate information about preference intensity. We propose the scaling preference (SP) feedback method and the qualitative and quantitative scaling preference (Q2SP) feedback method, which allow humans to express the true degree of preference between trajectories, helping the reward model learn more accurate human preferences from offline data. Our key insight is that more detailed feedback facilitates the learning of reward functions that better align with human intent. Experiments demonstrate that, across a range of control and robotic benchmark tasks, our methods are highly competitive with baselines and state-of-the-art approaches.
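To make the idea of scaled preference feedback concrete, the sketch below shows how a reward model could be trained from preference intensities rather than hard binary labels. It assumes the standard Bradley-Terry style PbRL reward-learning loss (segment rewards summed, sigmoid of the difference, cross-entropy against the label) with the binary label replaced by a scalar intensity y in [0, 1]. The class and function names, network sizes, and the exact loss form are illustrative assumptions; the paper's SP/Q2SP formulations may differ.

```python
# Minimal sketch: reward learning from scaled (non-binary) preference labels.
# Assumption: Bradley-Terry PbRL loss with a soft label y in [0, 1]
# (0.5 = indifferent, 1.0 = segment 1 strongly preferred). Not the paper's code.
import torch
import torch.nn as nn


class RewardModel(nn.Module):
    """Maps a (state, action) pair to a scalar reward estimate."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)


def scaled_preference_loss(reward_model, seg0, seg1, y):
    """Cross-entropy between the Bradley-Terry preference probability and a
    scaled label y in [0, 1].

    seg0, seg1: dicts with 'obs' [B, T, obs_dim] and 'act' [B, T, act_dim].
    y: tensor of shape [B] holding preference intensities.
    """
    # Sum predicted per-step rewards over each segment.
    r0 = reward_model(seg0["obs"], seg0["act"]).sum(dim=1)  # [B]
    r1 = reward_model(seg1["obs"], seg1["act"]).sum(dim=1)  # [B]
    # P(segment 1 preferred over segment 0) under the Bradley-Terry model.
    p1 = torch.sigmoid(r1 - r0)
    # Soft cross-entropy against the scaled preference label.
    eps = 1e-8
    return -(y * torch.log(p1 + eps) + (1 - y) * torch.log(1 - p1 + eps)).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    B, T, obs_dim, act_dim = 8, 50, 11, 3
    model = RewardModel(obs_dim, act_dim)
    opt = torch.optim.Adam(model.parameters(), lr=3e-4)
    # Random stand-ins for offline trajectory segments and scaled human feedback.
    seg0 = {"obs": torch.randn(B, T, obs_dim), "act": torch.randn(B, T, act_dim)}
    seg1 = {"obs": torch.randn(B, T, obs_dim), "act": torch.randn(B, T, act_dim)}
    y = torch.rand(B)  # scaled labels instead of hard 0/1 comparisons
    loss = scaled_preference_loss(model, seg0, seg1, y)
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"preference loss: {loss.item():.4f}")
```

With y restricted to {0, 1}, this reduces to the usual pairwise-comparison PbRL objective; allowing intermediate values is one way the reward model can exploit information about how strongly one trajectory is preferred over the other.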
About the journal:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.