Self-Evolutionary Large Language Models through Uncertainty-Enhanced Preference Optimization
Jianing Wang, Yang Zhou, Xiaocheng Zhang, Mengjiao Bao, Peng Yan
arXiv:2409.11212 (arXiv - CS - Computation and Language), published 2024-09-17
Abstract
Iterative preference optimization has recently become one of the de facto training paradigms for large language models (LLMs), but its performance is still underwhelming because the loop yields a large amount of noisy preference data. To combat this issue, we present an Uncertainty-enhanced Preference Optimization (UPO) framework that lets the LLM self-evolve with reliable feedback. The key idea is to mitigate the noisy preference data produced by the current policy and reward models through pair-wise uncertainty estimation and judicious sampling of reliable feedback. To this end, we introduce an estimator model that incorporates Monte Carlo (MC) dropout, a Bayesian neural network (BNN) approximation, to estimate the uncertainty of the preference data derived from the LLM policy. Unlike existing methods that filter generated responses directly by reward score, the estimator measures model uncertainty in a pair-wise manner and effectively bypasses the reward model's confirmation bias. We further propose an uncertainty-enhanced self-evolution algorithm that improves the robustness of preference optimization and encourages the LLM to generate responses with both high reward and high certainty. Extensive experiments on multiple benchmarks demonstrate that our framework substantially alleviates the noise problem and improves the performance of iterative preference optimization.
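
The abstract describes pair-wise uncertainty estimation via MC dropout over preference pairs. The snippet below is a minimal sketch of that general idea only, not the authors' implementation: the estimator architecture, the use of fixed-size embeddings as inputs, the number of stochastic passes, and the filtering threshold are all illustrative assumptions.

```python
# Minimal sketch of pair-wise uncertainty estimation with MC dropout.
# All architecture choices, hyperparameters, and the threshold are assumptions
# for illustration; the paper's estimator and training loop may differ.
import torch
import torch.nn as nn


class PairwiseEstimator(nn.Module):
    """Toy estimator scoring the probability that response A is preferred over B.

    In practice the inputs would be representations of (prompt, response_a,
    response_b); random embeddings are used here to keep the sketch runnable.
    """

    def __init__(self, dim: int = 768, hidden: int = 256, p_drop: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 * dim, hidden),
            nn.ReLU(),
            nn.Dropout(p_drop),  # dropout stays active at inference for MC sampling
            nn.Linear(hidden, 1),
        )

    def forward(self, prompt, resp_a, resp_b):
        x = torch.cat([prompt, resp_a, resp_b], dim=-1)
        return torch.sigmoid(self.net(x)).squeeze(-1)  # P(A preferred over B)


@torch.no_grad()
def mc_dropout_uncertainty(model, prompt, resp_a, resp_b, n_samples: int = 20):
    """Run several stochastic forward passes with dropout enabled and return
    the mean preference probability and its standard deviation (uncertainty)."""
    model.train()  # keeps dropout layers stochastic; no gradients are computed
    probs = torch.stack([model(prompt, resp_a, resp_b) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)


if __name__ == "__main__":
    est = PairwiseEstimator()
    prompt, resp_a, resp_b = (torch.randn(4, 768) for _ in range(3))
    mean_pref, uncertainty = mc_dropout_uncertainty(est, prompt, resp_a, resp_b)
    # Keep only preference pairs the estimator is certain about
    # (0.05 is an arbitrary illustrative threshold).
    reliable_mask = uncertainty < 0.05
    print(mean_pref, uncertainty, reliable_mask)
```

Under this reading, pairs whose preference probability fluctuates heavily across dropout samples are treated as noisy and excluded from the next round of preference optimization, while low-variance pairs are retained as reliable feedback.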