Reranking Laws for Language Generation: A Communication-Theoretic Perspective

António Farinhas, Haau-Sing Li, André F. T. Martins
{"title":"Reranking Laws for Language Generation: A Communication-Theoretic Perspective","authors":"António Farinhas, Haau-Sing Li, André F. T. Martins","doi":"arxiv-2409.07131","DOIUrl":null,"url":null,"abstract":"To ensure large language models (LLMs) are used safely, one must reduce their\npropensity to hallucinate or to generate unacceptable answers. A simple and\noften used strategy is to first let the LLM generate multiple hypotheses and\nthen employ a reranker to choose the best one. In this paper, we draw a\nparallel between this strategy and the use of redundancy to decrease the error\nrate in noisy communication channels. We conceptualize the generator as a\nsender transmitting multiple descriptions of a message through parallel noisy\nchannels. The receiver decodes the message by ranking the (potentially\ncorrupted) descriptions and selecting the one found to be most reliable. We\nprovide conditions under which this protocol is asymptotically error-free\n(i.e., yields an acceptable answer almost surely) even in scenarios where the\nreranker is imperfect (governed by Mallows or Zipf-Mandelbrot models) and the\nchannel distributions are statistically dependent. We use our framework to\nobtain reranking laws which we validate empirically on two real-world tasks\nusing LLMs: text-to-code generation with DeepSeek-Coder 7B and machine\ntranslation of medical data with TowerInstruct 13B.","PeriodicalId":501340,"journal":{"name":"arXiv - STAT - Machine Learning","volume":"45 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - STAT - Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07131","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

To ensure large language models (LLMs) are used safely, one must reduce their propensity to hallucinate or to generate unacceptable answers. A simple and often used strategy is to first let the LLM generate multiple hypotheses and then employ a reranker to choose the best one. In this paper, we draw a parallel between this strategy and the use of redundancy to decrease the error rate in noisy communication channels. We conceptualize the generator as a sender transmitting multiple descriptions of a message through parallel noisy channels. The receiver decodes the message by ranking the (potentially corrupted) descriptions and selecting the one found to be most reliable. We provide conditions under which this protocol is asymptotically error-free (i.e., yields an acceptable answer almost surely) even in scenarios where the reranker is imperfect (governed by Mallows or Zipf-Mandelbrot models) and the channel distributions are statistically dependent. We use our framework to obtain reranking laws which we validate empirically on two real-world tasks using LLMs: text-to-code generation with DeepSeek-Coder 7B and machine translation of medical data with TowerInstruct 13B.
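To make the protocol concrete, below is a minimal Monte-Carlo sketch (not the authors' code) of the generate-then-rerank scheme with an imperfect reranker governed by a Mallows model. The acceptability probability `p_acc`, the dispersion `theta`, and the independence of the hypotheses are illustrative assumptions; the paper also covers statistically dependent channels and Zipf-Mandelbrot rerankers.

```python
import math
import random

def sample_mallows_perm(n: int, theta: float) -> list[int]:
    """Sample a permutation of range(n) from a Mallows model centered at the
    identity, via the repeated insertion model. Larger theta means a more
    reliable reranker; theta = 0 gives a uniformly random ranking."""
    perm: list[int] = []
    for i in range(n):
        # Insert item i (true rank i, 0 = best) at position j in {0, ..., i};
        # the weight exp(-theta * (i - j)) favors keeping it near its true rank.
        weights = [math.exp(-theta * (i - j)) for j in range(i + 1)]
        j = random.choices(range(i + 1), weights=weights)[0]
        perm.insert(j, i)
    return perm

def protocol_error(n_hyp: int, p_acc: float, theta: float,
                   trials: int = 5000) -> float:
    """Monte-Carlo estimate of the failure probability of the
    generate-then-rerank protocol: the sender emits n_hyp i.i.d. hypotheses,
    each acceptable with probability p_acc (an illustrative simplification);
    the receiver sees them through a Mallows-noisy ranking and keeps its
    top pick."""
    errors = 0
    for _ in range(trials):
        # Number of acceptable hypotheses; true ranks 0..k-1 are the
        # acceptable ones, ranks k..n_hyp-1 are unacceptable.
        k = sum(random.random() < p_acc for _ in range(n_hyp))
        if k == 0:
            errors += 1  # no acceptable hypothesis was generated at all
            continue
        perceived = sample_mallows_perm(n_hyp, theta)
        if perceived[0] >= k:  # the reranker's top pick is unacceptable
            errors += 1
    return errors / trials

# The error rate should fall toward zero as the number of hypotheses grows,
# provided the reranker is informative (theta > 0), mirroring the
# asymptotically error-free regime described in the abstract.
for n in (1, 2, 4, 8, 16, 32):
    print(n, protocol_error(n, p_acc=0.3, theta=1.0))
```

The Mallows ranking is drawn with the repeated insertion model, a standard exact sampler for this distribution; the actual reranking laws derived in the paper characterize precisely how fast this error probability decays with the number of hypotheses.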