Progressive discretization for generative retrieval: A self-supervised approach to high-quality DocID generation

IF 6.0 | CAS Tier 1 (Computer Science) | JCR Q1: COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Shunyu Yao, Jie Hu, Zhiyuan Zhang, Dan Liu
DOI: 10.1016/j.neunet.2025.107663
Journal: Neural Networks, Volume 190, Article 107663
Publication date: 2025-06-04 (Journal Article)
Citations: 0

Abstract

Generative retrieval is a novel retrieval paradigm where large language models serve as differentiable indices to memorize and retrieve candidate documents in a generative fashion. This paradigm overcomes the limitation that documents and queries must be encoded separately and demonstrates superior performance compared to traditional retrieval methods. To support the retrieval of large-scale corpora, extensive research has been devoted to devising a discrete and distinguishable document representation, namely the DocID. However, most DocIDs are built under unsupervised circumstances, where uncontrollable information distortion will be introduced during the discretization stage. In this work, we propose the Self-supervised Progressive Discretization framework (SPD). SPD first distills document information into multi-perspective continuous representations in a self-supervised way. Then, a progressive discretization algorithm is employed to transform the continuous representations into approximate vectors and discrete DocIDs. The self-supervised model, approximate vectors, and DocIDs are further integrated into a query-side training pipeline to produce an effective generative retriever. Experiments on popular benchmarks demonstrate that SPD builds high-quality search-oriented DocIDs that achieve state-of-the-art generative retrieval performance.
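The abstract does not spell out SPD's progressive discretization algorithm, so the following is only a generic illustration of the underlying idea: turning continuous document embeddings into discrete, prefix-structured DocIDs by recursive clustering. The hierarchical k-means scheme, the function name `hierarchical_docids`, and all parameters here are hypothetical stand-ins, not the paper's method.

```python
import numpy as np

def hierarchical_docids(embeddings, k=2, depth=3, seed=0):
    """Assign each document a discrete DocID (a tuple of cluster indices)
    by recursively clustering its embedding with k-means.

    Illustrative sketch only: SPD's actual progressive discretization
    (with approximate vectors and self-supervised representations)
    is more involved than this.
    """
    rng = np.random.default_rng(seed)
    n = len(embeddings)
    docids = [[] for _ in range(n)]

    def kmeans(X, k, iters=20):
        # Naive Lloyd's algorithm with random initialization.
        centers = X[rng.choice(len(X), size=min(k, len(X)), replace=False)]
        for _ in range(iters):
            # Squared Euclidean distance from every point to every center.
            d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            labels = d.argmin(1)
            for j in range(len(centers)):
                if (labels == j).any():
                    centers[j] = X[labels == j].mean(0)
        return labels

    def recurse(idx, level):
        # Stop at the target DocID length or when a cluster is trivial.
        if level == depth or len(idx) <= 1:
            return
        labels = kmeans(embeddings[idx], k)
        for doc, lab in zip(idx, labels):
            docids[doc].append(int(lab))
        for j in set(labels.tolist()):
            recurse(idx[labels == j], level + 1)

    recurse(np.arange(n), 0)
    return [tuple(d) for d in docids]

# Toy corpus of 16 random "document embeddings" in 8 dimensions.
emb = np.random.default_rng(1).normal(size=(16, 8))
ids = hierarchical_docids(emb, k=2, depth=3)
```

Each document ends up with a short sequence of cluster indices; documents sharing a DocID prefix lie in the same region of embedding space, which is what lets a generative model decode identifiers token by token.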
Source journal
Neural Networks (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Articles per year: 425
Review time: 67 days
Journal description: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.