Jintian Zhang, Cheng Peng, Mengshu Sun, Xiang Chen, Lei Liang, Zhiqiang Zhang, Jun Zhou, Huajun Chen, Ningyu Zhang
{"title":"OneGen:LLM 的高效单程统一生成和检索","authors":"Jintian Zhang, Cheng Peng, Mengshu Sun, Xiang Chen, Lei Liang, Zhiqiang Zhang, Jun Zhou, Huajun Chen, Ningyu Zhang","doi":"arxiv-2409.05152","DOIUrl":null,"url":null,"abstract":"Despite the recent advancements in Large Language Models (LLMs), which have\nsignificantly enhanced the generative capabilities for various NLP tasks, LLMs\nstill face limitations in directly handling retrieval tasks. However, many\npractical applications demand the seamless integration of both retrieval and\ngeneration. This paper introduces a novel and efficient One-pass Generation and\nretrieval framework (OneGen), designed to improve LLMs' performance on tasks\nthat require both generation and retrieval. The proposed framework bridges the\ntraditionally separate training approaches for generation and retrieval by\nincorporating retrieval tokens generated autoregressively. This enables a\nsingle LLM to handle both tasks simultaneously in a unified forward pass. We\nconduct experiments on two distinct types of composite tasks, RAG and Entity\nLinking, to validate the pluggability, effectiveness, and efficiency of OneGen\nin training and inference. Furthermore, our results show that integrating\ngeneration and retrieval within the same context preserves the generative\ncapabilities of LLMs while improving retrieval performance. To the best of our\nknowledge, OneGen is the first to enable LLMs to conduct vector retrieval\nduring the generation.","PeriodicalId":501281,"journal":{"name":"arXiv - CS - Information Retrieval","volume":"67 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"OneGen: Efficient One-Pass Unified Generation and Retrieval for LLMs\",\"authors\":\"Jintian Zhang, Cheng Peng, Mengshu Sun, Xiang Chen, Lei Liang, Zhiqiang Zhang, Jun Zhou, Huajun Chen, Ningyu Zhang\",\"doi\":\"arxiv-2409.05152\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Despite the recent advancements in Large Language Models (LLMs), which have\\nsignificantly enhanced the generative capabilities for various NLP tasks, LLMs\\nstill face limitations in directly handling retrieval tasks. However, many\\npractical applications demand the seamless integration of both retrieval and\\ngeneration. This paper introduces a novel and efficient One-pass Generation and\\nretrieval framework (OneGen), designed to improve LLMs' performance on tasks\\nthat require both generation and retrieval. The proposed framework bridges the\\ntraditionally separate training approaches for generation and retrieval by\\nincorporating retrieval tokens generated autoregressively. This enables a\\nsingle LLM to handle both tasks simultaneously in a unified forward pass. We\\nconduct experiments on two distinct types of composite tasks, RAG and Entity\\nLinking, to validate the pluggability, effectiveness, and efficiency of OneGen\\nin training and inference. Furthermore, our results show that integrating\\ngeneration and retrieval within the same context preserves the generative\\ncapabilities of LLMs while improving retrieval performance. 
To the best of our\\nknowledge, OneGen is the first to enable LLMs to conduct vector retrieval\\nduring the generation.\",\"PeriodicalId\":501281,\"journal\":{\"name\":\"arXiv - CS - Information Retrieval\",\"volume\":\"67 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Information Retrieval\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.05152\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Information Retrieval","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.05152","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
OneGen: Efficient One-Pass Unified Generation and Retrieval for LLMs
Although recent advances in Large Language Models (LLMs) have significantly enhanced generative capabilities across a range of NLP tasks, LLMs still face limitations in directly handling retrieval tasks. Yet many practical applications demand the seamless integration of retrieval and generation. This paper introduces OneGen, a novel and efficient One-pass Generation and retrieval framework designed to improve LLMs' performance on tasks that require both generation and retrieval. The framework bridges the traditionally separate training approaches for generation and retrieval by incorporating retrieval tokens generated autoregressively, enabling a single LLM to handle both tasks in a unified forward pass. We conduct experiments on two distinct types of composite tasks, RAG and Entity Linking, to validate the pluggability, effectiveness, and efficiency of OneGen in training and inference. Furthermore, our results show that integrating generation and retrieval within the same context preserves the generative capabilities of LLMs while improving retrieval performance. To the best of our knowledge, OneGen is the first framework to enable LLMs to conduct vector retrieval during generation.
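
The following is a minimal sketch, not the authors' implementation, of the core idea described in the abstract: the model emits a special retrieval token while decoding, and the hidden state at that token's position is reused as a dense query vector, so generation and retrieval share a single forward pass. The backbone model name, the token string `[RQ]`, and the retrieval step at the end are illustrative assumptions.

```python
# Sketch of OneGen-style unified generation + retrieval (assumptions marked below).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-hf"   # assumed backbone; any causal LM works
RETRIEVAL_TOKEN = "[RQ]"                  # hypothetical special token added to the vocab

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.add_special_tokens({"additional_special_tokens": [RETRIEVAL_TOKEN]})
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.resize_token_embeddings(len(tokenizer))  # make room for the new token

@torch.no_grad()
def generate_and_embed(prompt: str):
    """One forward pass: return next-token logits (generation) and a query
    embedding taken from the retrieval token's hidden state (retrieval)."""
    inputs = tokenizer(prompt + RETRIEVAL_TOKEN, return_tensors="pt")
    out = model(**inputs)                        # single unified forward pass
    last_hidden = out.hidden_states[-1][0]       # (seq_len, hidden_dim)
    rq_id = tokenizer.convert_tokens_to_ids(RETRIEVAL_TOKEN)
    rq_pos = (inputs["input_ids"][0] == rq_id).nonzero()[-1].item()
    query_vec = last_hidden[rq_pos]              # dense query for vector retrieval
    next_token_logits = out.logits[0, -1]        # ordinary generative head
    return next_token_logits, query_vec

# The query vector would then be scored against pre-computed document embeddings
# (e.g. by cosine similarity) while decoding continues from next_token_logits,
# avoiding a separate retriever model and an extra encoding pass over the query.
```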