RAG based Question-Answering for Contextual Response Prediction System

Sriram Veturi, Saurabh Vaichal, Nafis Irtiza Tripto, Reshma Lal Jagadheesh, Nian Yan
arXiv - CS - Information Retrieval · DOI: arxiv-2409.03708 (https://doi.org/arxiv-2409.03708) · Published 2024-09-05 · Journal Article
Citations: 0

Abstract

Large Language Models (LLMs) have shown versatility in various Natural Language Processing (NLP) tasks, including their potential as effective question-answering systems. However, to provide precise and relevant information in response to specific customer queries in industry settings, LLMs require access to a comprehensive knowledge base to avoid hallucinations. Retrieval Augmented Generation (RAG) emerges as a promising technique to address this challenge. Yet, developing an accurate question-answering framework for real-world applications using RAG entails several challenges: 1) data availability issues, 2) evaluating the quality of generated content, and 3) the costly nature of human evaluation. In this paper, we introduce an end-to-end framework that employs LLMs with RAG capabilities for industry use cases. Given a customer query, the proposed system retrieves relevant knowledge documents and leverages them, along with previous chat history, to generate response suggestions for customer service agents in the contact centers of a major retail company. Through comprehensive automated and human evaluations, we show that this solution outperforms the current BERT-based algorithms in accuracy and relevance. Our findings suggest that RAG-based LLMs can be an excellent support to human customer service representatives by lightening their workload.
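The retrieve-then-generate flow the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's actual system: the keyword-overlap retriever, the toy knowledge base, and the prompt layout are all assumptions standing in for the learned retriever and LLM the authors would use. The sketch stops at prompt assembly, where a real deployment would send the prompt to an LLM to generate the agent's suggested response.

```python
import math
import re
from collections import Counter


def tokenize(text: str) -> Counter:
    """Bag-of-words term counts (lowercased, punctuation stripped)."""
    return Counter(re.findall(r"\w+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    num = sum(a[t] * b[t] for t in a.keys() & b.keys())
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return num / den if den else 0.0


def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k knowledge documents most similar to the query."""
    q = tokenize(query)
    return sorted(docs, key=lambda d: cosine(q, tokenize(d)), reverse=True)[:k]


def build_prompt(query: str, history: list[str], docs: list[str]) -> str:
    """Combine retrieved documents and prior chat history into one LLM prompt."""
    knowledge = "\n".join(f"- {d}" for d in docs)
    chat = "\n".join(history)
    return (
        "Use only the knowledge below to draft a reply.\n\n"
        f"Knowledge:\n{knowledge}\n\n"
        f"Chat history:\n{chat}\n\n"
        f"Customer: {query}\nSuggested agent response:"
    )


# Toy knowledge base standing in for the retailer's document store.
knowledge_base = [
    "Returns are accepted within 30 days of purchase with a receipt.",
    "Gift cards cannot be refunded or exchanged for cash.",
    "Store hours are 9am to 9pm, Monday through Saturday.",
]
history = [
    "Customer: Hi, I bought a jacket last week.",
    "Agent: Happy to help with that.",
]
query = "Are returns accepted without a receipt?"
top_docs = retrieve(query, knowledge_base, k=1)
prompt = build_prompt(query, history, top_docs)
print(prompt)
```

Grounding the prompt in retrieved documents is what lets the generator answer from the knowledge base rather than from parametric memory, which is the paper's mechanism for reducing hallucinations.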