Optimizing RAG Techniques for Automotive Industry PDF Chatbots: A Case Study with Locally Deployed Ollama Models

Fei Liu, Zejun Kang, Xing Han
{"title":"Optimizing RAG Techniques for Automotive Industry PDF Chatbots: A Case Study with Locally Deployed Ollama Models","authors":"Fei Liu, Zejun Kang, Xing Han","doi":"arxiv-2408.05933","DOIUrl":null,"url":null,"abstract":"With the growing demand for offline PDF chatbots in automotive industrial\nproduction environments, optimizing the deployment of large language models\n(LLMs) in local, low-performance settings has become increasingly important.\nThis study focuses on enhancing Retrieval-Augmented Generation (RAG) techniques\nfor processing complex automotive industry documents using locally deployed\nOllama models. Based on the Langchain framework, we propose a multi-dimensional\noptimization approach for Ollama's local RAG implementation. Our method\naddresses key challenges in automotive document processing, including\nmulti-column layouts and technical specifications. We introduce improvements in\nPDF processing, retrieval mechanisms, and context compression, tailored to the\nunique characteristics of automotive industry documents. Additionally, we\ndesign custom classes supporting embedding pipelines and an agent supporting\nself-RAG based on LangGraph best practices. To evaluate our approach, we\nconstructed a proprietary dataset comprising typical automotive industry\ndocuments, including technical reports and corporate regulations. We compared\nour optimized RAG model and self-RAG agent against a naive RAG baseline across\nthree datasets: our automotive industry dataset, QReCC, and CoQA. Results\ndemonstrate significant improvements in context precision, context recall,\nanswer relevancy, and faithfulness, with particularly notable performance on\nthe automotive industry dataset. Our optimization scheme provides an effective\nsolution for deploying local RAG systems in the automotive sector, addressing\nthe specific needs of PDF chatbots in industrial production environments. This\nresearch has important implications for advancing information processing and\nintelligent production in the automotive industry.","PeriodicalId":501315,"journal":{"name":"arXiv - CS - Multiagent Systems","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Multiagent Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.05933","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

With the growing demand for offline PDF chatbots in automotive industrial production environments, optimizing the deployment of large language models (LLMs) in local, low-performance settings has become increasingly important. This study focuses on enhancing Retrieval-Augmented Generation (RAG) techniques for processing complex automotive industry documents using locally deployed Ollama models. Based on the Langchain framework, we propose a multi-dimensional optimization approach for Ollama's local RAG implementation. Our method addresses key challenges in automotive document processing, including multi-column layouts and technical specifications. We introduce improvements in PDF processing, retrieval mechanisms, and context compression, tailored to the unique characteristics of automotive industry documents. Additionally, we design custom classes supporting embedding pipelines and an agent supporting self-RAG based on LangGraph best practices. To evaluate our approach, we constructed a proprietary dataset comprising typical automotive industry documents, including technical reports and corporate regulations. We compared our optimized RAG model and self-RAG agent against a naive RAG baseline across three datasets: our automotive industry dataset, QReCC, and CoQA. Results demonstrate significant improvements in context precision, context recall, answer relevancy, and faithfulness, with particularly notable performance on the automotive industry dataset. Our optimization scheme provides an effective solution for deploying local RAG systems in the automotive sector, addressing the specific needs of PDF chatbots in industrial production environments. This research has important implications for advancing information processing and intelligent production in the automotive industry.
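The paper's code is not reproduced here, but the kind of pipeline the abstract describes (local Ollama models orchestrated through LangChain: PDF loading, chunking, an embedding pipeline, retrieval, and context compression) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; the PDF file name, Ollama model names, chunking parameters, and retrieval `k` are assumptions, and LangChain module paths vary between library versions.

```python
# Minimal local RAG sketch (illustrative only, not the paper's code).
# Assumes `ollama serve` is running locally with the named models pulled,
# and that langchain, langchain-community, and faiss-cpu are installed.
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_community.llms import Ollama
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor

# 1. PDF processing: load a technical document and split it into chunks.
docs = PyPDFLoader("automotive_spec.pdf").load()            # hypothetical file name
chunks = RecursiveCharacterTextSplitter(
    chunk_size=800, chunk_overlap=100                        # illustrative values
).split_documents(docs)

# 2. Embedding pipeline: index the chunks with a locally served embedding model.
embeddings = OllamaEmbeddings(model="nomic-embed-text")      # any Ollama embedding model
store = FAISS.from_documents(chunks, embeddings)

# 3. Retrieval + context compression: keep only passages relevant to the query.
llm = Ollama(model="llama3")                                 # any locally pulled chat model
retriever = ContextualCompressionRetriever(
    base_compressor=LLMChainExtractor.from_llm(llm),
    base_retriever=store.as_retriever(search_kwargs={"k": 6}),
)

# 4. Generation: answer strictly from the compressed, retrieved context.
question = "What torque is specified for the M8 flange bolts?"
context = "\n\n".join(d.page_content for d in retriever.invoke(question))
print(llm.invoke(f"Answer using only this context:\n{context}\n\nQuestion: {question}"))
```

The paper's contributions (multi-column PDF handling, custom embedding-pipeline classes, and a LangGraph-based self-RAG agent) would replace or wrap the generic components above; this sketch only shows the baseline structure such optimizations attach to.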