Revolutionizing Database Q&A with Large Language Models: Comprehensive Benchmark and Evaluation
Yihang Zheng, Bo Li, Zhenghao Lin, Yi Luo, Xuanhe Zhou, Chen Lin, Jinsong Su, Guoliang Li, Shifu Li
arXiv:2409.04475 · arXiv - CS - Databases · 2024-09-05
Abstract
The development of Large Language Models (LLMs) has revolutionized Q&A across various industries, including the database domain. However, there is still no comprehensive benchmark for evaluating the capabilities of different LLMs and their modular components in database Q&A. To this end, we introduce DQA, the first comprehensive database Q&A benchmark. DQA features an innovative LLM-based method for automating the generation, cleaning, and rewriting of database Q&A pairs, resulting in over 240,000 pairs in English and Chinese. These pairs cover nearly all aspects of database knowledge, drawing on database manuals, database blogs, and database tools; this coverage also enables assessment of LLMs' Retrieval-Augmented Generation (RAG) and Tool Invocation Generation (TIG) capabilities in the database Q&A task. Furthermore, we propose a comprehensive LLM-based database Q&A testbed on DQA. The testbed is highly modular and scalable, comprising both basic and advanced components such as Question Classification Routing (QCR), RAG, TIG, and Prompt Template Engineering (PTE). In addition, DQA provides a complete evaluation pipeline with diverse metrics and a standardized evaluation process to ensure comprehensiveness, accuracy, and fairness. Using DQA, we comprehensively evaluate database Q&A capabilities under the proposed testbed. The evaluation reveals (i) the strengths and limitations of nine different LLM-based Q&A bots and (ii) the performance impact and potential improvements of various service components (e.g., QCR, RAG, TIG). We hope our benchmark and findings will better guide the future development of LLM-based database Q&A research.
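
To make the testbed architecture concrete, the sketch below shows one way a modular QCR → RAG → TIG → PTE pipeline might be wired together. It is an illustration only: every class name, routing rule, retrieval corpus, and prompt template here is hypothetical, and none of it reflects DQA's actual implementation or API.

# Minimal sketch of a modular LLM Q&A testbed in the spirit of the abstract's
# QCR -> RAG -> TIG -> PTE flow. All names and rules are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    category: str = "general"                          # set by QCR
    context: list[str] = field(default_factory=list)   # set by RAG
    tool_output: str | None = None                     # set by TIG

def classify(q: Question) -> Question:
    """Question Classification Routing (QCR): route via crude keyword match."""
    lowered = q.text.lower()
    if "index" in lowered or "query plan" in lowered:
        q.category = "tuning"
    elif "error" in lowered:
        q.category = "troubleshooting"
    return q

def retrieve(q: Question, corpus: dict[str, list[str]]) -> Question:
    """Retrieval-Augmented Generation (RAG) stub: fetch docs for the category."""
    q.context = corpus.get(q.category, [])
    return q

def invoke_tools(q: Question) -> Question:
    """Tool Invocation Generation (TIG) stub: attach a canned EXPLAIN result."""
    if q.category == "tuning":
        q.tool_output = "EXPLAIN: Seq Scan on orders (cost=0.00..18.10)"
    return q

def build_prompt(q: Question) -> str:
    """Prompt Template Engineering (PTE): assemble the final LLM prompt."""
    parts = [f"Category: {q.category}", f"Question: {q.text}"]
    if q.context:
        parts.append("Context:\n" + "\n".join(q.context))
    if q.tool_output:
        parts.append("Tool output:\n" + q.tool_output)
    return "\n\n".join(parts)

if __name__ == "__main__":
    corpus = {"tuning": ["B-tree indexes speed up range scans on sorted keys."]}
    q = Question("Why is my query plan not using the index?")
    for stage in (classify, lambda x: retrieve(x, corpus), invoke_tools):
        q = stage(q)
    print(build_prompt(q))  # this prompt would then be sent to the LLM under test

Because each stage takes and returns the same Question object, components can be swapped, reordered, or ablated independently, which is the kind of modularity the abstract attributes to the testbed.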