Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Refuse
{"title":"通过基础归因和学会拒绝来衡量和提高 RAG 中法律硕士的可信度","authors":"Maojia Song, Shang Hong Sim, Rishabh Bhardwaj, Hai Leong Chieu, Navonil Majumder, Soujanya Poria","doi":"arxiv-2409.11242","DOIUrl":null,"url":null,"abstract":"LLMs are an integral part of retrieval-augmented generation (RAG) systems.\nWhile many studies focus on evaluating the quality of end-to-end RAG systems,\nthere is a lack of research on understanding the appropriateness of an LLM for\nthe RAG task. Thus, we introduce a new metric, Trust-Score, that provides a\nholistic evaluation of the trustworthiness of LLMs in an RAG framework. We show\nthat various prompting methods, such as in-context learning, fail to adapt LLMs\neffectively to the RAG task. Thus, we propose Trust-Align, a framework to align\nLLMs for higher Trust-Score. LLaMA-3-8b, aligned with our method, significantly\noutperforms open-source LLMs of comparable sizes on ASQA (up 10.7), QAMPARI (up\n29.2) and ELI5 (up 14.9). We release our code at:\nhttps://github.com/declare-lab/trust-align.","PeriodicalId":501030,"journal":{"name":"arXiv - CS - Computation and Language","volume":"50 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Refuse\",\"authors\":\"Maojia Song, Shang Hong Sim, Rishabh Bhardwaj, Hai Leong Chieu, Navonil Majumder, Soujanya Poria\",\"doi\":\"arxiv-2409.11242\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"LLMs are an integral part of retrieval-augmented generation (RAG) systems.\\nWhile many studies focus on evaluating the quality of end-to-end RAG systems,\\nthere is a lack of research on understanding the appropriateness of an LLM for\\nthe RAG task. Thus, we introduce a new metric, Trust-Score, that provides a\\nholistic evaluation of the trustworthiness of LLMs in an RAG framework. We show\\nthat various prompting methods, such as in-context learning, fail to adapt LLMs\\neffectively to the RAG task. Thus, we propose Trust-Align, a framework to align\\nLLMs for higher Trust-Score. LLaMA-3-8b, aligned with our method, significantly\\noutperforms open-source LLMs of comparable sizes on ASQA (up 10.7), QAMPARI (up\\n29.2) and ELI5 (up 14.9). We release our code at:\\nhttps://github.com/declare-lab/trust-align.\",\"PeriodicalId\":501030,\"journal\":{\"name\":\"arXiv - CS - Computation and Language\",\"volume\":\"50 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computation and Language\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.11242\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computation and Language","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11242","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Maojia Song, Shang Hong Sim, Rishabh Bhardwaj, Hai Leong Chieu, Navonil Majumder, Soujanya Poria
LLMs are an integral part of retrieval-augmented generation (RAG) systems. While many studies focus on evaluating the quality of end-to-end RAG systems, there is little research on how well suited an LLM is to the RAG task. We therefore introduce a new metric, Trust-Score, which provides a holistic evaluation of the trustworthiness of LLMs in an RAG framework. We show that various prompting methods, such as in-context learning, fail to adapt LLMs effectively to the RAG task, and we propose Trust-Align, a framework for aligning LLMs toward a higher Trust-Score. LLaMA-3-8b, aligned with our method, significantly outperforms open-source LLMs of comparable size on ASQA (up 10.7), QAMPARI (up 29.2), and ELI5 (up 14.9). We release our code at: https://github.com/declare-lab/trust-align.
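The abstract describes Trust-Score as a holistic trustworthiness metric combining grounded attributions and learning to refuse, but does not spell out its computation. The sketch below is a minimal, hypothetical Python illustration of how such a composite score might aggregate answer correctness, appropriate refusal, and citation grounding; the field names, equal weighting, and citation-F1 formulation are assumptions for illustration only, not the paper's definition (see the released repository for the actual implementation).

```python
# Hypothetical sketch of a Trust-Score-style composite metric.
# NOT the paper's definition; components and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Example:
    answerable: bool           # gold label: can the question be answered from the retrieved docs?
    refused: bool              # did the model refuse to answer?
    correct: bool              # is the (non-refused) answer correct?
    citation_precision: float  # fraction of cited passages that actually support the answer
    citation_recall: float     # fraction of answer claims backed by a citation

def trust_score(examples, w_answer=1/3, w_refusal=1/3, w_grounding=1/3):
    """Weighted average of (i) correctness on answerable questions,
    (ii) appropriate refusal on unanswerable ones, and
    (iii) grounded-attribution quality (citation F1) on produced answers.
    Equal weights are an assumption made for this sketch."""
    answer, refusal, grounding = [], [], []
    for ex in examples:
        if ex.answerable:
            answer.append(0.0 if ex.refused else float(ex.correct))
        else:
            refusal.append(float(ex.refused))
        if not ex.refused:
            p, r = ex.citation_precision, ex.citation_recall
            grounding.append(2 * p * r / (p + r) if (p + r) > 0 else 0.0)
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return w_answer * mean(answer) + w_refusal * mean(refusal) + w_grounding * mean(grounding)
```

Under this sketch, a model that answers fluently but cites unsupported passages, or that refuses answerable questions, is penalized alongside one that simply answers incorrectly, which is the intuition behind evaluating trustworthiness rather than answer quality alone.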