{"title":"Reasoning Graph Enhanced Exemplars Retrieval for In-Context Learning","authors":"Yukang Lin, Bingchen Zhong, Shuoran Jiang, Joanna Siebert, Qingcai Chen","doi":"arxiv-2409.11147","DOIUrl":null,"url":null,"abstract":"Large language models(LLMs) have exhibited remarkable few-shot learning\ncapabilities and unified the paradigm of NLP tasks through the in-context\nlearning(ICL) technique. Despite the success of ICL, the quality of the\nexemplar demonstrations can significantly influence the LLM's performance.\nExisting exemplar selection methods mainly focus on the semantic similarity\nbetween queries and candidate exemplars. On the other hand, the logical\nconnections between reasoning steps can be beneficial to depict the\nproblem-solving process as well. In this paper, we proposes a novel method\nnamed Reasoning Graph-enhanced Exemplar Retrieval(RGER). RGER first quires LLM\nto generate an initial response, then expresses intermediate problem-solving\nsteps to a graph structure. After that, it employs graph kernel to select\nexemplars with semantic and structural similarity. Extensive experiments\ndemonstrate the structural relationship is helpful to the alignment of queries\nand candidate exemplars. The efficacy of RGER on math and logit reasoning tasks\nshowcases its superiority over state-of-the-art retrieval-based approaches. Our\ncode is released at https://github.com/Yukang-Lin/RGER.","PeriodicalId":501030,"journal":{"name":"arXiv - CS - Computation and Language","volume":"30 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computation and Language","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11147","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Large language models (LLMs) have exhibited remarkable few-shot learning capabilities and unified the paradigm of NLP tasks through the in-context learning (ICL) technique. Despite the success of ICL, the quality of the exemplar demonstrations can significantly influence an LLM's performance. Existing exemplar selection methods mainly focus on the semantic similarity between queries and candidate exemplars; however, the logical connections between reasoning steps can also help depict the problem-solving process. In this paper, we propose a novel method named Reasoning Graph-enhanced Exemplar Retrieval (RGER). RGER first queries the LLM to generate an initial response and then expresses the intermediate problem-solving steps as a graph structure. After that, it employs a graph kernel to select exemplars that are both semantically and structurally similar to the query. Extensive experiments demonstrate that this structural relationship helps align queries with candidate exemplars. The efficacy of RGER on math and logical reasoning tasks showcases its superiority over state-of-the-art retrieval-based approaches. Our code is released at https://github.com/Yukang-Lin/RGER.
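
To make the retrieval step concrete, the sketch below illustrates how a graph kernel can be combined with semantic similarity to rank candidate exemplars. This is a minimal, hypothetical sketch, not the paper's implementation: the function names (`wl_features`, `retrieve_exemplars`), the choice of a Weisfeiler-Lehman subtree kernel, the networkx graph representation, and the weighted-sum scoring are all assumptions made for illustration; RGER's actual kernel and fusion strategy may differ (see the released code for details).

```python
# Minimal sketch: exemplar retrieval that mixes semantic similarity (embedding
# cosine) with structural similarity (a simple Weisfeiler-Lehman graph kernel).
# Assumptions: query/candidate embeddings are precomputed vectors, and reasoning
# graphs are networkx.DiGraph objects whose nodes carry a 'label' attribute.
from collections import Counter

import numpy as np
import networkx as nx


def wl_features(graph: nx.DiGraph, iterations: int = 2) -> Counter:
    """Weisfeiler-Lehman subtree features: histogram of iteratively refined node labels."""
    labels = {n: str(d.get("label", "")) for n, d in graph.nodes(data=True)}
    features = Counter(labels.values())
    for _ in range(iterations):
        new_labels = {}
        for n in graph.nodes():
            # Refine each label with the multiset of its neighbors' labels.
            neigh = sorted(labels[m] for m in graph.neighbors(n))
            new_labels[n] = labels[n] + "|" + ",".join(neigh)
        labels = new_labels
        features.update(labels.values())
    return features


def graph_kernel(g1: nx.DiGraph, g2: nx.DiGraph) -> float:
    """Normalized WL kernel value: cosine of the two feature histograms."""
    f1, f2 = wl_features(g1), wl_features(g2)
    dot = sum(f1[k] * f2[k] for k in f1.keys() & f2.keys())
    norm = np.sqrt(sum(v * v for v in f1.values()) * sum(v * v for v in f2.values()))
    return dot / norm if norm > 0 else 0.0


def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))


def retrieve_exemplars(query_emb, query_graph, candidates, k=4, alpha=0.5):
    """Rank candidates by a weighted mix of semantic and structural similarity.

    `candidates` is a list of (embedding, reasoning_graph, demonstration_text) tuples;
    `alpha` trades off semantic vs. structural similarity (an illustrative choice).
    """
    scored = []
    for emb, graph, text in candidates:
        score = alpha * cosine(query_emb, emb) + (1 - alpha) * graph_kernel(query_graph, graph)
        scored.append((score, text))
    scored.sort(key=lambda x: x[0], reverse=True)
    return [text for _, text in scored[:k]]
```

In this sketch, the returned demonstrations would then be prepended to the prompt as in-context exemplars; the key idea mirrored from the abstract is that candidates are scored on both what they say (embedding similarity) and how their reasoning is structured (graph-kernel similarity).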