On the effectiveness of large language models for query expansion in code search

IF 4.1 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING
Xiangzheng Liu, Jianxun Liu, Guosheng Kang, Min Shi, Yi Liu, Yiming Yin
{"title":"大型语言模型在代码搜索查询扩展中的有效性研究","authors":"Xiangzheng Liu ,&nbsp;Jianxun Liu ,&nbsp;Guosheng Kang ,&nbsp;Min Shi ,&nbsp;Yi Liu ,&nbsp;Yiming Yin","doi":"10.1016/j.jss.2025.112582","DOIUrl":null,"url":null,"abstract":"<div><div>Language Models (LMs) are deep learning models trained on massive amounts of text data. One of their main advantages is their superior language understanding capabilities. This study explores the application of Large Language Models (LLMs) understanding capabilities in code search query expansion. To this end, we collected a query corpus from multiple data sources and trained multiple LMs (GPT2, BERT) on this query corpus using a self-supervised task. The trained LM models are then used to expand the input query. We evaluate the performance of these LLMs on the CodeSearchNet dataset using two state-of-the-art code search methods (GraphCodeBERT and CoCoSoda) and compare these LLMs with currently popular expansion methods. Experimental results show that LLM-based query expansion methods outperform existing query reformulation methods in most cases.</div></div>","PeriodicalId":51099,"journal":{"name":"Journal of Systems and Software","volume":"230 ","pages":"Article 112582"},"PeriodicalIF":4.1000,"publicationDate":"2025-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"On the effectiveness of large language models for query expansion in code search\",\"authors\":\"Xiangzheng Liu ,&nbsp;Jianxun Liu ,&nbsp;Guosheng Kang ,&nbsp;Min Shi ,&nbsp;Yi Liu ,&nbsp;Yiming Yin\",\"doi\":\"10.1016/j.jss.2025.112582\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Language Models (LMs) are deep learning models trained on massive amounts of text data. One of their main advantages is their superior language understanding capabilities. This study explores the application of Large Language Models (LLMs) understanding capabilities in code search query expansion. To this end, we collected a query corpus from multiple data sources and trained multiple LMs (GPT2, BERT) on this query corpus using a self-supervised task. The trained LM models are then used to expand the input query. We evaluate the performance of these LLMs on the CodeSearchNet dataset using two state-of-the-art code search methods (GraphCodeBERT and CoCoSoda) and compare these LLMs with currently popular expansion methods. 
Experimental results show that LLM-based query expansion methods outperform existing query reformulation methods in most cases.</div></div>\",\"PeriodicalId\":51099,\"journal\":{\"name\":\"Journal of Systems and Software\",\"volume\":\"230 \",\"pages\":\"Article 112582\"},\"PeriodicalIF\":4.1000,\"publicationDate\":\"2025-08-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Systems and Software\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0164121225002511\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, SOFTWARE ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Systems and Software","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0164121225002511","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0

Abstract

Language Models (LMs) are deep learning models trained on massive amounts of text data. One of their main advantages is their superior language understanding capability. This study explores how the understanding capabilities of Large Language Models (LLMs) can be applied to query expansion in code search. To this end, we collected a query corpus from multiple data sources and trained multiple LMs (GPT2, BERT) on this corpus using a self-supervised task. The trained LMs are then used to expand the input query. We evaluate the performance of these LLMs on the CodeSearchNet dataset using two state-of-the-art code search methods (GraphCodeBERT and CoCoSoda) and compare them with currently popular query expansion methods. Experimental results show that LLM-based query expansion methods outperform existing query reformulation methods in most cases.
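
As a rough illustration of the idea (not the authors' implementation), the sketch below shows how a causal LM such as GPT-2 could append generated terms to a natural-language query before it is handed to a retrieval model such as GraphCodeBERT. The checkpoint name, prompt format, and decoding settings here are assumptions; the paper fine-tunes its LMs on a dedicated query corpus first.

# Minimal sketch of LM-based query expansion (assumed setup, not the paper's exact pipeline).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # placeholder; the study uses LMs trained on a query corpus
model = GPT2LMHeadModel.from_pretrained("gpt2")

def expand_query(query: str, max_new_tokens: int = 8) -> str:
    """Generate a short continuation of the query and append it as expansion terms."""
    inputs = tokenizer(query, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=False,                      # greedy decoding keeps the sketch deterministic
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
    )
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    expansion = tokenizer.decode(new_tokens, skip_special_tokens=True)
    return f"{query} {expansion.strip()}"

print(expand_query("sort a list of dictionaries by value"))

The expanded string would then replace the original query when computing query-code similarity with a code search model such as GraphCodeBERT or CoCoSoda.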
Source journal
Journal of Systems and Software (Engineering & Technology - Computer Science: Theory & Methods)
CiteScore: 8.60
Self-citation rate: 5.70%
Annual publications: 193
Review time: 16 weeks
Journal description: The Journal of Systems and Software publishes papers covering all aspects of software engineering and related hardware-software-systems issues. All articles should include a validation of the idea presented, e.g. through case studies, experiments, or systematic comparisons with other approaches already in practice. Topics of interest include, but are not limited to:
• Methods and tools for, and empirical studies on, software requirements, design, architecture, verification and validation, maintenance and evolution
• Agile, model-driven, service-oriented, open source and global software development
• Approaches for mobile, multiprocessing, real-time, distributed, cloud-based, dependable and virtualized systems
• Human factors and management concerns of software development
• Data management and big data issues of software systems
• Metrics and evaluation, data mining of software development resources
• Business and economic aspects of software development processes
The journal welcomes state-of-the-art surveys and reports of practical experience for all of these topics.