On the effectiveness of large language models for query expansion in code search
Xiangzheng Liu, Jianxun Liu, Guosheng Kang, Min Shi, Yi Liu, Yiming Yin
Journal of Systems and Software, Volume 230, Article 112582 (2025). DOI: 10.1016/j.jss.2025.112582
Language Models (LMs) are deep learning models trained on massive amounts of text data, and one of their main advantages is their superior language understanding capability. This study explores how the understanding capabilities of Large Language Models (LLMs) can be applied to query expansion in code search. To this end, we collected a query corpus from multiple data sources and trained multiple LMs (GPT-2, BERT) on this corpus using a self-supervised task. The trained LMs are then used to expand the input query. We evaluate the performance of these models on the CodeSearchNet dataset using two state-of-the-art code search methods (GraphCodeBERT and CoCoSoda) and compare them with currently popular expansion methods. Experimental results show that LLM-based query expansion outperforms existing query reformulation methods in most cases.
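The abstract does not detail the expansion step itself, but the core idea can be illustrated with a minimal sketch: a causal LM generates continuation tokens for the input query, and those tokens are appended as expansion terms before retrieval. The sketch below uses the off-the-shelf Hugging Face GPT-2 checkpoint as a stand-in for the authors' fine-tuned model; the function name, decoding settings, and token budget are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of LM-based query expansion (not the authors' code).
# Assumes a causal LM fine-tuned on a code-search query corpus; the plain
# "gpt2" checkpoint is used here only as a stand-in.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def expand_query(query: str, max_new_tokens: int = 8) -> str:
    """Append LM-generated continuation tokens to the original query."""
    inputs = tokenizer(query, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=False,  # greedy decoding keeps the sketch deterministic
        pad_token_id=tokenizer.eos_token_id,
    )
    # Keep only the newly generated tokens, not the echoed prompt.
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    expansion = tokenizer.decode(new_tokens, skip_special_tokens=True)
    # The expanded query retains the original terms and adds generated ones.
    return f"{query} {expansion.strip()}"

print(expand_query("convert string to datetime"))
```

Under this reading, the expanded query would then be passed unchanged to a downstream retriever such as GraphCodeBERT or CoCoSoda, so the expansion module stays independent of the code search model.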
About the journal:
The Journal of Systems and Software publishes papers covering all aspects of software engineering and related hardware-software-systems issues. All articles should include a validation of the idea presented, e.g. through case studies, experiments, or systematic comparisons with other approaches already in practice. Topics of interest include, but are not limited to:
•Methods and tools for, and empirical studies on, software requirements, design, architecture, verification and validation, maintenance and evolution
•Agile, model-driven, service-oriented, open source and global software development
•Approaches for mobile, multiprocessing, real-time, distributed, cloud-based, dependable and virtualized systems
•Human factors and management concerns of software development
•Data management and big data issues of software systems
•Metrics and evaluation, data mining of software development resources
•Business and economic aspects of software development processes
The journal welcomes state-of-the-art surveys and reports of practical experience for all of these topics.