An empirical study of best practices for code pre-trained models on software engineering classification tasks

IF 7.5 · CAS Region 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Yu Zhao, Lina Gong, Yaoshen Yu, Zhiqiu Huang, Mingqiang Wei
{"title":"An empirical study of best practices for code pre-trained models on software engineering classification tasks","authors":"Yu Zhao,&nbsp;Lina Gong,&nbsp;Yaoshen Yu,&nbsp;Zhiqiu Huang,&nbsp;Mingqiang Wei","doi":"10.1016/j.eswa.2025.126762","DOIUrl":null,"url":null,"abstract":"<div><div>Tackling code-specific classification challenges like detecting code vulnerabilities and identifying code clones is pivotal in software engineering (SE) practice. The utilization of pre-trained models (PTMs) from the natural language processing (NLP) field shows profound benefits in text classification by generating contextual token embeddings. Similarly, for code-specific classification tasks, there is a growing trend among researchers and practitioners to leverage code-oriented PTMs to create embeddings for code snippets or directly apply the code PTMs to the downstream tasks based on the pre-training and fine-tuning paradigm. Nonetheless, we observe that SE researchers and practitioners often treat the code and text in the same way as NLP strategies when employing these code PTMs. However, despite previous studies in the SE field indicating similarities between programming languages and natural languages, it may not be entirely appropriate for current researchers to directly apply NLP knowledge to assume similar behavior in code. Therefore, in order to derive best practices for researchers and practitioners to use code PTMs for SE classification tasks, we first conduct an empirical analysis on six distinct code PTMs, namely CodeBERT, StarEncoder, CodeT5, PLBART, CodeGPT, and CodeGen, across three architectural frameworks (encoder-only, decoder-only, and encoder–decoder) in the context of four SE classification tasks: code vulnerability detection, code clone identification, just-in-time defect prediction, and function docstring mismatch detection under two scenarios of code embedding and task model. Our findings reveal several insights on the use of code PTMs for code-specific classification tasks endeavors: (1) Emphasizing the vector representation of individual code tokens leads to better code embedding quality and task model performance than those generated through specific tokens techniques in both the code embedding scenario and task model scenario. (2) Larger-sized code PTMs do not necessarily lead to superior code embedding quality in the code embedding scenario and better task performance in the task model scenario. (3) Adopting the ways to handle code and text data same as the pre-training phrase cannot guarantee the acquisition of high-quality code embeddings in the code embedding scenario while in the task model scenario, it can most likely acquire better task performance.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"272 ","pages":"Article 126762"},"PeriodicalIF":7.5000,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Expert Systems with Applications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0957417425003847","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Tackling code-specific classification challenges such as detecting code vulnerabilities and identifying code clones is pivotal in software engineering (SE) practice. Pre-trained models (PTMs) from the natural language processing (NLP) field have shown substantial benefits in text classification by generating contextual token embeddings. Similarly, for code-specific classification tasks, researchers and practitioners increasingly leverage code-oriented PTMs, either to create embeddings for code snippets or to apply the PTMs directly to downstream tasks under the pre-training and fine-tuning paradigm. We observe, however, that SE researchers and practitioners often handle code and text in the same way as NLP strategies do when employing these code PTMs. Although previous SE studies indicate similarities between programming languages and natural languages, it may not be appropriate to assume that NLP findings transfer directly to code. Therefore, to derive best practices for researchers and practitioners using code PTMs on SE classification tasks, we conduct an empirical analysis of six distinct code PTMs (CodeBERT, StarEncoder, CodeT5, PLBART, CodeGPT, and CodeGen) spanning three architectures (encoder-only, decoder-only, and encoder-decoder) on four SE classification tasks: code vulnerability detection, code clone identification, just-in-time defect prediction, and function docstring mismatch detection, under two usage scenarios: code embedding and task model. Our findings reveal several insights on the use of code PTMs for code-specific classification tasks: (1) Aggregating the vector representations of individual code tokens yields better code embedding quality and task model performance than embeddings derived from special-token techniques, in both the code embedding scenario and the task model scenario. (2) Larger code PTMs do not necessarily produce better code embeddings in the code embedding scenario or better task performance in the task model scenario. (3) Handling code and text data the same way as in the pre-training phase does not guarantee high-quality code embeddings in the code embedding scenario, although in the task model scenario it is likely to yield better task performance.
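To make the comparison in finding (1) concrete, below is a minimal sketch (not the authors' exact pipeline) of the two embedding strategies in the code embedding scenario: pooling over the vectors of individual code tokens versus taking a single special-token vector as the snippet embedding. It assumes the HuggingFace transformers library and the publicly available microsoft/codebert-base checkpoint; the model choice, mean-pooling details, and example snippet are illustrative assumptions rather than the paper's exact configuration.

```python
# Illustrative sketch only: contrasts token-level pooling with a special-token
# ([CLS]-style) embedding for a code snippet, using CodeBERT as one of the
# encoder-only PTMs studied in the paper. Details are assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")
model.eval()

code = "def add(a, b):\n    return a + b"
inputs = tokenizer(code, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    outputs = model(**inputs)

hidden = outputs.last_hidden_state                      # (1, seq_len, hidden_size)
mask = inputs["attention_mask"].unsqueeze(-1).float()   # (1, seq_len, 1)

# Strategy A: aggregate individual code-token vectors (mean pooling over
# non-padding positions) -- the style of embedding the study reports as
# producing better embedding quality.
token_pooled_embedding = (hidden * mask).sum(dim=1) / mask.sum(dim=1)

# Strategy B: use the special token at position 0 ([CLS]-style) as the
# whole-snippet embedding.
special_token_embedding = hidden[:, 0, :]

print(token_pooled_embedding.shape, special_token_embedding.shape)
# Both are (1, 768) for this checkpoint; only how they are derived differs.
```

In the task model scenario, the same checkpoint would instead be fine-tuned end to end with a classification head on the downstream task rather than used as a frozen embedding extractor.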
Source journal
Expert Systems with Applications (Engineering & Technology - Engineering: Electrical & Electronic)
CiteScore: 13.80
Self-citation rate: 10.60%
Annual article count: 2045
Review time: 8.7 months
Journal description: Expert Systems With Applications is an international journal dedicated to the exchange of information on expert and intelligent systems used globally in industry, government, and universities. The journal emphasizes original papers covering the design, development, testing, implementation, and management of these systems, offering practical guidelines. It spans various sectors such as finance, engineering, marketing, law, project management, information management, medicine, and more. The journal also welcomes papers on multi-agent systems, knowledge management, neural networks, knowledge discovery, data mining, and other related areas, excluding applications to military/defense systems.