{"title":"A review of backdoor attacks and defenses in code large language models: Implications for security measures","authors":"Yubin Qu , Song Huang , Peng Nie","doi":"10.1016/j.infsof.2025.107707","DOIUrl":null,"url":null,"abstract":"<div><h3>Context:</h3><div>Large Language Models (LLMS) have revolutionized software engineering by bridging human language understanding and complex problem solving. However, resource constraints often lead users to rely on open-source models or third-party platforms for training and prompt engineering, introducing significant security vulnerabilities.</div></div><div><h3>Objective:</h3><div>This study provides a comprehensive analysis of backdoor attacks targeting LLMS in software engineering, with a particular focus on fine-tuning methods. Our work addresses a critical gap in existing literature by proposing a novel three-category framework for backdoor attacks: full-parameter fine-tuning, parameter-efficient fine-tuning, and no-tuning attacks.</div></div><div><h3>Methods:</h3><div>We systematically reviewed existing studies and analyzed attack success rates across different methods. Full-parameter fine-tuning generally achieves high success rates but requires significant computational resources. Parameter-efficient fine-tuning offers comparable success rates with lower resource demands, while no-tuning attacks exhibit variable success rates depending on prompt design, posing unique challenges due to their minimal resource requirements.</div></div><div><h3>Results:</h3><div>Our findings underscore the evolving landscape of backdoor attacks, highlighting the shift towards more resource-efficient and stealthy methods. These trends emphasize the need for advanced detection mechanisms and robust defense strategies.</div></div><div><h3>Conclusion:</h3><div>By focusing on code-specific threats, this study provides unique insights into securing LLMS in software engineering. Our work lays the foundation for future research on developing sophisticated defense mechanisms and understanding stealthy backdoor attacks.</div></div>","PeriodicalId":54983,"journal":{"name":"Information and Software Technology","volume":"182 ","pages":"Article 107707"},"PeriodicalIF":3.8000,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information and Software Technology","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0950584925000461","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
引用次数: 0
Abstract
Context:
Large Language Models (LLMS) have revolutionized software engineering by bridging human language understanding and complex problem solving. However, resource constraints often lead users to rely on open-source models or third-party platforms for training and prompt engineering, introducing significant security vulnerabilities.
Objective:
This study provides a comprehensive analysis of backdoor attacks targeting LLMS in software engineering, with a particular focus on fine-tuning methods. Our work addresses a critical gap in existing literature by proposing a novel three-category framework for backdoor attacks: full-parameter fine-tuning, parameter-efficient fine-tuning, and no-tuning attacks.
Methods:
We systematically reviewed existing studies and analyzed attack success rates across different methods. Full-parameter fine-tuning generally achieves high success rates but requires significant computational resources. Parameter-efficient fine-tuning offers comparable success rates with lower resource demands, while no-tuning attacks exhibit variable success rates depending on prompt design, posing unique challenges due to their minimal resource requirements.
Results:
Our findings underscore the evolving landscape of backdoor attacks, highlighting the shift towards more resource-efficient and stealthy methods. These trends emphasize the need for advanced detection mechanisms and robust defense strategies.
Conclusion:
By focusing on code-specific threats, this study provides unique insights into securing LLMS in software engineering. Our work lays the foundation for future research on developing sophisticated defense mechanisms and understanding stealthy backdoor attacks.
期刊介绍:
Information and Software Technology is the international archival journal focusing on research and experience that contributes to the improvement of software development practices. The journal''s scope includes methods and techniques to better engineer software and manage its development. Articles submitted for review should have a clear component of software engineering or address ways to improve the engineering and management of software development. Areas covered by the journal include:
• Software management, quality and metrics,
• Software processes,
• Software architecture, modelling, specification, design and programming
• Functional and non-functional software requirements
• Software testing and verification & validation
• Empirical studies of all aspects of engineering and managing software development
Short Communications is a new section dedicated to short papers addressing new ideas, controversial opinions, "Negative" results and much more. Read the Guide for authors for more information.
The journal encourages and welcomes submissions of systematic literature studies (reviews and maps) within the scope of the journal. Information and Software Technology is the premiere outlet for systematic literature studies in software engineering.