{"title":"二进制代码嵌入的差异对比学习","authors":"Yun Zhang , Ge Cheng","doi":"10.1016/j.infsof.2025.107822","DOIUrl":null,"url":null,"abstract":"<div><h3>Context:</h3><div>Binary code embedding plays a crucial role in binary similarity detection and software security analysis. However, conventional methods often suffer from scalability issues and depend heavily on large amounts of labeled data, limiting their practical deployment in real-world scenarios.</div></div><div><h3>Objectives:</h3><div>This research introduces DiffBCE, a novel binary code embedding method based on differential contrastive learning. The primary goal is to overcome the limitations of existing approaches by reducing the reliance on labeled data while enhancing the robustness and semantic sensitivity of binary code representations.</div></div><div><h3>Methods:</h3><div>DiffBCE integrates two complementary data augmentation strategies – insensitive transformations (implemented via dropout) and sensitive transformations (using instruction replacement with a Masked Language Model) – within a contrastive learning framework. In addition, a conditional difference prediction module is introduced to capture subtle semantic changes by identifying differences between original and transformed binary code. The model is jointly trained with a combined loss function balancing contrastive loss and conditional difference prediction loss. Experimental validation is performed on multiple binary datasets across various scenarios, including cross-version analysis, cross-optimization-level evaluation, and code obfuscation difference analysis.</div></div><div><h3>Results:</h3><div>Experimental evaluations demonstrate that DiffBCE significantly outperforms state of-the-art methods (e.g., Asm2Vec, DeepBinDiff, PalmTree). Across three similarity detection scenarios, the method achieves improvements in F1 scores by approximately 3.8%, 5.6%, and 11.1%, respectively, underscoring its robustness and effectiveness in handling complex binary code differences.</div></div><div><h3>Conclusions:</h3><div>DiffBCE offers a scalable and efficient solution for binary code embedding by effectively capturing rich semantic features without requiring extensive labeled data. Its superior performance in various testing scenarios suggests promising applications in vulnerability detection, code reuse analysis, reverse engineering, and automated patch generation, paving the way for enhanced software security assessments.</div></div>","PeriodicalId":54983,"journal":{"name":"Information and Software Technology","volume":"187 ","pages":"Article 107822"},"PeriodicalIF":3.8000,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DiffBCE: Difference contrastive learning for binary code embeddings\",\"authors\":\"Yun Zhang , Ge Cheng\",\"doi\":\"10.1016/j.infsof.2025.107822\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><h3>Context:</h3><div>Binary code embedding plays a crucial role in binary similarity detection and software security analysis. However, conventional methods often suffer from scalability issues and depend heavily on large amounts of labeled data, limiting their practical deployment in real-world scenarios.</div></div><div><h3>Objectives:</h3><div>This research introduces DiffBCE, a novel binary code embedding method based on differential contrastive learning. 
The primary goal is to overcome the limitations of existing approaches by reducing the reliance on labeled data while enhancing the robustness and semantic sensitivity of binary code representations.</div></div><div><h3>Methods:</h3><div>DiffBCE integrates two complementary data augmentation strategies – insensitive transformations (implemented via dropout) and sensitive transformations (using instruction replacement with a Masked Language Model) – within a contrastive learning framework. In addition, a conditional difference prediction module is introduced to capture subtle semantic changes by identifying differences between original and transformed binary code. The model is jointly trained with a combined loss function balancing contrastive loss and conditional difference prediction loss. Experimental validation is performed on multiple binary datasets across various scenarios, including cross-version analysis, cross-optimization-level evaluation, and code obfuscation difference analysis.</div></div><div><h3>Results:</h3><div>Experimental evaluations demonstrate that DiffBCE significantly outperforms state of-the-art methods (e.g., Asm2Vec, DeepBinDiff, PalmTree). Across three similarity detection scenarios, the method achieves improvements in F1 scores by approximately 3.8%, 5.6%, and 11.1%, respectively, underscoring its robustness and effectiveness in handling complex binary code differences.</div></div><div><h3>Conclusions:</h3><div>DiffBCE offers a scalable and efficient solution for binary code embedding by effectively capturing rich semantic features without requiring extensive labeled data. Its superior performance in various testing scenarios suggests promising applications in vulnerability detection, code reuse analysis, reverse engineering, and automated patch generation, paving the way for enhanced software security assessments.</div></div>\",\"PeriodicalId\":54983,\"journal\":{\"name\":\"Information and Software Technology\",\"volume\":\"187 \",\"pages\":\"Article 107822\"},\"PeriodicalIF\":3.8000,\"publicationDate\":\"2025-06-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information and Software Technology\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0950584925001612\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information and Software Technology","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0950584925001612","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
DiffBCE: Difference contrastive learning for binary code embeddings
Context:
Binary code embedding plays a crucial role in binary similarity detection and software security analysis. However, conventional methods often suffer from scalability issues and depend heavily on large amounts of labeled data, limiting their practical deployment in real-world scenarios.
Objectives:
This research introduces DiffBCE, a novel binary code embedding method based on difference contrastive learning. The primary goal is to overcome the limitations of existing approaches by reducing the reliance on labeled data while enhancing the robustness and semantic sensitivity of binary code representations.
Methods:
DiffBCE integrates two complementary data augmentation strategies – insensitive transformations (implemented via dropout) and sensitive transformations (using instruction replacement with a Masked Language Model) – within a contrastive learning framework. In addition, a conditional difference prediction module is introduced to capture subtle semantic changes by identifying differences between original and transformed binary code. The model is jointly trained with a combined loss function balancing contrastive loss and conditional difference prediction loss. Experimental validation is performed on multiple binary datasets across various scenarios, including cross-version analysis, cross-optimization-level evaluation, and code obfuscation difference analysis.
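The joint objective described above can be pictured with a minimal sketch. Assuming a PyTorch-style encoder over tokenized instruction sequences, the snippet below combines an in-batch contrastive loss over dropout views (the insensitive transformation) with a binary difference-prediction loss over MLM instruction-replacement views (the sensitive transformation). The class name DiffBCESketch, the prediction-head architecture, and the hyper-parameters tau and lambda_diff are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a combined contrastive + conditional difference objective.
# Encoder, head sizes, and hyper-parameters are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DiffBCESketch(nn.Module):
    def __init__(self, encoder: nn.Module, dim: int = 256,
                 tau: float = 0.07, lambda_diff: float = 0.5):
        super().__init__()
        self.encoder = encoder          # maps a batch of binary functions to [B, dim]
        self.tau = tau                  # temperature of the contrastive term
        self.lambda_diff = lambda_diff  # weight of the difference-prediction term
        # Head that predicts, from the pair (original, transformed), whether a
        # semantics-sensitive change (instruction replacement) was applied.
        self.diff_head = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )

    def contrastive_loss(self, z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
        # In-batch InfoNCE: each sample's dropout view is its positive,
        # all other samples in the batch serve as negatives.
        z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
        logits = z1 @ z2.t() / self.tau                      # [B, B] similarities
        labels = torch.arange(z1.size(0), device=z1.device)  # diagonal = positives
        return F.cross_entropy(logits, labels)

    def difference_loss(self, z_orig, z_trans, changed) -> torch.Tensor:
        # Conditional difference prediction: classify whether the transformed
        # view differs semantically from the original (binary label `changed`).
        logits = self.diff_head(torch.cat([z_orig, z_trans], dim=-1)).squeeze(-1)
        return F.binary_cross_entropy_with_logits(logits, changed.float())

    def forward(self, x, x_mlm_view, changed):
        z = self.encoder(x)          # original binary code embedding
        z_insens = self.encoder(x)   # second pass: dropout alone yields a new view
        z_sens = self.encoder(x_mlm_view)  # MLM instruction-replacement view
        loss_con = self.contrastive_loss(z, z_insens)
        loss_diff = self.difference_loss(z, z_sens, changed)
        return loss_con + self.lambda_diff * loss_diff
```

In training mode the two encoder passes over the same input differ only through dropout, giving the "insensitive" positive pair, while the MLM-replaced view carries a label indicating whether the replacement altered the code's semantics; in this sketch that label is the hypothetical `changed` tensor.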
Results:
Experimental evaluations demonstrate that DiffBCE significantly outperforms state-of-the-art methods (e.g., Asm2Vec, DeepBinDiff, PalmTree). Across the three similarity detection scenarios, the method improves F1 scores by approximately 3.8%, 5.6%, and 11.1%, respectively, underscoring its robustness and effectiveness in handling complex binary code differences.
Conclusions:
DiffBCE offers a scalable and efficient solution for binary code embedding by effectively capturing rich semantic features without requiring extensive labeled data. Its superior performance in various testing scenarios suggests promising applications in vulnerability detection, code reuse analysis, reverse engineering, and automated patch generation, paving the way for enhanced software security assessments.
Journal Introduction:
Information and Software Technology is the international archival journal focusing on research and experience that contributes to the improvement of software development practices. The journal's scope includes methods and techniques to better engineer software and manage its development. Articles submitted for review should have a clear component of software engineering or address ways to improve the engineering and management of software development. Areas covered by the journal include:
• Software management, quality and metrics
• Software processes
• Software architecture, modelling, specification, design and programming
• Functional and non-functional software requirements
• Software testing and verification & validation
• Empirical studies of all aspects of engineering and managing software development
Short Communications is a new section dedicated to short papers addressing new ideas, controversial opinions, "Negative" results and much more. Read the Guide for authors for more information.
The journal encourages and welcomes submissions of systematic literature studies (reviews and maps) within the scope of the journal. Information and Software Technology is the premier outlet for systematic literature studies in software engineering.