{"title":"Enhancing smart contract security: Leveraging pre‐trained language models for advanced vulnerability detection","authors":"Fei He, Fei Li, Peili Liang","doi":"10.1049/blc2.12072","DOIUrl":null,"url":null,"abstract":"The burgeoning interest in decentralized applications (Dapps), spurred by advancements in blockchain technology, underscores the critical role of smart contracts. However, many Dapp users, often without deep knowledge of smart contracts, face financial risks due to hidden vulnerabilities. Traditional methods for detecting these vulnerabilities, including manual inspections and automated static analysis, are plagued by issues such as high rates of false positives and overlooked security flaws. To combat this, the article introduces an innovative approach using the bidirectional encoder representations from transformers (BERT)‐ATT‐BiLSTM model for identifying potential weaknesses in smart contracts. This method leverages the BERT pre‐trained model to discern semantic features from contract opcodes, which are then refined using a Bidirectional Long Short‐Term Memory Network (BiLSTM) and augmented by an attention mechanism that prioritizes critical features. The goal is to improve the model's generalization ability and enhance detection accuracy. Experiments on various publicly available smart contract datasets confirm the model's superior performance, outperforming previous methods in key metrics like accuracy, F1‐score, and recall. This research not only offers a powerful tool to bolster smart contract security, mitigating financial risks for average users, but also serves as a valuable reference for advancements in natural language processing and deep learning.","PeriodicalId":100650,"journal":{"name":"IET Blockchain","volume":"44 16","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IET Blockchain","FirstCategoryId":"0","ListUrlMain":"https://doi.org/10.1049/blc2.12072","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The burgeoning interest in decentralized applications (Dapps), spurred by advances in blockchain technology, underscores the critical role of smart contracts. However, many Dapp users, often lacking deep knowledge of smart contracts, face financial risk from hidden vulnerabilities. Traditional detection methods, including manual inspection and automated static analysis, suffer from high false-positive rates and overlooked security flaws. To address this, the article introduces a Bidirectional Encoder Representations from Transformers (BERT)-ATT-BiLSTM model for identifying potential weaknesses in smart contracts. The method uses the pre-trained BERT model to extract semantic features from contract opcodes, refines them with a Bidirectional Long Short-Term Memory network (BiLSTM), and applies an attention mechanism that weights the most critical features, improving the model's generalization ability and detection accuracy. Experiments on several publicly available smart contract datasets confirm the model's superior performance, outperforming previous methods on key metrics such as accuracy, F1-score, and recall. This research offers a powerful tool for strengthening smart contract security, mitigating financial risks for ordinary users, and serves as a useful reference for work combining natural language processing and deep learning.
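To make the BERT-ATT-BiLSTM pipeline concrete, the sketch below shows one plausible way to wire the three stages together in PyTorch with Hugging Face's `transformers`. It is an assumption-based illustration, not the authors' published implementation: the checkpoint name (`bert-base-uncased`), the hidden size (128), the additive attention scoring, and the binary output head are all guesses filled in for the example.

```python
# Hypothetical BERT-ATT-BiLSTM sketch for opcode-sequence classification.
# Assumes tokenized opcode sequences as BERT input_ids; all hyperparameters
# are illustrative, not taken from the paper.
import torch
import torch.nn as nn
from transformers import BertModel

class BertAttBiLSTM(nn.Module):
    def __init__(self, bert_name="bert-base-uncased",
                 lstm_hidden=128, num_classes=2):
        super().__init__()
        # Stage 1: pre-trained BERT encodes opcode tokens into
        # contextual embeddings.
        self.bert = BertModel.from_pretrained(bert_name)
        # Stage 2: BiLSTM refines the per-token features in both directions.
        self.bilstm = nn.LSTM(
            input_size=self.bert.config.hidden_size,
            hidden_size=lstm_hidden,
            batch_first=True,
            bidirectional=True,
        )
        # Stage 3: additive attention scores each timestep so that the
        # most informative opcodes dominate the pooled representation.
        self.attn = nn.Linear(2 * lstm_hidden, 1)
        self.classifier = nn.Linear(2 * lstm_hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state                          # (batch, seq, bert_dim)
        out, _ = self.bilstm(hidden)                 # (batch, seq, 2*lstm_hidden)
        scores = self.attn(out).squeeze(-1)          # (batch, seq)
        # Mask padding positions before the softmax over timesteps.
        scores = scores.masked_fill(attention_mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1)
        context = (weights.unsqueeze(-1) * out).sum(dim=1)  # weighted pooling
        return self.classifier(context)              # vulnerability logits
```

In this reading, attention replaces naive last-state or mean pooling: the softmax weights let training emphasize the opcode positions most indicative of a vulnerability, which matches the abstract's claim that the mechanism "prioritizes critical features".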