SmartGift: Learning to Generate Practical Inputs for Testing Smart Contracts
Teng Zhou, Kui Liu, Li Li, Zhe Liu, Jacques Klein, Tegawendé F. Bissyandé
2021 IEEE International Conference on Software Maintenance and Evolution (ICSME), September 2021. DOI: 10.1109/ICSME52107.2021.00009
With the boom of Initial Coin Offerings (ICOs) in the financial markets, smart contracts have rapidly gained popularity among consumers. Smart contract vulnerabilities, however, have made them a prime target for malicious attacks that lead to huge losses. The research community is thus applying various software engineering techniques to smart contracts to address these vulnerabilities. In general, mutation-based and fuzzing-based testing approaches for detecting vulnerabilities in smart contracts have been widely studied and have achieved promising performance on benchmark datasets. Generating test inputs with mutation approaches, however, essentially relies on test cases already available in a smart contract program; in our preliminary study, we observed that 56.4% of 218 identified open-source smart contract project repositories do not provide any test case for validation. Fuzzing, on the other hand, produces random input values that lack practical usefulness. Our work addresses this problem: we propose SmartGift, an approach that generates practical inputs for testing smart contracts by learning from the transaction records of real-world smart contracts. Leveraging a collected set of over 60 thousand transaction records, SmartGift generates relevant test inputs for ~77% of smart contract functions, largely outperforming the traditional fuzzing approach (successful for only 60% of functions). We further demonstrate the practicality of these test inputs by using them to replace the inputs of ContractFuzzer, a state-of-the-art smart contract vulnerability detector: with inputs from SmartGift, ContractFuzzer detects 131 of the 154 vulnerabilities in its benchmark.
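The abstract only outlines the approach at a high level. The sketch below illustrates the underlying idea in Python under stated assumptions: transaction records decoded into (function signature, argument values) pairs are indexed and replayed as test inputs for the same or similarly typed functions, falling back to random fuzzing only when no recorded call matches. All names (TransactionCorpus, suggest_inputs) and the example values are hypothetical illustrations, not the actual SmartGift implementation.

```python
# Minimal sketch of the idea described in the abstract: reuse argument values
# observed in real-world transaction records as test inputs, instead of fuzzing
# random values. Names and values are hypothetical, not SmartGift's own code.
import random
from collections import defaultdict


class TransactionCorpus:
    """Index of recorded transactions: function signature -> observed argument lists."""

    def __init__(self):
        self._by_signature = defaultdict(list)

    def add_record(self, signature, args):
        # signature: e.g. "transfer(address,uint256)"; args: decoded call arguments.
        self._by_signature[signature].append(args)

    def suggest_inputs(self, signature, param_types, k=3):
        """Return up to k practical argument lists for the target function.

        Exact signature matches are preferred; otherwise fall back to any
        recorded call whose parameter-type list matches (a crude notion of
        a "similar" function, used here only for illustration).
        """
        candidates = list(self._by_signature.get(signature, []))
        if not candidates:
            target_types = signature[signature.index("("):]
            for sig, arg_lists in self._by_signature.items():
                if sig.endswith(target_types):
                    candidates.extend(arg_lists)
        return candidates[:k] if candidates else [self._random_fuzz(param_types)]

    @staticmethod
    def _random_fuzz(param_types):
        # Fallback comparable to a plain fuzzer: random values per Solidity type.
        fuzzers = {
            "uint256": lambda: random.randrange(2**256),
            "address": lambda: "0x" + "".join(random.choices("0123456789abcdef", k=40)),
            "bool": lambda: random.choice([True, False]),
        }
        return [fuzzers.get(t, lambda: 0)() for t in param_types]


if __name__ == "__main__":
    corpus = TransactionCorpus()
    # A few recorded real-world calls (hypothetical values).
    corpus.add_record("transfer(address,uint256)",
                      ["0x32be343b94f860124dc4fee278fdcbd38c102d88", 1_000_000])
    corpus.add_record("approve(address,uint256)",
                      ["0x53d284357ec70ce289d6d64134dfac8e511c8a3d", 500])

    # Function seen in the corpus: practical, recorded values are replayed.
    print(corpus.suggest_inputs("transfer(address,uint256)", ["address", "uint256"]))
    # Unseen function with matching parameter types: values from similar calls are reused.
    print(corpus.suggest_inputs("mint(address,uint256)", ["address", "uint256"]))
    # No match at all: falls back to random fuzzing, like a traditional fuzzer.
    print(corpus.suggest_inputs("burn(uint128)", ["uint128"]))
```

In this toy form, the "learning" reduces to indexing and replaying observed calls; the paper's contribution lies in collecting over 60 thousand real transaction records and matching them to target functions so that roughly 77% of functions receive relevant inputs, versus 60% for plain fuzzing.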