Learning Contract Invariants Using Reinforcement Learning

Junrui Liu, Yanju Chen, Bryan Tan, Işıl Dillig, Yu Feng
{"title":"Learning Contract Invariants Using Reinforcement Learning","authors":"Junrui Liu, Yanju Chen, Bryan Tan, Işıl Dillig, Yu Feng","doi":"10.1145/3551349.3556962","DOIUrl":null,"url":null,"abstract":"Due to the popularity of smart contracts in the modern financial ecosystem, there has been growing interest in formally verifying their correctness and security properties. Most existing techniques in this space focus on common vulnerabilities like arithmetic overflows and perform verification by leveraging contract invariants (i.e., logical formulas hold at transaction boundaries). In this paper, we propose a new technique, based on deep reinforcement learning, for automatically learning contract invariants that are useful for proving arithmetic safety. Our method incorporates an off-line training phase in which the verifier uses its own verification attempts to learn a policy for contract invariant generation. This learned (neural) policy is then used at verification time to predict likely invariants that are also useful for proving arithmetic safety. We implemented this idea in a tool called Cider and incorporated it into an existing verifier (based on refinement type checking) for proving arithmetic safety. Our evaluation shows that Cider improves both the quality of the inferred invariants as well as inference time, leading to faster verification and hardened contracts with fewer run-time assertions.","PeriodicalId":197939,"journal":{"name":"Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3551349.3556962","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Due to the popularity of smart contracts in the modern financial ecosystem, there has been growing interest in formally verifying their correctness and security properties. Most existing techniques in this space focus on common vulnerabilities like arithmetic overflows and perform verification by leveraging contract invariants (i.e., logical formulas that hold at transaction boundaries). In this paper, we propose a new technique, based on deep reinforcement learning, for automatically learning contract invariants that are useful for proving arithmetic safety. Our method incorporates an off-line training phase in which the verifier uses its own verification attempts to learn a policy for contract invariant generation. This learned (neural) policy is then used at verification time to predict likely invariants that are also useful for proving arithmetic safety. We implemented this idea in a tool called Cider and incorporated it into an existing verifier (based on refinement type checking) for proving arithmetic safety. Our evaluation shows that Cider improves both the quality of the inferred invariants and the inference time, leading to faster verification and hardened contracts with fewer run-time assertions.
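To make the abstract's training loop concrete, the following is a minimal, illustrative sketch of the general idea: a policy proposes candidate contract invariants, the verifier's success or failure on each attempt serves as the reward, and the policy is updated off-line so that useful invariants become more likely. Everything here is hypothetical (the TEMPLATES list, the stub_verifier function, and the simple softmax policy are stand-ins); Cider's actual invariant grammar, neural policy, and refinement-type-based verifier are far richer than this toy REINFORCE-style loop.

```python
import math
import random

# Hypothetical invariant templates over a token contract's state variables.
# A real system would enumerate candidates from a grammar, not a fixed list.
TEMPLATES = [
    "sum(balances) == totalSupply",
    "sum(balances) <= totalSupply",
    "totalSupply <= MAX_UINT256",
    "forall a: balances[a] <= totalSupply",
]

def stub_verifier(candidate: str) -> float:
    """Stand-in for a real verifier: reward 1.0 when the candidate is
    (pretend-)inductive and strong enough to discharge the contract's
    overflow checks, 0.0 otherwise."""
    useful = {
        "sum(balances) == totalSupply",
        "forall a: balances[a] <= totalSupply",
    }
    return 1.0 if candidate in useful else 0.0

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def train(episodes: int = 500, lr: float = 0.5, seed: int = 0):
    """Offline training: sample a candidate invariant, ask the verifier,
    and nudge the policy toward candidates that verified."""
    random.seed(seed)
    theta = [0.0] * len(TEMPLATES)   # one logit per template
    baseline = 0.0                   # running-average reward baseline
    for _ in range(episodes):
        probs = softmax(theta)
        i = random.choices(range(len(TEMPLATES)), weights=probs)[0]
        reward = stub_verifier(TEMPLATES[i])   # verification attempt as feedback
        baseline += 0.1 * (reward - baseline)
        for j in range(len(theta)):
            # REINFORCE-style update: d/d theta_j of log pi(i) = 1[j == i] - p_j
            grad = (1.0 if j == i else 0.0) - probs[j]
            theta[j] += lr * (reward - baseline) * grad
    return theta

if __name__ == "__main__":
    learned = train()
    for prob, tmpl in sorted(zip(softmax(learned), TEMPLATES), reverse=True):
        print(f"{prob:.3f}  {tmpl}")
```

Running the sketch prints the candidate invariants ranked by the learned policy, with the "useful" ones receiving most of the probability mass, which mirrors (in miniature) how a learned policy can steer invariant generation toward formulas the verifier can actually use.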