Art: Abstraction Refinement-Guided Training for Provably Correct Neural Networks

Xuankang Lin, He Zhu, Roopsha Samanta, Suresh Jagannathan
{"title":"Art: Abstraction Refinement-Guided Training for Provably Correct Neural Networks","authors":"Xuankang Lin, He Zhu, R. Samanta, S. Jagannathan","doi":"10.34727/2020/isbn.978-3-85448-042-6_22","DOIUrl":null,"url":null,"abstract":"Artificial Neural Networks (ANNs) have demonstrated remarkable utility in various challenging machine learning applications. While formally verified properties of their behaviors are highly desired, they have proven notoriously difficult to derive and enforce. Existing approaches typically formulate this problem as a post facto analysis process. In this paper, we present a novel learning framework that ensures such formal guarantees are enforced by construction. Our technique enables training provably correct networks with respect to a broad class of safety properties, a capability that goes well-beyond existing approaches, without compromising much accuracy. Our key insight is that we can integrate an optimization-based abstraction refinement loop into the learning process and operate over dynamically constructed partitions of the input space that considers accuracy and safety objectives synergistically. The refinement procedure iteratively splits the input space from which training data is drawn, guided by the efficacy with which such partitions enable safety verification. We have implemented our approach in a tool (ART) and applied it to enforce general safety properties on unmanned aviator collision avoidance system ACAS Xu dataset and the Collision Detection dataset. Importantly, we empirically demonstrate that realizing safety does not come at the price of much accuracy. Our methodology demonstrates that an abstraction refinement methodology provides a meaningful pathway for building both accurate and correct machine learning networks.","PeriodicalId":105705,"journal":{"name":"2020 Formal Methods in Computer Aided Design (FMCAD)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"22","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 Formal Methods in Computer Aided Design (FMCAD)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.34727/2020/isbn.978-3-85448-042-6_22","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 22

Abstract

Artificial Neural Networks (ANNs) have demonstrated remarkable utility in various challenging machine learning applications. While formally verified properties of their behaviors are highly desirable, they have proven notoriously difficult to derive and enforce. Existing approaches typically formulate this problem as a post facto analysis process. In this paper, we present a novel learning framework that ensures such formal guarantees are enforced by construction. Our technique enables training provably correct networks with respect to a broad class of safety properties, a capability that goes well beyond existing approaches, without compromising much accuracy. Our key insight is that we can integrate an optimization-based abstraction refinement loop into the learning process and operate over dynamically constructed partitions of the input space that consider accuracy and safety objectives synergistically. The refinement procedure iteratively splits the input space from which training data is drawn, guided by the efficacy with which such partitions enable safety verification. We have implemented our approach in a tool (ART) and applied it to enforce general safety properties on the ACAS Xu unmanned aircraft collision avoidance dataset and the Collision Detection dataset. Importantly, we empirically demonstrate that realizing safety does not come at the price of a significant loss in accuracy. Our results demonstrate that abstraction refinement provides a meaningful pathway for building machine learning networks that are both accurate and correct.
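To make the training loop described above concrete, the sketch below illustrates one way such a refinement-guided procedure could look. It is not the authors' implementation: the network architecture, the toy safety property (all outputs stay below 1 on the unit square), the use of interval bound propagation as the abstraction, and the bisection heuristic are all illustrative assumptions. The loop jointly minimizes an accuracy loss on sampled data and a safety loss on abstract input boxes, and periodically splits the box that is hardest to certify.

```python
# Minimal sketch (not the ART tool) of abstraction refinement-guided training:
# input boxes are abstracted with interval bound propagation (IBP); boxes whose
# abstract output bound violates the safety property contribute a safety loss,
# and the worst-violating box is bisected along its widest dimension.
import torch
import torch.nn as nn

torch.manual_seed(0)

class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(2, 16), nn.Linear(16, 1)])

    def forward(self, x):
        x = torch.relu(self.layers[0](x))
        return self.layers[1](x)

    def interval_bounds(self, lo, hi):
        """Propagate an input box [lo, hi] through the network with IBP."""
        for i, layer in enumerate(self.layers):
            w, b = layer.weight, layer.bias
            mid, rad = (lo + hi) / 2, (hi - lo) / 2
            mid = mid @ w.t() + b
            rad = rad @ w.abs().t()
            lo, hi = mid - rad, mid + rad
            if i < len(self.layers) - 1:          # ReLU on hidden layers only
                lo, hi = torch.relu(lo), torch.relu(hi)
        return lo, hi

net = MLP()
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

# Assumed safety property: for all x in [0,1]^2, f(x) <= 1.
boxes = [(torch.zeros(2), torch.ones(2))]

# Toy regression data drawn from the same input region (assumed).
xs = torch.rand(256, 2)
ys = 0.5 * xs.sum(dim=1, keepdim=True)

for step in range(200):
    opt.zero_grad()
    acc_loss = nn.functional.mse_loss(net(xs), ys)          # accuracy objective
    violations = []
    for lo, hi in boxes:
        _, out_hi = net.interval_bounds(lo.unsqueeze(0), hi.unsqueeze(0))
        violations.append(torch.relu(out_hi - 1.0).sum())    # distance to safety
    safe_loss = torch.stack(violations).sum()
    (acc_loss + safe_loss).backward()
    opt.step()

    # Refinement: if the worst box still cannot be certified, bisect it
    # along its widest input dimension so the abstraction becomes tighter.
    if step % 50 == 49:
        worst = max(range(len(boxes)), key=lambda i: violations[i].item())
        if violations[worst].item() > 0:
            lo, hi = boxes.pop(worst)
            d = torch.argmax(hi - lo).item()
            mid = (lo[d] + hi[d]) / 2
            lo2, hi1 = lo.clone(), hi.clone()
            hi1[d], lo2[d] = mid, mid
            boxes += [(lo, hi1), (lo2, hi)]

print("boxes:", len(boxes), "residual violation:", safe_loss.item())
```

The approach described in the paper generalizes this idea to richer safety specifications and to refinement strategies guided by feedback from the safety verifier, rather than the simple bisection heuristic used here.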