Verifying Conformance of Neural Network Models: Invited Paper

M. Narasimhamurthy, Taisa Kushner, Souradeep Dutta, S. Sankaranarayanan
{"title":"Verifying Conformance of Neural Network Models: Invited Paper","authors":"M. Narasimhamurthy, Taisa Kushner, Souradeep Dutta, S. Sankaranarayanan","doi":"10.1109/iccad45719.2019.8942151","DOIUrl":null,"url":null,"abstract":"Neural networks are increasingly used as data-driven models for a wide variety of physical systems such as ground vehicles, airplanes, human physiology and automobile engines. These models are in-turn used for designing and verifying autonomous systems. The advantages of using neural networks include the ability to capture characteristics of particular systems using the available data. This is particularly advantageous for medical systems, wherein the data collected from individuals can be used to design devices that are well-adapted to a particular individual's unique physiological characteristics. At the same time, neural network models remain opaque: their structure makes them hard to understand and interpret by human developers. One key challenge lies in checking that neural network models of processes are “conformant” to the well established scientific (physical, chemical and biological) laws that underlie these models. In this paper, we will show how conformance often fails in models that are otherwise accurate and trained using the best practices in machine learning, with potentially serious consequences. We motivate the need for learning and verifying key conformance properties in data-driven models of the human insulin-glucose system and data-driven automobile models. We survey verification approaches for neural networks that can hold the key to learning and verifying conformance.","PeriodicalId":363364,"journal":{"name":"2019 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/iccad45719.2019.8942151","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6

Abstract

Neural networks are increasingly used as data-driven models for a wide variety of physical systems such as ground vehicles, airplanes, human physiology and automobile engines. These models are in turn used for designing and verifying autonomous systems. The advantages of using neural networks include the ability to capture characteristics of particular systems using the available data. This is particularly advantageous for medical systems, wherein the data collected from individuals can be used to design devices that are well-adapted to a particular individual's unique physiological characteristics. At the same time, neural network models remain opaque: their structure makes them hard for human developers to understand and interpret. One key challenge lies in checking that neural network models of processes are “conformant” to the well-established scientific (physical, chemical and biological) laws that underlie these models. In this paper, we will show how conformance often fails in models that are otherwise accurate and trained using best practices in machine learning, with potentially serious consequences. We motivate the need for learning and verifying key conformance properties in data-driven models of the human insulin-glucose system and in data-driven automobile models. We survey verification approaches for neural networks that can hold the key to learning and verifying conformance.
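To make the notion of a conformance property concrete, the sketch below illustrates one kind of property the abstract alludes to for insulin-glucose models: a prediction of future blood glucose should not increase when the insulin dose increases. This is a minimal, hypothetical illustration rather than the paper's method: the toy NumPy network, the function names (predict, find_conformance_violations), the input ranges, and the sampling-based check are all assumptions made here for illustration.

# Hypothetical sketch (not the paper's method): sampling-based check of a
# monotonicity-style conformance property on a toy neural-network model.
# Property checked: increasing the insulin dose must not increase the
# predicted future glucose.

import numpy as np

rng = np.random.default_rng(seed=0)

# Toy two-layer ReLU network with random weights, standing in for a trained
# model that maps [insulin_dose, current_glucose] -> predicted future glucose.
W1 = rng.normal(size=(16, 2))
b1 = rng.normal(size=16)
W2 = rng.normal(size=(1, 16))
b2 = rng.normal(size=1)

def predict(x: np.ndarray) -> float:
    """Predicted future glucose for input x = [insulin_dose, current_glucose]."""
    h = np.maximum(W1 @ x + b1, 0.0)   # ReLU hidden layer
    return (W2 @ h + b2)[0]

def find_conformance_violations(n_samples: int = 10_000, delta: float = 0.5):
    """Randomly search for inputs where adding `delta` units of insulin
    raises the predicted glucose, i.e., the conformance property fails."""
    violations = []
    for _ in range(n_samples):
        insulin = rng.uniform(0.0, 10.0)      # illustrative dose range
        glucose = rng.uniform(70.0, 300.0)    # illustrative glucose range (mg/dL)
        y_base = predict(np.array([insulin, glucose]))
        y_more = predict(np.array([insulin + delta, glucose]))
        if y_more > y_base:                   # more insulin predicted higher glucose
            violations.append((insulin, glucose, y_base, y_more))
    return violations

if __name__ == "__main__":
    cex = find_conformance_violations()
    print(f"{len(cex)} candidate violations found out of 10000 random samples")

A random search of this kind can only exhibit counterexamples; the verification approaches surveyed in the paper aim instead to prove or refute such properties over entire input regions of the learned model.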