Verifying Conformance of Neural Network Models: Invited Paper
M. Narasimhamurthy, Taisa Kushner, Souradeep Dutta, S. Sankaranarayanan
2019 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), November 2019. DOI: 10.1109/iccad45719.2019.8942151
Abstract: Neural networks are increasingly used as data-driven models for a wide variety of physical systems, such as ground vehicles, airplanes, human physiology, and automobile engines. These models are in turn used to design and verify autonomous systems. A key advantage of neural networks is their ability to capture the characteristics of a particular system from the available data. This is especially valuable for medical systems, where data collected from an individual can be used to design devices that are well adapted to that individual's unique physiological characteristics. At the same time, neural network models remain opaque: their structure makes them hard for human developers to understand and interpret. One key challenge lies in checking that neural network models of physical processes are "conformant" to the well-established scientific (physical, chemical, and biological) laws that govern those processes. In this paper, we show how conformance often fails, with potentially serious consequences, in models that are otherwise accurate and trained using machine-learning best practices. We motivate the need to learn and verify key conformance properties using data-driven models of the human insulin-glucose system and data-driven automobile models, and we survey verification approaches for neural networks that can hold the key to learning and verifying conformance.
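To make the notion of a conformance property concrete, the sketch below illustrates one such check on a hypothetical insulin-glucose model: predicted blood glucose should not increase as the insulin input grows, all else held equal. The network, its random weights, and the sampling-based check are illustrative assumptions, not the paper's method; sampling can only falsify a property, whereas the verification approaches surveyed in the paper aim to certify it over entire input regions.

```python
import numpy as np

# Hypothetical two-layer ReLU network mapping (current glucose, insulin dose)
# to predicted future glucose. Random weights stand in for a trained model.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 2)), rng.normal(size=16)
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)

def predict(glucose, insulin):
    """Predicted future blood glucose (mg/dL) -- illustrative only."""
    x = np.array([glucose, insulin])
    h = np.maximum(W1 @ x + b1, 0.0)   # ReLU hidden layer
    return float(W2 @ h + b2)

def check_insulin_monotonicity(n_samples=10_000, delta=0.5, tol=1e-6):
    """Sampling-based falsification of the conformance property:
    increasing the insulin dose should not increase predicted glucose.
    Returns the (glucose, insulin) counterexamples found."""
    counterexamples = []
    for _ in range(n_samples):
        g = rng.uniform(70.0, 300.0)   # plausible glucose range, mg/dL
        u = rng.uniform(0.0, 10.0)     # insulin dose, units
        if predict(g, u + delta) > predict(g, u) + tol:
            counterexamples.append((g, u))
    return counterexamples

cex = check_insulin_monotonicity()
print(f"{len(cex)} monotonicity violations out of 10000 samples")
```

A model that is accurate on held-out data can still fail such a check on large parts of its input space, which is exactly the gap between statistical accuracy and conformance that the paper highlights.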