Guy Amir, Osher Maayan, Tom Zelazny, Guy Katz, Michael Schapira
{"title":"Verifying the Generalization of Deep Learning to Out-of-Distribution Domains","authors":"Guy Amir, Osher Maayan, Tom Zelazny, Guy Katz, Michael Schapira","doi":"10.1007/s10817-024-09704-7","DOIUrl":null,"url":null,"abstract":"<p>Deep neural networks (DNNs) play a crucial role in the field of machine learning, demonstrating state-of-the-art performance across various application domains. However, despite their success, DNN-based models may occasionally exhibit challenges with <i>generalization</i>, i.e., may fail to handle inputs that were not encountered during training. This limitation is a significant challenge when it comes to deploying deep learning for safety-critical tasks, as well as in real-world settings characterized by substantial variability. We introduce a novel approach for harnessing DNN verification technology to identify DNN-driven decision rules that exhibit robust generalization to previously unencountered input domains. Our method assesses generalization within an input domain by measuring the level of agreement between <i>independently trained</i> deep neural networks for inputs in this domain. We also efficiently realize our approach by using off-the-shelf DNN verification engines, and extensively evaluate it on both supervised and unsupervised DNN benchmarks, including a deep reinforcement learning (DRL) system for Internet congestion control—demonstrating the applicability of our approach for real-world settings. Moreover, our research introduces a fresh objective for formal verification, offering the prospect of mitigating the challenges linked to deploying DNN-driven systems in real-world scenarios.</p>","PeriodicalId":15082,"journal":{"name":"Journal of Automated Reasoning","volume":"1 1","pages":""},"PeriodicalIF":0.9000,"publicationDate":"2024-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Automated Reasoning","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s10817-024-09704-7","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0
Abstract
Deep neural networks (DNNs) play a crucial role in the field of machine learning, demonstrating state-of-the-art performance across various application domains. However, despite their success, DNN-based models may occasionally exhibit challenges with generalization, i.e., may fail to handle inputs that were not encountered during training. This limitation is a significant challenge when it comes to deploying deep learning for safety-critical tasks, as well as in real-world settings characterized by substantial variability. We introduce a novel approach for harnessing DNN verification technology to identify DNN-driven decision rules that exhibit robust generalization to previously unencountered input domains. Our method assesses generalization within an input domain by measuring the level of agreement between independently trained deep neural networks for inputs in this domain. We also efficiently realize our approach by using off-the-shelf DNN verification engines, and extensively evaluate it on both supervised and unsupervised DNN benchmarks, including a deep reinforcement learning (DRL) system for Internet congestion control—demonstrating the applicability of our approach for real-world settings. Moreover, our research introduces a fresh objective for formal verification, offering the prospect of mitigating the challenges linked to deploying DNN-driven systems in real-world scenarios.
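The central idea, i.e., scoring an input domain by how strongly independently trained networks agree on inputs from that domain, can be illustrated with a short sketch. The snippet below is only a sampling-based approximation of that score and not the paper's actual procedure, which encodes the disagreement check as queries to off-the-shelf DNN verification engines; the function name, the toy linear models, and the box-shaped domain are all hypothetical, chosen purely for illustration.

```python
import itertools
import numpy as np

def pairwise_agreement(models, sample_domain, n_samples=10_000, seed=0):
    """Estimate how often independently trained models agree on inputs
    drawn from a given input domain.

    models:        list of callables, each mapping a batch of inputs
                   of shape (n, d) to a vector of discrete decisions (n,)
    sample_domain: callable (rng, n) -> (n, d) array of inputs from the domain
    Returns the mean pairwise agreement rate in [0, 1]; higher values
    suggest the decision rule behaves more consistently in the domain.
    """
    rng = np.random.default_rng(seed)
    x = sample_domain(rng, n_samples)
    decisions = [m(x) for m in models]          # one decision vector per model
    rates = [np.mean(a == b)                    # fraction of inputs where a pair agrees
             for a, b in itertools.combinations(decisions, 2)]
    return float(np.mean(rates))

# Hypothetical usage: three "independently trained" linear classifiers and an
# out-of-distribution box domain [2, 3]^4 (both made up for illustration).
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    weights = [rng.normal(size=(4, 3)) for _ in range(3)]
    models = [lambda x, w=w: np.argmax(x @ w, axis=1) for w in weights]
    domain = lambda rng, n: rng.uniform(2.0, 3.0, size=(n, 4))
    print(f"mean pairwise agreement: {pairwise_agreement(models, domain):.3f}")
```

In the verification-based setting described in the abstract, the sampled estimate above would be replaced by a formal analysis of disagreement between the networks over the entire input domain, rather than over a finite sample.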
About the Journal
The Journal of Automated Reasoning is an interdisciplinary journal that maintains a balance between theory, implementation and application. The spectrum of material published ranges from the presentation of a new inference rule with proof of its logical properties to a detailed account of a computer program designed to solve various problems in industry. The main fields covered are automated theorem proving, logic programming, expert systems, program synthesis and validation, artificial intelligence, computational logic, robotics, and various industrial applications. The papers share the common feature of focusing on several aspects of automated reasoning, a field whose objective is the design and implementation of a computer program that serves as an assistant in solving problems and in answering questions that require reasoning.
The Journal of Automated Reasoning provides a forum and a means for exchanging information for those interested purely in theory, those interested primarily in implementation, and those interested in specific research and industrial applications.