{"title":"Research on Data Sufficiency and Model Validity toward AI for Critical Systems","authors":"Shintaro Hashimoto, Haruhiko Higuchi, N. Ishihama","doi":"10.1109/SACI55618.2022.9919529","DOIUrl":null,"url":null,"abstract":"Deep learning is used in many fields, but it has not been used in critical systems because testing that it works correctly is difficult. There are no comprehensive methods to ensure deep learning quality. Our research introduced effective test methods for deep learning in terms of the data sufficiency and model validity based on user requirements. To determine the data sufficiency, we proposed a method using the new concept of extrapolation and interpolation to classify out-of-domain uncertainty, domain-shift uncertainty, in-domain uncertainty, and data uncertainty. This method allows us to indicate what training data are missing. This paper proposes methods to visualize a basis for the model's decisions to determine the model's validity. We explain how to visualize the region of interest in an image and explain the basis for deep learning decisions from only human-understandable explanations using the combination of intermediate output and a decision tree.","PeriodicalId":105691,"journal":{"name":"2022 IEEE 16th International Symposium on Applied Computational Intelligence and Informatics (SACI)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 16th International Symposium on Applied Computational Intelligence and Informatics (SACI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SACI55618.2022.9919529","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Deep learning is used in many fields, but it has rarely been adopted in critical systems because verifying that a model works correctly is difficult, and no comprehensive methods exist to assure deep learning quality. Our research introduces effective test methods for deep learning in terms of data sufficiency and model validity, based on user requirements. To assess data sufficiency, we propose a method that uses the new concepts of extrapolation and interpolation to classify uncertainty into out-of-domain uncertainty, domain-shift uncertainty, in-domain uncertainty, and data uncertainty; this classification indicates which training data are missing. To assess model validity, this paper proposes methods for visualizing the basis of the model's decisions. We explain how to visualize the region of interest in an input image, and how to derive purely human-understandable explanations of deep learning decisions by combining intermediate-layer outputs with a decision tree.
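The abstract does not give implementation details for the intermediate-output-plus-decision-tree technique. As a rough illustration of the general idea (not the authors' code), the sketch below fits a scikit-learn decision tree as a surrogate on a CNN's intermediate activations, so that the network's predictions can be traced to short, human-readable threshold rules. The Keras model `cnn`, the layer name `"penultimate"`, and the helper name `explain_with_tree` are all illustrative assumptions.

```python
# Minimal sketch: approximate a CNN's decisions with a decision tree
# fitted on its intermediate-layer outputs, yielding readable rules.
# Assumes a trained Keras model `cnn` and training inputs `x_train`.
import numpy as np
import tensorflow as tf
from sklearn.tree import DecisionTreeClassifier, export_text


def explain_with_tree(cnn, layer_name, x_train, max_depth=4):
    # Sub-model that exposes the chosen intermediate layer.
    feature_model = tf.keras.Model(
        inputs=cnn.input,
        outputs=cnn.get_layer(layer_name).output,
    )
    # Intermediate activations, flattened to one feature vector per sample.
    feats = feature_model.predict(x_train)
    feats = feats.reshape(len(x_train), -1)

    # The tree mimics the CNN's own predictions (a surrogate model),
    # so its rules explain the network rather than the ground truth.
    cnn_preds = np.argmax(cnn.predict(x_train), axis=1)
    tree = DecisionTreeClassifier(max_depth=max_depth)
    tree.fit(feats, cnn_preds)

    # A shallow tree keeps the rule list short enough for a human to read.
    print(export_text(tree))
    return tree


# Usage (hypothetical): explain_with_tree(cnn, "penultimate", x_train)
```

Capping `max_depth` trades fidelity to the network for explanation brevity; a deeper tree matches the CNN more closely but produces rules too long to qualify as human-understandable.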