Research on Data Sufficiency and Model Validity toward AI for Critical Systems

Shintaro Hashimoto, Haruhiko Higuchi, N. Ishihama
{"title":"关键系统人工智能数据充分性与模型有效性研究","authors":"Shintaro Hashimoto, Haruhiko Higuchi, N. Ishihama","doi":"10.1109/SACI55618.2022.9919529","DOIUrl":null,"url":null,"abstract":"Deep learning is used in many fields, but it has not been used in critical systems because testing that it works correctly is difficult. There are no comprehensive methods to ensure deep learning quality. Our research introduced effective test methods for deep learning in terms of the data sufficiency and model validity based on user requirements. To determine the data sufficiency, we proposed a method using the new concept of extrapolation and interpolation to classify out-of-domain uncertainty, domain-shift uncertainty, in-domain uncertainty, and data uncertainty. This method allows us to indicate what training data are missing. This paper proposes methods to visualize a basis for the model's decisions to determine the model's validity. We explain how to visualize the region of interest in an image and explain the basis for deep learning decisions from only human-understandable explanations using the combination of intermediate output and a decision tree.","PeriodicalId":105691,"journal":{"name":"2022 IEEE 16th International Symposium on Applied Computational Intelligence and Informatics (SACI)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Research on Data Sufficiency and Model Validity toward AI for Critical Systems\",\"authors\":\"Shintaro Hashimoto, Haruhiko Higuchi, N. Ishihama\",\"doi\":\"10.1109/SACI55618.2022.9919529\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep learning is used in many fields, but it has not been used in critical systems because testing that it works correctly is difficult. There are no comprehensive methods to ensure deep learning quality. 
Our research introduced effective test methods for deep learning in terms of the data sufficiency and model validity based on user requirements. To determine the data sufficiency, we proposed a method using the new concept of extrapolation and interpolation to classify out-of-domain uncertainty, domain-shift uncertainty, in-domain uncertainty, and data uncertainty. This method allows us to indicate what training data are missing. This paper proposes methods to visualize a basis for the model's decisions to determine the model's validity. We explain how to visualize the region of interest in an image and explain the basis for deep learning decisions from only human-understandable explanations using the combination of intermediate output and a decision tree.\",\"PeriodicalId\":105691,\"journal\":{\"name\":\"2022 IEEE 16th International Symposium on Applied Computational Intelligence and Informatics (SACI)\",\"volume\":\"27 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-05-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE 16th International Symposium on Applied Computational Intelligence and Informatics (SACI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SACI55618.2022.9919529\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 16th International Symposium on Applied Computational Intelligence and Informatics 
(SACI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SACI55618.2022.9919529","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Deep learning is used in many fields, but it has not been adopted in critical systems because verifying that it works correctly is difficult, and no comprehensive methods exist to ensure deep learning quality. Our research introduces effective test methods for deep learning in terms of data sufficiency and model validity based on user requirements. To determine data sufficiency, we propose a method that uses the new concepts of extrapolation and interpolation to classify out-of-domain uncertainty, domain-shift uncertainty, in-domain uncertainty, and data uncertainty; this classification indicates which training data are missing. To determine model validity, this paper proposes methods that visualize the basis for the model's decisions: we explain how to visualize the region of interest in an image, and how to explain deep learning decisions with only human-understandable explanations by combining intermediate outputs with a decision tree.
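The abstract's last idea, combining a network's intermediate outputs with a decision tree to obtain a human-understandable explanation, can be illustrated with a small surrogate-model sketch. This is not the paper's implementation: the feature array stands in for intermediate-layer activations, the labels stand in for the deep model's predictions, and all names and shapes are illustrative assumptions.

```python
# Minimal sketch: fit a shallow decision tree on a deep model's
# intermediate-layer outputs so its decisions can be traced to
# human-readable rules. Data and names are illustrative stand-ins.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Stand-in for intermediate outputs (e.g., penultimate-layer activations)
# on a probe set, and for the deep model's predicted labels on that set.
intermediate = rng.normal(size=(200, 8))  # 200 samples, 8 activation units
model_preds = (intermediate[:, 0] + intermediate[:, 3] > 0).astype(int)

# A depth-limited surrogate tree keeps the explanation human-readable.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(intermediate, model_preds)

# Fidelity: how often the tree reproduces the deep model's decisions.
fidelity = surrogate.score(intermediate, model_preds)
print(f"surrogate fidelity: {fidelity:.2f}")

# The learned rules, printed as nested if/else thresholds on named units.
print(export_text(surrogate, feature_names=[f"unit_{i}" for i in range(8)]))
```

A low fidelity score would mean the tree's rules do not faithfully describe the deep model, so in practice one would report fidelity alongside any rule-based explanation.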