Example-Based Explainable AI and its Application for Remote Sensing Image Classification

Shin-nosuke Ishikawa, Masato Todo, M. Taki, Y. Uchiyama, Kazunari Matsunaga, Pei-Ru Lin, Taiki Ogihara, Masao Yasui
{"title":"基于实例的可解释人工智能及其在遥感图像分类中的应用","authors":"Shin-nosuke Ishikawa, Masato Todo, M. Taki, Y. Uchiyama, Kazunari Matsunaga, Pei-Ru Lin, Taiki Ogihara, Masao Yasui","doi":"10.48550/arXiv.2302.01526","DOIUrl":null,"url":null,"abstract":"We present a method of explainable artificial intelligence (XAI),\"What I Know (WIK)\", to provide additional information to verify the reliability of a deep learning model by showing an example of an instance in a training dataset that is similar to the input data to be inferred and demonstrate it in a remote sensing image classification task. One of the expected roles of XAI methods is verifying whether inferences of a trained machine learning model are valid for an application, and it is an important factor that what datasets are used for training the model as well as the model architecture. Our data-centric approach can help determine whether the training dataset is sufficient for each inference by checking the selected example data. If the selected example looks similar to the input data, we can confirm that the model was not trained on a dataset with a feature distribution far from the feature of the input data. With this method, the criteria for selecting an example are not merely data similarity with the input data but also data similarity in the context of the model task. Using a remote sensing image dataset from the Sentinel-2 satellite, the concept was successfully demonstrated with reasonably selected examples. This method can be applied to various machine-learning tasks, including classification and regression.","PeriodicalId":13664,"journal":{"name":"Int. J. Appl. Earth Obs. Geoinformation","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Example-Based Explainable AI and its Application for Remote Sensing Image Classification\",\"authors\":\"Shin-nosuke Ishikawa, Masato Todo, M. Taki, Y. Uchiyama, Kazunari Matsunaga, Pei-Ru Lin, Taiki Ogihara, Masao Yasui\",\"doi\":\"10.48550/arXiv.2302.01526\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We present a method of explainable artificial intelligence (XAI),\\\"What I Know (WIK)\\\", to provide additional information to verify the reliability of a deep learning model by showing an example of an instance in a training dataset that is similar to the input data to be inferred and demonstrate it in a remote sensing image classification task. One of the expected roles of XAI methods is verifying whether inferences of a trained machine learning model are valid for an application, and it is an important factor that what datasets are used for training the model as well as the model architecture. Our data-centric approach can help determine whether the training dataset is sufficient for each inference by checking the selected example data. If the selected example looks similar to the input data, we can confirm that the model was not trained on a dataset with a feature distribution far from the feature of the input data. With this method, the criteria for selecting an example are not merely data similarity with the input data but also data similarity in the context of the model task. Using a remote sensing image dataset from the Sentinel-2 satellite, the concept was successfully demonstrated with reasonably selected examples. 
This method can be applied to various machine-learning tasks, including classification and regression.\",\"PeriodicalId\":13664,\"journal\":{\"name\":\"Int. J. Appl. Earth Obs. Geoinformation\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-02-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Int. J. Appl. Earth Obs. Geoinformation\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.48550/arXiv.2302.01526\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Int. J. Appl. Earth Obs. Geoinformation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2302.01526","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

We present an explainable artificial intelligence (XAI) method, "What I Know (WIK)", which provides additional information for verifying the reliability of a deep learning model by showing an instance from the training dataset that is similar to the input data being inferred, and we demonstrate it on a remote sensing image classification task. One expected role of XAI methods is to verify whether the inferences of a trained machine learning model are valid for an application, and the datasets used to train the model are as important a factor as the model architecture. Our data-centric approach can help determine whether the training dataset is sufficient for each inference by checking the selected example. If the selected example looks similar to the input data, we can confirm that the model was not trained on a dataset whose feature distribution is far from that of the input data. With this method, the criterion for selecting an example is not merely data similarity to the input data but also similarity in the context of the model's task. Using a remote sensing image dataset from the Sentinel-2 satellite, the concept was successfully demonstrated with reasonably selected examples. The method can be applied to various machine learning tasks, including classification and regression.
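The key idea in the abstract is that the example is selected by similarity "in the context of the model's task", i.e. in the representation learned by the trained model, rather than by raw pixel similarity. The abstract does not prescribe an implementation, so the following is only a minimal sketch of that idea, assuming a PyTorch image classifier with a hypothetical `features` sub-module exposing penultimate-layer activations; it retrieves the training instance whose embedding is closest (by cosine similarity) to that of the query image.

```python
# Minimal sketch (not the authors' reference implementation) of example-based
# explanation in the spirit of WIK: for a query image, return the training
# instance that is most similar in the model's own feature space, so that
# "similarity" reflects the model task rather than raw pixel values.
import numpy as np
import torch
import torch.nn as nn


def embed(model: nn.Module, x: torch.Tensor) -> np.ndarray:
    """Return L2-normalized penultimate-layer features for a batch of images.

    Assumes the classifier exposes a hypothetical `features` sub-module;
    adapt this to the actual architecture (e.g., hook the layer just before
    the final fully connected classifier).
    """
    model.eval()
    with torch.no_grad():
        z = model.features(x).flatten(start_dim=1).cpu().numpy()
    return z / (np.linalg.norm(z, axis=1, keepdims=True) + 1e-12)


def select_example(model: nn.Module,
                   train_images: torch.Tensor,
                   query_image: torch.Tensor) -> tuple[int, float]:
    """Return the index of the training instance closest to the query
    in feature space, along with its cosine similarity."""
    train_z = embed(model, train_images)              # (N, D) training embeddings
    query_z = embed(model, query_image.unsqueeze(0))  # (1, D) query embedding
    sims = (train_z @ query_z.T).ravel()              # cosine similarities, shape (N,)
    best = int(np.argmax(sims))
    return best, float(sims[best])
```

In use, a low best similarity, or a retrieved example that looks nothing like the query scene, signals that the training distribution may not cover the input, which is the reliability check the abstract describes.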