How can geostatistics help us understand deep learning? An exploratory study in SAR-based aircraft detection

Lifu Chen, Zhenhuan Fang, Jin Xing, Xingmin Cai

DOI: 10.1016/j.jag.2024.104185
International Journal of Applied Earth Observation and Geoinformation (ITC Journal), Volume 134, Article 104185
Published: 2024-10-14
Citations: 0
Abstract
Deep Neural Networks (DNNs) have garnered significant attention across various research domains due to their impressive performance, particularly Convolutional Neural Networks (CNNs), known for their exceptional accuracy in image processing tasks. However, the opaque nature of DNNs has raised concerns about their trustworthiness, as users often cannot understand how a model arrives at its predictions or decisions. This lack of transparency is particularly problematic in critical fields such as healthcare, finance, and law, where the stakes are high. Consequently, there has been a surge in the development of explanation methods for DNNs. Typically, the effectiveness of these methods is assessed subjectively, through human inspection of the heatmaps or attribution maps generated by eXplainable AI (XAI) methods. In this paper, a novel GeoStatistics Explainable Artificial Intelligence (GSEAI) framework is proposed, which integrates spatial pattern analysis from geostatistics with XAI algorithms to assess and compare XAI understandability. Global and local Moran's I indices, commonly used to assess the spatial autocorrelation of geographic data, help characterize the spatial distribution patterns of the attribution maps produced by an XAI method by measuring their degree of aggregation or dispersion. Interpreting attribution maps through the Moran's I scattergram and LISA clustering maps provides an objective, quantitative global assessment of the spatial distribution of feature attribution and yields a more understandable local interpretation. In this paper, we conduct experiments on aircraft detection in SAR images based on the widely used YOLOv5 network, and evaluate four mainstream XAI methods quantitatively and qualitatively. By using GSEAI to analyze the explanations of a given DNN, we gain deeper insight into the network's behavior, enhancing the trustworthiness of DNN applications. To the best of our knowledge, this is the first time XAI has been integrated with geostatistical algorithms in the SAR domain, which expands the analytical toolkit of XAI and promotes its development within SAR image analytics.
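To make the geostatistical measures concrete, the sketch below computes global and local Moran's I over a 2D attribution map. This is a minimal illustration of the standard formulas, not the paper's GSEAI implementation: it assumes a binary rook (4-neighbour) contiguity weight matrix on the pixel grid, whereas the authors' choice of weights and neighbourhood is not specified in the abstract.

```python
import numpy as np

def global_morans_i(attr: np.ndarray) -> float:
    """Global Moran's I for a 2D map with binary rook
    (4-neighbour) contiguity weights:
        I = (N / W) * sum_ij w_ij z_i z_j / sum_i z_i^2,
    where z_i are deviations from the mean and W = sum_ij w_ij.
    """
    z = attr.astype(float) - attr.mean()
    n = z.size
    denom = (z ** 2).sum()
    if denom == 0.0:
        return 0.0  # constant map: autocorrelation is undefined
    # Each adjacent pair contributes twice, since w_ij = w_ji = 1.
    vert = (z[:-1, :] * z[1:, :]).sum()     # vertical neighbours
    horiz = (z[:, :-1] * z[:, 1:]).sum()    # horizontal neighbours
    num = 2.0 * (vert + horiz)
    w = 2.0 * (z[:-1, :].size + z[:, :-1].size)  # total weight W
    return (n / w) * (num / denom)

def local_morans_i(attr: np.ndarray) -> np.ndarray:
    """Local Moran's I (the statistic behind LISA cluster maps):
        I_i = (z_i / m2) * sum_j w_ij z_j,  m2 = sum_i z_i^2 / N.
    Returns one value per pixel; positive values mark spatial
    clusters (high-high or low-low), negative values mark outliers.
    """
    z = attr.astype(float) - attr.mean()
    m2 = (z ** 2).mean()
    # Zero-padding gives each pixel the sum of its rook neighbours.
    pad = np.pad(z, 1)
    lag = (pad[:-2, 1:-1] + pad[2:, 1:-1]
           + pad[1:-1, :-2] + pad[1:-1, 2:])
    return z * lag / m2
```

A checkerboard pattern (maximally dispersed attribution) drives global Moran's I toward -1, while a map whose high attributions form one contiguous block (aggregated attribution) gives a positive value; the local index flags which pixels belong to such clusters.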
About the journal:
The International Journal of Applied Earth Observation and Geoinformation publishes original papers that utilize earth observation data for natural resource and environmental inventory and management. These data primarily originate from remote sensing platforms, including satellites and aircraft, supplemented by surface and subsurface measurements. Addressing natural resources such as forests, agricultural land, soils, and water, as well as environmental concerns like biodiversity, land degradation, and hazards, the journal explores conceptual and data-driven approaches. It covers geoinformation themes like capturing, databasing, visualization, interpretation, data quality, and spatial uncertainty.