AI explainability and acceptance: a case study for underwater mine hunting

IF 1.5, Q3 (Computer Science, Information Systems)
G. J. Richard (Thales DMS; IMT Atlantique, France), J. Habonneau (France), D. Gueriot (France)
{"title":"AI explainibility and acceptance; a case study for underwater mine hunting","authors":"Gj. Richard, Thales Dms, Imt Atlantique, France J. Habonneau, France D. Gueriot, France","doi":"10.1145/3635113","DOIUrl":null,"url":null,"abstract":"In critical operational context such as Mine Warfare, Automatic Target Recognition (ATR) algorithms are still hardly accepted. The complexity of their decision-making hampers understanding of predictions despite performances approaching human expert ones. Much research has been done in Explainability Artificial Intelligence (XAI) field to avoid this ”black box” effect. This field of research attempts to provide explanations for the decision-making of complex networks to promote their acceptability. Most of the explanation methods applied on image classifier networks provide heat maps. These maps highlight pixels according to their importance in decision-making. In this work, we first implement different XAI methods for the automatic classification of Synthetic Aperture Sonar (SAS) images by convolutional neural networks (CNN). These different methods are based on a Post-Hoc approach. We study and compare the different heat maps obtained. Secondly, we evaluate the benefits and the usefulness of explainability in an operational framework for collaboration. To do this, different user tests are carried out with different levels of assistance ranging from classification for an unaided operator, to classification with explained ATR. These tests allow us to study whether heat maps are useful in this context. The results obtained show that the heat maps explanation have a disputed utility according to the operators. Heat map presence does not increase the quality of the classifications. On the contrary, it even increases the response time. 
Nevertheless, half of operators see a certain usefulness in heat maps explanation.","PeriodicalId":44355,"journal":{"name":"ACM Journal of Data and Information Quality","volume":"8 3","pages":""},"PeriodicalIF":1.5000,"publicationDate":"2023-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Journal of Data and Information Quality","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3635113","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
引用次数: 0

Abstract

In critical operational contexts such as mine warfare, Automatic Target Recognition (ATR) algorithms are still poorly accepted. The complexity of their decision-making hampers understanding of their predictions, despite performance approaching that of human experts. Much research has been done in the field of Explainable Artificial Intelligence (XAI) to counter this "black box" effect. This field of research attempts to explain the decision-making of complex networks in order to promote their acceptability. Most explanation methods applied to image-classification networks produce heat maps, which highlight pixels according to their importance in the decision. In this work, we first implement several XAI methods, all based on a post-hoc approach, for the automatic classification of Synthetic Aperture Sonar (SAS) images by convolutional neural networks (CNNs), and we study and compare the resulting heat maps. Second, we evaluate the benefits and usefulness of explainability in an operational, collaborative setting. To do so, we carry out user tests at different levels of assistance, ranging from classification by an unaided operator to classification with an explained ATR. These tests let us study whether heat maps are useful in this context. The results show that the utility of heat-map explanations is disputed among operators: the presence of a heat map does not improve classification quality, and it even increases response time. Nevertheless, half of the operators see some usefulness in heat-map explanations.
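The abstract does not name the specific post-hoc methods used, but a widely used heat-map technique of this kind is Grad-CAM: gradients of the class score with respect to a convolutional layer's activation maps are pooled into per-channel weights, and the weighted, rectified sum of those maps gives a pixel-importance overlay. A minimal NumPy sketch of that computation (function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def grad_cam_heatmap(activations, gradients):
    """Grad-CAM-style heat map from a conv layer's activations (K, H, W)
    and the gradients of the class score w.r.t. those activations (K, H, W)."""
    # Channel weights: global-average-pool the gradients over the spatial dims.
    weights = gradients.mean(axis=(1, 2))                                  # (K,)
    # Weighted sum of activation maps, then ReLU to keep positive evidence.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize to [0, 1] so it can be displayed as an overlay.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: 8 channels of a 7x7 feature map with random values.
rng = np.random.default_rng(0)
acts = rng.random((8, 7, 7))
grads = rng.random((8, 7, 7))
cam = grad_cam_heatmap(acts, grads)
print(cam.shape)  # (7, 7)
```

In practice the activations and gradients would come from a forward/backward pass through the CNN, and the heat map would be upsampled to the input image size before being overlaid on the sonar image.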
ACM Journal of Data and Information Quality
CiteScore: 4.10 · Self-citation rate: 4.80%