Applied AI letters: Latest Articles

Reframing explanation as an interactive medium: The EQUAS (Explainable QUestion Answering System) project
Applied AI letters Pub Date : 2021-11-30 DOI: 10.1002/ail2.60
William Ferguson, Dhruv Batra, Raymond Mooney, Devi Parikh, Antonio Torralba, David Bau, David Diller, Josh Fasching, Jaden Fiotto-Kaufman, Yash Goyal, Jeff Miller, Kerry Moffitt, Alex Montes de Oca, Ramprasaath R. Selvaraju, Ayush Shrivastava, Jialin Wu, Stefan Lee
{"title":"Reframing explanation as an interactive medium: The EQUAS (Explainable QUestion Answering System) project","authors":"William Ferguson,&nbsp;Dhruv Batra,&nbsp;Raymond Mooney,&nbsp;Devi Parikh,&nbsp;Antonio Torralba,&nbsp;David Bau,&nbsp;David Diller,&nbsp;Josh Fasching,&nbsp;Jaden Fiotto-Kaufman,&nbsp;Yash Goyal,&nbsp;Jeff Miller,&nbsp;Kerry Moffitt,&nbsp;Alex Montes de Oca,&nbsp;Ramprasaath R. Selvaraju,&nbsp;Ayush Shrivastava,&nbsp;Jialin Wu,&nbsp;Stefan Lee","doi":"10.1002/ail2.60","DOIUrl":"10.1002/ail2.60","url":null,"abstract":"<p>This letter is a retrospective analysis of our team's research for the Defense Advanced Research Projects Agency Explainable Artificial Intelligence project. Our initial approach was to use salience maps, English sentences, and lists of feature names to explain the behavior of deep-learning-based discriminative systems, with particular focus on visual question answering systems. We found that presenting static explanations along with answers led to limited positive effects. By exploring various combinations of machine and human explanation production and consumption, we evolved a notion of explanation as an interactive process that takes place usually between humans and artificial intelligence systems but sometimes within the software system. We realized that by interacting via explanations people could task and adapt machine learning (ML) agents. We added affordances for editing explanations and modified the ML system to act in accordance with the edits to produce an interpretable interface to the agent. Through this interface, editing an explanation can adapt a system's performance to new, modified purposes. This deep tasking, wherein the agent knows its objective and the explanation for that objective, will be critical to enable higher levels of autonomy.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.60","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41941827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards structured NLP interpretation via graph explainers
Applied AI letters Pub Date : 2021-11-26 DOI: 10.1002/ail2.58
Hao Yuan, Fan Yang, Mengnan Du, Shuiwang Ji, Xia Hu
{"title":"Towards structured NLP interpretation via graph explainers","authors":"Hao Yuan,&nbsp;Fan Yang,&nbsp;Mengnan Du,&nbsp;Shuiwang Ji,&nbsp;Xia Hu","doi":"10.1002/ail2.58","DOIUrl":"10.1002/ail2.58","url":null,"abstract":"<p>Natural language processing (NLP) models have been increasingly deployed in real-world applications, and interpretation for textual data has also attracted dramatic attention recently. Most existing methods generate feature importance interpretation, which indicate the contribution of each word towards a specific model prediction. Text data typically possess highly structured characteristics and feature importance explanation cannot fully reveal the rich information contained in text. To bridge this gap, we propose to generate structured interpretations for textual data. Specifically, we pre-process the original text using dependency parsing, which could transform the text from sequences into graphs. Then graph neural networks (GNNs) are utilized to classify the transformed graphs. In particular, we explore two kinds of structured interpretation for pre-trained GNNs: edge-level interpretation and subgraph-level interpretation. Experimental results over three text datasets demonstrate that the structured interpretation can better reveal the structured knowledge encoded in the text. The experimental analysis further indicates that the proposed interpretations can faithfully reflect the decision-making process of the GNN model.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.58","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43677736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
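To make the pre-processing step in the abstract above concrete, here is a minimal sketch, assuming spaCy and networkx as stand-in tools (the letter does not specify its tooling): a sentence is dependency-parsed and turned into a graph whose nodes are words and whose edges are dependency relations, the kind of input a GNN classifier would then consume.

import spacy
import networkx as nx

# Assumes the small English model is installed: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def sentence_to_graph(text: str) -> nx.Graph:
    """Convert a sentence into an undirected dependency graph (words as nodes, dependencies as edges)."""
    doc = nlp(text)
    g = nx.Graph()
    for tok in doc:
        g.add_node(tok.i, word=tok.text, pos=tok.pos_)
    for tok in doc:
        if tok.head.i != tok.i:          # the root token heads itself; skip that self-loop
            g.add_edge(tok.head.i, tok.i, dep=tok.dep_)
    return g

g = sentence_to_graph("The movie was surprisingly good despite its slow start.")
print(g.number_of_nodes(), g.number_of_edges())

Edge-level interpretation would then score each dependency edge's contribution to the GNN's prediction, and subgraph-level interpretation would score connected groups of such words.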
Explainable activity recognition in videos: Lessons learned
Applied AI letters Pub Date : 2021-11-26 DOI: 10.1002/ail2.59
Chiradeep Roy, Mahsan Nourani, Donald R. Honeycutt, Jeremy E. Block, Tahrima Rahman, Eric D. Ragan, Nicholas Ruozzi, Vibhav Gogate
{"title":"Explainable activity recognition in videos: Lessons learned","authors":"Chiradeep Roy,&nbsp;Mahsan Nourani,&nbsp;Donald R. Honeycutt,&nbsp;Jeremy E. Block,&nbsp;Tahrima Rahman,&nbsp;Eric D. Ragan,&nbsp;Nicholas Ruozzi,&nbsp;Vibhav Gogate","doi":"10.1002/ail2.59","DOIUrl":"10.1002/ail2.59","url":null,"abstract":"<p>We consider the following activity recognition task: given a video, infer the set of activities being performed in the video and assign each frame to an activity. This task can be solved using modern deep learning architectures based on neural networks or conventional classifiers such as linear models and decision trees. While neural networks exhibit superior predictive performance as compared with decision trees and linear models, they are also uninterpretable and less explainable. We address this <i>accuracy-explanability gap</i> using a novel framework that feeds the output of a deep neural network to an interpretable, tractable probabilistic model called dynamic cutset networks, and performs joint reasoning over the two to answer questions. The neural network helps achieve high accuracy while dynamic cutset networks because of their polytime probabilistic reasoning capabilities make the system more explainable. We demonstrate the efficacy of our approach by using it to build three prototype systems that solve human-machine tasks having varying levels of difficulty using cooking videos as an accessible domain. We describe high-level technical details and key lessons learned in our human subjects evaluations of these systems.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.59","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43572819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Toward explainable and advisable model for self-driving cars
Applied AI letters Pub Date : 2021-11-23 DOI: 10.1002/ail2.56
Jinkyu Kim, Anna Rohrbach, Zeynep Akata, Suhong Moon, Teruhisa Misu, Yi-Ting Chen, Trevor Darrell, John Canny
{"title":"Toward explainable and advisable model for self-driving cars","authors":"Jinkyu Kim,&nbsp;Anna Rohrbach,&nbsp;Zeynep Akata,&nbsp;Suhong Moon,&nbsp;Teruhisa Misu,&nbsp;Yi-Ting Chen,&nbsp;Trevor Darrell,&nbsp;John Canny","doi":"10.1002/ail2.56","DOIUrl":"10.1002/ail2.56","url":null,"abstract":"<p>Humans learn to drive through both practice and theory, for example, by studying the rules, while most self-driving systems are limited to the former. Being able to incorporate human knowledge of typical causal driving behavior should benefit autonomous systems. We propose a new approach that learns vehicle control with the help of human advice. Specifically, our system learns to summarize its visual observations in natural language, predict an appropriate action response (eg, “I see a pedestrian crossing, so I stop”), and predict the controls, accordingly. Moreover, to enhance the interpretability of our system, we introduce a fine-grained attention mechanism that relies on semantic segmentation and object-centric RoI pooling. We show that our approach of training the autonomous system with human advice, grounded in a rich semantic representation, matches or outperforms prior work in terms of control prediction and explanation generation. Our approach also results in more interpretable visual explanations by visualizing object-centric attention maps. We evaluate our approach on a novel driving dataset with ground-truth human explanations, the Berkeley DeepDrive eXplanation (BDD-X) dataset.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.56","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46237887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
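As a rough, hypothetical illustration of object-centric attention in general (not the authors' architecture), the sketch below pools one feature vector per detected object, scores each object with a small network, and forms an attention-weighted context vector that a downstream control predictor could consume; the layer sizes and scorer are assumptions made for illustration.

import torch
import torch.nn as nn

class ObjectCentricAttention(nn.Module):
    """Minimal sketch: score each object's RoI-pooled feature, softmax over objects,
    and return an attention-weighted context vector plus the attention weights."""
    def __init__(self, feat_dim: int, hidden: int = 128):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, roi_feats: torch.Tensor):
        # roi_feats: (num_objects, feat_dim), e.g. RoI-pooled features of segmented objects
        scores = self.scorer(roi_feats).squeeze(-1)                 # (num_objects,)
        weights = torch.softmax(scores, dim=0)                      # attention over objects
        context = (weights.unsqueeze(-1) * roi_feats).sum(dim=0)    # (feat_dim,)
        return context, weights                                     # weights can be rendered as an attention map

# Usage with random stand-in features for 12 detected objects
attn = ObjectCentricAttention(feat_dim=256)
feats = torch.randn(12, 256)
context, weights = attn(feats)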
Objective criteria for explanations of machine learning models
Applied AI letters Pub Date : 2021-11-23 DOI: 10.1002/ail2.57
Chih-Kuan Yeh, Pradeep Ravikumar
{"title":"Objective criteria for explanations of machine learning models","authors":"Chih-Kuan Yeh,&nbsp;Pradeep Ravikumar","doi":"10.1002/ail2.57","DOIUrl":"10.1002/ail2.57","url":null,"abstract":"<p>Objective criteria to evaluate the performance of machine learning (ML) model explanations are a critical ingredient in bringing greater rigor to the field of explainable artificial intelligence. In this article, we survey three of our proposed criteria that each target different classes of explanations. In the first, targeted at real-valued feature importance explanations, we define a class of “infidelity” measures that capture how well the explanations match the ML models. We show that instances of such infidelity minimizing explanations correspond to many popular recently proposed explanations and, moreover, can be shown to satisfy well-known game-theoretic axiomatic properties. In the second, targeted to feature set explanations, we define a robustness analysis-based criterion and show that deriving explainable feature sets based on the robustness criterion yields more qualitatively impressive explanations. Lastly, for sample explanations, we provide a decomposition-based criterion that allows us to provide very scalable and compelling classes of sample-based explanations.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.57","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44018783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
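A hedged sketch of an infidelity-style measure in the spirit of the abstract above: it checks how well the inner product of a random perturbation with an attribution vector tracks the model's actual output change under that perturbation. The Gaussian perturbation distribution, squared-error form, and sample count are illustrative assumptions, not the paper's exact definition.

import numpy as np

def infidelity(f, x, attribution, n_samples=1000, noise_scale=0.1, seed=0):
    """Monte Carlo estimate of how badly the attribution predicts output changes under perturbations."""
    rng = np.random.default_rng(seed)
    fx = f(x)
    errs = []
    for _ in range(n_samples):
        pert = rng.normal(scale=noise_scale, size=x.shape)   # perturbation I
        predicted_change = pert @ attribution                 # I^T * Phi(f, x)
        actual_change = fx - f(x - pert)                      # f(x) - f(x - I)
        errs.append((predicted_change - actual_change) ** 2)
    return float(np.mean(errs))

# Usage with a toy linear model, whose gradient is a perfectly faithful attribution
w = np.array([2.0, -1.0, 0.5])
f = lambda x: float(w @ x)
x = np.array([1.0, 3.0, -2.0])
print(infidelity(f, x, attribution=w))   # near zero: the gradient explains a linear model well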
Generating visual explanations with natural language
Applied AI letters Pub Date : 2021-11-22 DOI: 10.1002/ail2.55
Lisa Anne Hendricks, Anna Rohrbach, Bernt Schiele, Trevor Darrell, Zeynep Akata
{"title":"Generating visual explanations with natural language","authors":"Lisa Anne Hendricks,&nbsp;Anna Rohrbach,&nbsp;Bernt Schiele,&nbsp;Trevor Darrell,&nbsp;Zeynep Akata","doi":"10.1002/ail2.55","DOIUrl":"10.1002/ail2.55","url":null,"abstract":"<p>We generate natural language explanations for a fine-grained visual recognition task. Our explanations fulfill two criteria. First, explanations are <i>class discriminative</i>, meaning they mention attributes in an image which are important to identify a class. Second, explanations are <i>image relevant</i>, meaning they reflect the actual content of an image. Our system, composed of an explanation sampler and phrase-critic model, generates class discriminative and image relevant explanations. In addition, we demonstrate that our explanations can help humans decide whether to accept or reject an AI decision.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.55","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48200035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Explaining autonomous drones: An XAI journey
Applied AI letters Pub Date : 2021-11-22 DOI: 10.1002/ail2.54
Mark Stefik, Michael Youngblood, Peter Pirolli, Christian Lebiere, Robert Thomson, Robert Price, Lester D. Nelson, Robert Krivacic, Jacob Le, Konstantinos Mitsopoulos, Sterling Somers, Joel Schooler
{"title":"Explaining autonomous drones: An XAI journey","authors":"Mark Stefik,&nbsp;Michael Youngblood,&nbsp;Peter Pirolli,&nbsp;Christian Lebiere,&nbsp;Robert Thomson,&nbsp;Robert Price,&nbsp;Lester D. Nelson,&nbsp;Robert Krivacic,&nbsp;Jacob Le,&nbsp;Konstantinos Mitsopoulos,&nbsp;Sterling Somers,&nbsp;Joel Schooler","doi":"10.1002/ail2.54","DOIUrl":"10.1002/ail2.54","url":null,"abstract":"<p>COGLE (<i>CO</i>mmon <i>G</i>round <i>L</i>earning and <i>E</i>xplanation) is an explainable artificial intelligence (XAI) system where autonomous drones deliver supplies to field units in mountainous areas. The mission risks vary with topography, flight decisions, and mission goals. The missions engage a human plus AI team where users determine which of two AI-controlled drones is better for each mission. This article reports on the technical approach and findings of the project and reflects on challenges that complex combinatorial problems present for users, machine learning, user studies, and the context of use for XAI systems. COGLE creates explanations in multiple modalities. Narrative “What” explanations compare what each drone does on a mission and “Why” based on drone competencies determined from experiments using counterfactuals. Visual “Where” explanations highlight risks on maps to help users to interpret flight plans. One branch of the research studied whether the explanations helped users to predict drone performance. In this branch, a model induction user study showed that <i>post-decision explanations</i> had only a small effect in teaching users to determine by themselves which drone is better for a mission. Subsequent reflection suggests that supporting human plus AI decision making with <i>pre-decision explanations</i> is a better context for benefiting from explanations on combinatorial tasks.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.54","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47270045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Explaining robot policies
Applied AI letters Pub Date : 2021-11-13 DOI: 10.1002/ail2.52
Olivia Watkins, Sandy Huang, Julius Frost, Kush Bhatia, Eric Weiner, Pieter Abbeel, Trevor Darrell, Bryan Plummer, Kate Saenko, Anca Dragan
{"title":"Explaining robot policies","authors":"Olivia Watkins,&nbsp;Sandy Huang,&nbsp;Julius Frost,&nbsp;Kush Bhatia,&nbsp;Eric Weiner,&nbsp;Pieter Abbeel,&nbsp;Trevor Darrell,&nbsp;Bryan Plummer,&nbsp;Kate Saenko,&nbsp;Anca Dragan","doi":"10.1002/ail2.52","DOIUrl":"10.1002/ail2.52","url":null,"abstract":"<p>In order to interact with a robot or make wise decisions about where and how to deploy it in the real world, humans need to have an accurate mental model of how the robot acts in different situations. We propose to improve users' mental model of a robot by showing them examples of how the robot behaves in informative scenarios. We explore this in two settings. First, we show that when there are many possible environment states, users can more quickly understand the robot's policy if they are shown <i>critical states</i> where taking a particular action is important. Second, we show that when there is a distribution shift between training and test environment distributions, then it is more effective to show <i>exploratory states</i> that the robot does not visit naturally.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.52","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41445238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
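One common way to operationalize the "critical states" idea, sketched below under the assumption that the policy exposes per-action values: flag states where the best action's value stands out sharply from the average, since those are the states where the choice of action matters most. The value-gap criterion and threshold are illustrative assumptions, not necessarily the paper's definition.

import numpy as np

def critical_states(q_values, threshold=1.0):
    """q_values: array of shape (num_states, num_actions); returns indices of critical states."""
    gap = q_values.max(axis=1) - q_values.mean(axis=1)   # how much the best action stands out
    return np.where(gap > threshold)[0]

q = np.array([
    [1.0, 1.1, 0.9],    # all actions similar -> not critical
    [5.0, 0.2, 0.1],    # one action clearly best -> critical
    [2.0, 2.0, 2.0],    # indifferent -> not critical
])
print(critical_states(q))   # -> [1]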
Methods and standards for research on explainable artificial intelligence: Lessons from intelligent tutoring systems
Applied AI letters Pub Date : 2021-11-13 DOI: 10.1002/ail2.53
William J. Clancey, Robert R. Hoffman
{"title":"Methods and standards for research on explainable artificial intelligence: Lessons from intelligent tutoring systems","authors":"William J. Clancey,&nbsp;Robert R. Hoffman","doi":"10.1002/ail2.53","DOIUrl":"https://doi.org/10.1002/ail2.53","url":null,"abstract":"<p>The DARPA Explainable Artificial Intelligence (AI) (XAI) Program focused on generating explanations for AI programs that use machine learning techniques. This article highlights progress during the DARPA Program (2017-2021) relative to research since the 1970s in the field of intelligent tutoring systems (ITSs). ITS researchers learned a great deal about explanation that is directly relevant to XAI. We suggest opportunities for future XAI research deriving from ITS methods, and consider the challenges shared by both ITS and XAI in using AI to assist people in solving difficult problems effectively and efficiently.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.53","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"137515652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Generating and evaluating explanations of attended and error-inducing input regions for VQA models
Applied AI letters Pub Date : 2021-11-12 DOI: 10.1002/ail2.51
Arijit Ray, Michael Cogswell, Xiao Lin, Kamran Alipour, Ajay Divakaran, Yi Yao, Giedrius Burachas
{"title":"Generating and evaluating explanations of attended and error-inducing input regions for VQA models","authors":"Arijit Ray,&nbsp;Michael Cogswell,&nbsp;Xiao Lin,&nbsp;Kamran Alipour,&nbsp;Ajay Divakaran,&nbsp;Yi Yao,&nbsp;Giedrius Burachas","doi":"10.1002/ail2.51","DOIUrl":"https://doi.org/10.1002/ail2.51","url":null,"abstract":"<p>Attention maps, a popular heatmap-based explanation method for Visual Question Answering, are supposed to help users understand the model by highlighting portions of the image/question used by the model to infer answers. However, we see that users are often misled by current attention map visualizations that point to relevant regions despite the model producing an incorrect answer. Hence, we propose Error Maps that clarify the error by highlighting image regions where the model is prone to err. Error maps can indicate when a correctly attended region may be processed incorrectly leading to an incorrect answer, and hence, improve users' understanding of those cases. To evaluate our new explanations, we further introduce a metric that simulates users' interpretation of explanations to evaluate their potential helpfulness to understand model correctness. We finally conduct user studies to see that our new explanations help users understand model correctness better than baselines by an expected 30% and that our proxy helpfulness metrics correlate strongly (<math>\u0000 <mi>ρ</mi>\u0000 <mo>&gt;</mo>\u0000 <mn>0.97</mn></math>) with how well users can predict model correctness.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.51","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"137830967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0