Generating visual explanations with natural language
Lisa Anne Hendricks, Anna Rohrbach, Bernt Schiele, Trevor Darrell, Zeynep Akata
Applied AI Letters, 2(4). Published 2021-11-22. DOI: 10.1002/ail2.55. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.55

Abstract: We generate natural language explanations for a fine-grained visual recognition task. Our explanations fulfill two criteria. First, explanations are class discriminative: they mention attributes in an image that are important for identifying its class. Second, explanations are image relevant: they reflect the actual content of the image. Our system, composed of an explanation sampler and a phrase-critic model, generates class-discriminative and image-relevant explanations. In addition, we demonstrate that our explanations can help humans decide whether to accept or reject an AI decision.
Explaining autonomous drones: An XAI journey
Mark Stefik, Michael Youngblood, Peter Pirolli, Christian Lebiere, Robert Thomson, Robert Price, Lester D. Nelson, Robert Krivacic, Jacob Le, Konstantinos Mitsopoulos, Sterling Somers, Joel Schooler
Applied AI Letters, 2(4). Published 2021-11-22. DOI: 10.1002/ail2.54. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.54

Abstract: COGLE (COmmon Ground Learning and Explanation) is an explainable artificial intelligence (XAI) system in which autonomous drones deliver supplies to field units in mountainous areas. Mission risks vary with topography, flight decisions, and mission goals. The missions engage a human-plus-AI team in which users determine which of two AI-controlled drones is better for each mission. This article reports the technical approach and findings of the project and reflects on the challenges that complex combinatorial problems present for users, machine learning, user studies, and the context of use for XAI systems. COGLE creates explanations in multiple modalities. Narrative "What" explanations compare what each drone does on a mission, and "Why" explanations draw on drone competencies determined from counterfactual experiments. Visual "Where" explanations highlight risks on maps to help users interpret flight plans. One branch of the research studied whether the explanations helped users predict drone performance. In this branch, a model-induction user study showed that post-decision explanations had only a small effect in teaching users to determine by themselves which drone is better for a mission. Subsequent reflection suggests that supporting human-plus-AI decision making with pre-decision explanations is a better context for benefiting from explanations on combinatorial tasks.
Generating and evaluating explanations of attended and error-inducing input regions for VQA models
Arijit Ray, Michael Cogswell, Xiao Lin, Kamran Alipour, Ajay Divakaran, Yi Yao, Giedrius Burachas
Applied AI Letters, 2(4). Published 2021-11-12. DOI: 10.1002/ail2.51. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.51

Abstract: Attention maps, a popular heatmap-based explanation method for Visual Question Answering (VQA), are meant to help users understand a model by highlighting the portions of the image and question the model uses to infer answers. However, users are often misled by current attention-map visualizations, which can point to relevant regions even when the model produces an incorrect answer. We therefore propose Error Maps, which clarify errors by highlighting image regions where the model is prone to err. Error Maps can indicate when a correctly attended region may still be processed incorrectly, leading to a wrong answer, and thereby improve users' understanding of those cases. To evaluate the new explanations, we further introduce a metric that simulates users' interpretation of explanations to estimate how helpful they are for understanding model correctness. Finally, user studies show that our new explanations help users understand model correctness better than baselines by an expected 30%, and that our proxy helpfulness metric correlates strongly (ρ > 0.97) with how well users can predict model correctness.
Computer Vision and Machine Learning Techniques for Quantification and Predictive Modeling of Intracellular Anti-Cancer Drug Delivery by Nanocarriers
S. Goswami, Kshama D. Dhobale, R. Wavhale, B. Goswami, S. Banerjee
Applied AI Letters. Published 2021-11-10. DOI: 10.1002/ail2.50