Applied AI letters: Latest Articles

XAITK: The explainable AI toolkit
Applied AI letters Pub Date: 2021-10-18 DOI: 10.1002/ail2.40
Brian Hu, Paul Tunison, Bhavan Vasu, Nitesh Menon, Roddy Collins, Anthony Hoogs
Abstract: Recent advances in artificial intelligence (AI), driven mainly by deep neural networks, have yielded remarkable progress in fields such as computer vision, natural language processing, and reinforcement learning. Despite these successes, the inability to predict how AI systems will behave “in the wild” impacts almost all stages of planning and deployment, including research and development, verification and validation, and user trust and acceptance. The field of explainable artificial intelligence (XAI) seeks to develop techniques enabling AI algorithms to generate explanations of their results; generally these are human-interpretable representations or visualizations that are meant to “explain” how the system produced its outputs. We introduce the Explainable AI Toolkit (XAITK), a DARPA-sponsored effort that builds on results from the 4-year DARPA XAI program. The XAITK has two goals: (a) to consolidate research results from DARPA XAI into a single publicly accessible repository; and (b) to identify operationally relevant capabilities developed on DARPA XAI and assist in their transition to interested partners. We first describe the XAITK website and associated capabilities. These place the research results from DARPA XAI in the wider context of general research in the field of XAI, and include performer contributions of code, data, publications, and reports. We then describe the XAITK analytics and autonomy software frameworks. These are Python-based frameworks focused on particular XAI domains, and designed to provide a single integration endpoint for multiple algorithm implementations from across DARPA XAI. Each framework generalizes APIs for system-level data and control while providing a plugin interface for existing and future algorithm implementations. The XAITK project can be followed at: https://xaitk.org.
Citations: 7
Explainable neural computation via stack neural module networks
Applied AI letters Pub Date: 2021-10-16 DOI: 10.1002/ail2.39
Ronghang Hu, Jacob Andreas, Trevor Darrell, Kate Saenko
Abstract: In complex inferential tasks like question answering, machine learning models must confront two challenges: the need to implement a compositional reasoning process, and, in many applications, the need for this reasoning process to be interpretable to assist users in both development and prediction. Existing models designed to produce interpretable traces of their decision-making process typically require these traces to be supervised at training time. In this paper, we present a novel neural modular approach that performs compositional reasoning by automatically inducing a desired subtask decomposition without relying on strong supervision. Our model allows linking different reasoning tasks through shared modules that handle common routines across tasks. Experiments show that the model is more interpretable to human evaluators compared to other state-of-the-art models: users can better understand the model's underlying reasoning procedure and predict when it will succeed or fail based on observing its intermediate outputs.
Citations: 0
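The core idea above is that small shared modules are chained into a per-question layout, and the intermediate outputs form an inspectable reasoning trace. A drastically simplified, non-neural sketch (the scene, modules, and fixed layout are hypothetical; in the paper the layout is induced by the model):

```python
# Hypothetical sketch of module-network composition: shared modules are
# chained according to a layout, and each intermediate output is recorded
# as a human-inspectable reasoning trace.
def find(scene, attr):
    """Attend to objects carrying an attribute."""
    return [obj for obj in scene if attr in obj["attrs"]]

def filter_color(att, color):
    """Refine an attention set by color."""
    return [obj for obj in att if obj["color"] == color]

def count(att):
    """Reduce an attention set to an answer."""
    return len(att)

SCENE = [
    {"color": "red", "attrs": {"cube"}},
    {"color": "blue", "attrs": {"cube"}},
    {"color": "red", "attrs": {"sphere"}},
]

# Layout for "how many red cubes?": find -> filter_color -> count.
trace = []
att = find(SCENE, "cube")
trace.append(("find", att))
att = filter_color(att, "red")
trace.append(("filter_color", att))
answer = count(att)
trace.append(("count", answer))
print(answer)  # 1
```

Inspecting `trace` shows what each module attended to at each step, which is the interpretability property the human evaluations in the paper measure.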
Abstraction, validation, and generalization for explainable artificial intelligence
Applied AI letters Pub Date: 2021-09-02 DOI: 10.1002/ail2.37
Scott Cheng-Hsin Yang, Tomas Folke, Patrick Shafto
Abstract: Neural network architectures are achieving superhuman performance on an expanding range of tasks. To effectively and safely deploy these systems, their decision-making must be understandable to a wide range of stakeholders. Methods to explain artificial intelligence (AI) have been proposed to answer this challenge, but a lack of theory impedes the development of systematic abstractions, which are necessary for cumulative knowledge gains. We propose Bayesian Teaching as a framework for unifying explainable AI (XAI) by integrating machine learning and human learning. Bayesian Teaching formalizes explanation as a communication act of an explainer to shift the beliefs of an explainee. This formalization decomposes a wide range of XAI methods into four components: (a) the target inference, (b) the explanation, (c) the explainee model, and (d) the explainer model. The abstraction afforded by Bayesian Teaching to decompose XAI methods elucidates the invariances among them. The decomposition of XAI systems enables modular validation, as each of the first three components listed can be tested semi-independently. This decomposition also promotes generalization through recombination of components from different XAI systems, which facilitates the generation of novel variants. These new variants need not be evaluated one by one provided that each component has been validated, leading to an exponential decrease in development time. Finally, by making the goal of explanation explicit, Bayesian Teaching helps developers to assess how suitable an XAI system is for its intended real-world use case. Thus, Bayesian Teaching provides a theoretical framework that encourages systematic, scientific investigation of XAI.
Citations: 0
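The four-component decomposition can be made concrete with a tiny discrete example: an explainee model that updates its beliefs by Bayes' rule, and an explainer model that chooses the explanation shifting those beliefs most toward the target inference. The hypotheses, likelihoods, and numbers below are hypothetical illustrations, not from the paper.

```python
# Hypothetical sketch of the Bayesian Teaching decomposition:
# (a) target inference, (b) candidate explanations, (c) explainee model,
# (d) explainer model.
def explainee_posterior(prior, likelihood, explanation):
    """(c) Explainee model: Bayes update after observing one explanation."""
    unnorm = {h: prior[h] * likelihood[h][explanation] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

def explainer_choice(prior, likelihood, explanations, target):
    """(d) Explainer model: pick the explanation (b) that maximizes the
    explainee's posterior belief in the target inference (a)."""
    return max(explanations,
               key=lambda e: explainee_posterior(prior, likelihood, e)[target])

prior = {"cat": 0.5, "dog": 0.5}
likelihood = {  # P(explanation | hypothesis)
    "cat": {"whiskers": 0.9, "fetch": 0.1},
    "dog": {"whiskers": 0.3, "fetch": 0.7},
}
best = explainer_choice(prior, likelihood, ["whiskers", "fetch"], target="cat")
print(best)  # 'whiskers'
```

Because each component is a separate function, the modular validation the abstract describes falls out directly: the explainee model can be tested against human belief data independently of how the explainer searches.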
From “no clear winner” to an effective Explainable Artificial Intelligence process: An empirical journey
Applied AI letters Pub Date: 2021-07-18 DOI: 10.1002/ail2.36
Jonathan Dodge, Andrew Anderson, Roli Khanna, Jed Irvine, Rupika Dikkala, Kin-Ho Lam, Delyar Tabatabai, Anita Ruangrotsakun, Zeyad Shureih, Minsuk Kahng, Alan Fern, Margaret Burnett
Abstract: “In what circumstances would you want this AI to make decisions on your behalf?” We have been investigating how to enable a user of an Artificial Intelligence-powered system to answer questions like this through a series of empirical studies, a group of which we summarize here. We began the series by (a) comparing four explanation configurations of saliency explanations and/or reward explanations. From this study we learned that, although some configurations had significant strengths, no one configuration was a clear “winner.” This result led us to hypothesize that one reason for the low success rates Explainable AI (XAI) research has in enabling users to create a coherent mental model is that the AI itself does not have a coherent model. This hypothesis led us to (b) build a model-based agent, to compare explaining it with explaining a model-free agent. Our results were encouraging, but we then realized that participants' cognitive energy was being sapped by having to create not only a mental model, but also a process by which to create that mental model. This realization led us to (c) create such a process (which we term After-Action Review for AI or “AAR/AI”) for them, integrate it into the explanation environment, and compare participants' success with AAR/AI scaffolding vs without it. Our AAR/AI studies' results showed that AAR/AI participants were more effective assessing the AI than non-AAR/AI participants, with significantly better precision and significantly better recall at finding the AI's reasoning flaws.
Citations: 3
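The headline result is stated in terms of precision and recall at finding the AI's reasoning flaws. For concreteness, those metrics over a flaw-finding task look like this (the flagged decision points and ground-truth flaws below are hypothetical):

```python
def precision_recall(flagged, true_flaws):
    """Precision: fraction of flagged items that are real flaws.
    Recall: fraction of real flaws that were flagged."""
    flagged, true_flaws = set(flagged), set(true_flaws)
    tp = len(flagged & true_flaws)
    precision = tp / len(flagged) if flagged else 0.0
    recall = tp / len(true_flaws) if true_flaws else 0.0
    return precision, recall

# Hypothetical assessment: a participant flags decision points 2, 5, and 9
# as flawed; the AI's actual reasoning flaws are at points 2, 5, and 7.
p, r = precision_recall([2, 5, 9], [2, 5, 7])
print(p, r)  # 0.666... 0.666...
```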
A practical approach for applying machine learning in the detection and classification of network devices used in building management
Applied AI letters Pub Date: 2021-07-04 DOI: 10.1002/ail2.35
Maroun Touma, Shalisha Witherspoon, Shonda Witherspoon, Isabelle Crawford-Eng
Abstract: With the increasing deployment of smart buildings and infrastructure, supervisory control and data acquisition (SCADA) devices and the underlying IT network have become essential elements for the proper operations of these highly complex systems. With the increase in automation and the proliferation of SCADA devices, the attack surface of critical infrastructure has grown correspondingly. Understanding device behaviors in near-real time, in terms of known and understood (or potentially qualified) activities vs unknown and potentially nefarious activities, is a key component of any security solution. In this paper, we investigate the challenges of building robust machine learning models to identify unknowns purely from network traffic both inside and outside firewalls, starting with missing or inconsistent labels across sites, feature engineering and learning, temporal dependencies and analysis, and training data quality (including small sample sizes) for both shallow and deep learning methods. To demonstrate these challenges and the capabilities we have developed, we focus on Building Automation and Control networks (BACnet) from a private commercial building system. Our results show that a “Model Zoo” built from binary classifiers based on each device or behavior, combined with an ensemble classifier integrating information from all classifiers, provides a reliable methodology to identify unknown devices as well as to determine specific known devices when the device type is in the training set. The capability of the Model Zoo framework is shown to be directly linked to feature engineering and learning, and the dependency of the feature selection varies across both the binary and ensemble classifiers.
Citations: 0
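The "Model Zoo" idea, one binary classifier per known device type plus an ensemble that can abstain, can be sketched as follows. The scoring rules, port numbers, and threshold here are hypothetical stand-ins for trained classifiers:

```python
# Hypothetical "Model Zoo" sketch: one binary scorer per known device type,
# an ensemble that takes the most confident scorer, and an "unknown" verdict
# when no scorer clears a confidence threshold.
def thermostat_score(traffic):
    """Toy stand-in for a trained binary classifier (BACnet port, low rate)."""
    return 1.0 if traffic["port"] == 47808 and traffic["rate"] < 10 else 0.1

def camera_score(traffic):
    """Toy stand-in for a trained binary classifier (RTSP port, high rate)."""
    return 0.9 if traffic["port"] == 554 and traffic["rate"] > 100 else 0.1

ZOO = {"thermostat": thermostat_score, "camera": camera_score}

def classify(traffic, threshold=0.5):
    """Ensemble over the zoo; abstains to 'unknown' below the threshold."""
    scores = {name: scorer(traffic) for name, scorer in ZOO.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "unknown"

print(classify({"port": 47808, "rate": 2}))  # thermostat
print(classify({"port": 22, "rate": 500}))   # unknown
```

The abstention branch is what lets the ensemble surface never-before-seen devices rather than forcing every flow into a known class.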
Towards an affordable magnetomyography instrumentation and low model complexity approach for labour imminency prediction using a novel multiresolution analysis
Applied AI letters Pub Date: 2021-06-26 DOI: 10.1002/ail2.34
Ejay Nsugbe, Ibrahim Sanusi
Abstract: The ability to predict the onset of labour is seen to be an important tool in a clinical setting. Magnetomyography has shown promise in the area of labour imminency prediction, but its clinical application remains limited due to the high resource consumption associated with its broad number of channels. In this study, five electrode channels, which account for 3.3% of the total, are used alongside a novel signal decomposition algorithm and low-complexity classifiers (logistic regression and linear SVM) to classify labour imminency as due within 0 to 48 hours vs beyond 48 hours. The results suggest that the parsimonious representation comprising five electrode channels and the novel signal decomposition method, alongside the candidate classifiers, could allow for greater affordability and hence clinical viability of the magnetomyography-based prediction model, which carries a good degree of model interpretability. The results showed around a 20% increase on average for the novel decomposition method, alongside a reduced group of features, across the various classification metrics considered for both the logistic regression and support vector machine.
Citations: 0
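The paper pairs a multiresolution signal decomposition with low-complexity classifiers. The specific decomposition is not described in the abstract, so as a generic stand-in, a one-level Haar wavelet analysis illustrates the overall shape: split a signal into approximation and detail bands, then summarize each band into features for a simple classifier. The signal values below are hypothetical.

```python
def haar_level(signal):
    """One level of Haar multiresolution analysis: pairwise averages
    (approximation band) and pairwise half-differences (detail band)."""
    approx = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

# Toy magnetomyography-like trace (hypothetical values).
approx, detail = haar_level([4.0, 6.0, 10.0, 12.0])
print(approx, detail)  # [5.0, 11.0] [-1.0, -1.0]

# Band summaries like these could then feed a low-complexity classifier
# such as logistic regression or a linear SVM.
features = [sum(approx) / len(approx), sum(d * d for d in detail)]
print(features)  # [8.0, 2.0]
```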
Methods and Standards for Research on Explainable Artificial Intelligence: Lessons from Intelligent Tutoring Systems
Applied AI letters Pub Date: 2021-06-08 DOI: 10.22541/AU.162317004.45114437/V1
Robert Hoffman, W. Clancey
Abstract: We reflect on progress in the Explainable AI (XAI) Program relative to previous work in the area of intelligent tutoring systems (ITS). A great deal was learned about explanation, and many challenges uncovered, in research that is directly relevant to XAI. We suggest opportunities for future XAI research deriving from ITS methods, as well as the challenges shared by both ITS and XAI in using AI to assist people in solving difficult problems effectively and efficiently.
Citations: 13
Adapting natural language processing for technical text
Applied AI letters Pub Date: 2021-06-02 DOI: 10.1002/ail2.33
Alden Dima, Sarah Lukens, Melinda Hodkiewicz, Thurston Sexton, Michael P. Brundage
Abstract: Despite recent dramatic successes, natural language processing (NLP) is not ready to address a variety of real-world problems. Its reliance on large standard corpora, a training and evaluation paradigm that favors the learning of shallow heuristics, and its large computational resource requirements make domain-specific application of even the most successful NLP techniques difficult. This paper proposes technical language processing (TLP), which brings engineering principles and practices to NLP specifically for the purpose of extracting actionable information from language generated by experts in their technical tasks, systems, and processes. TLP envisages NLP as a socio-technical system rather than as an algorithmic pipeline. We describe how the TLP approach to meaning and generalization differs from that of NLP, how data quantity and quality can be addressed in engineering technical domains, and the potential risks of not adapting NLP for technical use cases. Engineering problems can benefit immensely from the inclusion of knowledge from unstructured data, currently unavailable due to issues with out-of-the-box NLP packages. We illustrate the TLP approach by focusing on maintenance in industrial organizations as a case study.
Citations: 17
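One concrete step in the TLP spirit is normalizing the jargon-heavy shorthand of maintenance work orders with a curated domain lexicon before any downstream NLP. The lexicon and work-order text below are hypothetical examples, not from the paper:

```python
# Hypothetical TLP-style preprocessing: expand domain shorthand in a
# maintenance work order using an expert-curated lexicon, a step that
# off-the-shelf NLP tokenizers and vocabularies typically cannot perform.
LEXICON = {
    "hyd": "hydraulic",
    "lkg": "leaking",
    "pmp": "pump",
    "rplc": "replace",
}

def normalize(work_order):
    """Lowercase, split on whitespace and slashes, expand known shorthand."""
    tokens = work_order.lower().replace("/", " ").split()
    return " ".join(LEXICON.get(tok, tok) for tok in tokens)

print(normalize("Hyd pmp lkg / rplc seal"))
# hydraulic pump leaking replace seal
```

The socio-technical point is that the lexicon itself is maintained by the domain experts who write the work orders, not learned from a general-purpose corpus.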
Issue Information
Applied AI letters Pub Date: 2021-06-01 DOI: 10.1002/ail2.13
Citations: 0
Deep imputation on large-scale drug discovery data
Applied AI letters Pub Date: 2021-05-20 DOI: 10.1002/ail2.31
Benedict W. J. Irwin, Thomas M. Whitehead, Scott Rowland, Samar Y. Mahmoud, Gareth J. Conduit, Matthew D. Segall
Abstract: More accurate predictions of the biological properties of chemical compounds would guide the selection and design of new compounds in drug discovery and help to address the enormous cost and low success-rate of pharmaceutical R&D. However, this domain presents a significant challenge for AI methods due to the sparsity of compound data and the noise inherent in results from biological experiments. In this paper, we demonstrate how data imputation using deep learning provides substantial improvements over quantitative structure-activity relationship (QSAR) machine learning models that are widely applied in drug discovery. We present the largest-to-date successful application of deep-learning imputation to datasets which are comparable in size to the corporate data repository of a pharmaceutical company (678 994 compounds by 1166 endpoints). We demonstrate this improvement for three areas of practical application linked to distinct use cases: (a) target activity data compiled from a range of drug discovery projects, (b) a high value and heterogeneous dataset covering complex absorption, distribution, metabolism, and elimination properties, and (c) high throughput screening data, testing the algorithm's limits on early stage noisy and very sparse data. Achieving median coefficients of determination, R², of 0.69, 0.36, and 0.43, respectively, across these applications, the deep learning imputation method offers an unambiguous improvement over random forest QSAR methods, which achieve median R² values of 0.28, 0.19, and 0.23, respectively. We also demonstrate that robust estimates of the uncertainties in the predicted values correlate strongly with the accuracies in prediction, enabling greater confidence in decision-making based on the imputed values.
Citations: 0
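The R² figures quoted above are computed only where measurements exist, since most entries of a compounds-by-endpoints matrix are missing. A sketch of that masked scoring (the matrices below are tiny hypothetical examples, with `None` marking an unmeasured entry):

```python
# Sketch of scoring an imputation model on a sparse assay matrix: the
# coefficient of determination R² is computed only over entries that
# were actually measured; predictions at missing entries are ignored.
def masked_r2(true_rows, pred_rows):
    obs = [(t, p)
           for trow, prow in zip(true_rows, pred_rows)
           for t, p in zip(trow, prow) if t is not None]
    mean_t = sum(t for t, _ in obs) / len(obs)
    ss_res = sum((t - p) ** 2 for t, p in obs)
    ss_tot = sum((t - mean_t) ** 2 for t, _ in obs)
    return 1 - ss_res / ss_tot

true = [[1.0, None, 3.0],
        [None, 2.0, None]]
pred = [[1.1, 9.9, 2.9],   # 9.9 falls on a missing entry, so it is ignored
        [5.0, 2.2, 7.0]]
print(round(masked_r2(true, pred), 3))  # 0.97
```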