Explainable AI in industry: practical challenges and lessons learned: implications tutorial

Krishna Gade, Sahin Cem Geyik, Krishnaram Kenthapadi, Varun Mithal, Ankur Taly
{"title":"Explainable AI in industry: practical challenges and lessons learned: implications tutorial","authors":"Krishna Gade, S. Geyik, K. Kenthapadi, Varun Mithal, Ankur Taly","doi":"10.1145/3351095.3375664","DOIUrl":null,"url":null,"abstract":"Artificial Intelligence is increasingly playing an integral role in determining our day-to-day experiences. Moreover, with the proliferation of AI based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the resulting personal and professional implications of AI have become far-reaching. The dominant role played by AI models in these domains has led to a growing concern regarding potential bias in these models, and a demand for model transparency and interpretability [2, 4]. Model explainability is considered a prerequisite for building trust and adoption of AI systems in high stakes domains such as lending and healthcare [1] requiring reliability, safety, and fairness. It is also critical to automated transportation, and other industrial applications with significant socio-economic implications such as predictive maintenance, exploration of natural resources, and climate change modeling. As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale [5, 6, 8]. In fact, the field of explainability in AI/ML is at an inflexion point. There is a tremendous need from the societal, regulatory, commercial, end-user, and model developer perspectives. Consequently, practical and scalable explainability approaches are rapidly becoming available. The challenges for the research community include: (i) achieving consensus on the right notion of model explainability, (ii) identifying and formalizing explainability tasks from the perspectives of various stakeholders, and (iii) designing measures for evaluating explainability techniques. In this tutorial, we will first motivate the need for model interpretability and explainability in AI [3] from various perspectives. We will then provide a brief overview of several explainability techniques and tools. The rest of the tutorial will focus on the real-world application of explainability techniques in industry. We will present case studies spanning several domains such as: • Search and Recommendation systems: Understanding of search and recommendations systems, as well as how retrieval and ranking decisions happen in real-time [7]. Example applications include explanation of decisions made by an AI system towards job recommendations, ranking of potential candidates for job posters, and content recommendations. • Sales: Understanding of sales predictions in terms of customer up-sell/churn. • Fraud Detection: Examining and explaining AI systems that determine whether a content or event is fraudulent. • Lending: How to understand/interpret lending decisions made by an AI system. We will focus on the sociotechnical dimensions, practical challenges, and lessons learned during development and deployment of these systems, which would be beneficial for researchers and practitioners interested in explainable AI. 
Finally, we will discuss open challenges and research directions for the community.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3351095.3375664","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Artificial Intelligence is increasingly playing an integral role in determining our day-to-day experiences. Moreover, with the proliferation of AI-based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the resulting personal and professional implications of AI have become far-reaching. The dominant role played by AI models in these domains has led to growing concern about potential bias in these models and a demand for model transparency and interpretability [2, 4]. Model explainability is considered a prerequisite for building trust in and adoption of AI systems in high-stakes domains such as lending and healthcare [1], which require reliability, safety, and fairness. It is also critical to automated transportation and other industrial applications with significant socio-economic implications, such as predictive maintenance, exploration of natural resources, and climate change modeling. As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale [5, 6, 8]. In fact, the field of explainability in AI/ML is at an inflection point: there is tremendous need from the societal, regulatory, commercial, end-user, and model-developer perspectives, and practical, scalable explainability approaches are rapidly becoming available. The challenges for the research community include: (i) achieving consensus on the right notion of model explainability, (ii) identifying and formalizing explainability tasks from the perspectives of various stakeholders, and (iii) designing measures for evaluating explainability techniques.

In this tutorial, we will first motivate the need for model interpretability and explainability in AI [3] from various perspectives. We will then provide a brief overview of several explainability techniques and tools. The rest of the tutorial will focus on the real-world application of explainability techniques in industry. We will present case studies spanning several domains, such as:

• Search and recommendation systems: Understanding search and recommendation systems, and how retrieval and ranking decisions are made in real time [7]. Example applications include explaining the decisions an AI system makes for job recommendations, ranking of potential candidates for job posters, and content recommendations.

• Sales: Understanding sales predictions in terms of customer up-sell and churn.

• Fraud detection: Examining and explaining AI systems that determine whether a piece of content or an event is fraudulent.

• Lending: Understanding and interpreting lending decisions made by an AI system.

We will focus on the sociotechnical dimensions, practical challenges, and lessons learned during the development and deployment of these systems, which should benefit researchers and practitioners interested in explainable AI. Finally, we will discuss open challenges and research directions for the community.
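To make the kind of explanation the lending case study calls for more concrete, the sketch below illustrates one simple model-agnostic explainability technique of the sort the tutorial surveys: permutation feature importance. This example is not from the tutorial itself; the dataset, feature names, and model choice are illustrative assumptions.

```python
# A minimal sketch, assuming a hypothetical lending-style classifier.
# Permutation feature importance shuffles one feature at a time and
# measures the resulting drop in accuracy -- a global, model-agnostic
# explanation of which inputs the model relies on.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical applicant features (names are illustrative, not real data).
feature_names = ["income", "debt_to_income", "credit_history_years"]
X = rng.normal(size=(500, 3))
# Synthetic approval label, driven mostly by income and credit history.
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2]
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Repeat each shuffle 10 times to estimate the variance of the importance.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, mean, std in zip(feature_names,
                           result.importances_mean,
                           result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

Because the technique only needs model predictions, the same sketch applies unchanged to any of the case-study domains above; per-decision (local) explanations would instead use attribution methods of the kind the tutorial overviews.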