Explanation Ontology: A general-purpose, semantic representation for supporting user-centered explanations

Impact Factor: 3.0 | CAS Tier 3 (Computer Science) | JCR Q2, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Semantic Web | Pub Date: 2023-05-18 | DOI: 10.3233/sw-233282
Shruthi Chari, O. Seneviratne, M. Ghalwash, Sola S. Shirai, Daniel Gruen, Pablo Meyer, P. Chakraborty, D. McGuinness
{"title":"Explanation Ontology: A general-purpose, semantic representation for supporting user-centered explanations","authors":"Shruthi Chari, O. Seneviratne, M. Ghalwash, Sola S. Shirai, Daniel Gruen, Pablo Meyer, P. Chakraborty, D. McGuinness","doi":"10.3233/sw-233282","DOIUrl":null,"url":null,"abstract":"In the past decade, trustworthy Artificial Intelligence (AI) has emerged as a focus for the AI community to ensure better adoption of AI models, and explainable AI is a cornerstone in this area. Over the years, the focus has shifted from building transparent AI methods to making recommendations on how to make black-box or opaque machine learning models and their results more understandable by experts and non-expert users. In our previous work, to address the goal of supporting user-centered explanations that make model recommendations more explainable, we developed an Explanation Ontology (EO). The EO is a general-purpose representation that was designed to help system designers connect explanations to their underlying data and knowledge. This paper addresses the apparent need for improved interoperability to support a wider range of use cases. We expand the EO, mainly in the system attributes contributing to explanations, by introducing new classes and properties to support a broader range of state-of-the-art explainer models. We present the expanded ontology model, highlighting the classes and properties that are important to model a larger set of fifteen literature-backed explanation types that are supported within the expanded EO. We build on these explanation type descriptions to show how to utilize the EO model to represent explanations in five use cases spanning the domains of finance, food, and healthcare. We include competency questions that evaluate the EO’s capabilities to provide guidance for system designers on how to apply our ontology to their own use cases. This guidance includes allowing system designers to query the EO directly and providing them exemplar queries to explore content in the EO represented use cases. We have released this significantly expanded version of the Explanation Ontology at https://purl.org/heals/eo and updated our resource website, https://tetherless-world.github.io/explanation-ontology, with supporting documentation. Overall, through the EO model, we aim to help system designers be better informed about explanations and support these explanations that can be composed, given their systems’ outputs from various AI models, including a mix of machine learning, logical and explainer models, and different types of data and knowledge available to their systems.","PeriodicalId":48694,"journal":{"name":"Semantic Web","volume":"295 1","pages":""},"PeriodicalIF":3.0000,"publicationDate":"2023-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Semantic Web","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.3233/sw-233282","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 2

Abstract

In the past decade, trustworthy Artificial Intelligence (AI) has emerged as a focus for the AI community to ensure better adoption of AI models, and explainable AI is a cornerstone in this area. Over the years, the focus has shifted from building transparent AI methods to making recommendations on how to make black-box or opaque machine learning models and their results more understandable by experts and non-expert users. In our previous work, to address the goal of supporting user-centered explanations that make model recommendations more explainable, we developed an Explanation Ontology (EO). The EO is a general-purpose representation that was designed to help system designers connect explanations to their underlying data and knowledge. This paper addresses the apparent need for improved interoperability to support a wider range of use cases. We expand the EO, mainly in the system attributes contributing to explanations, by introducing new classes and properties to support a broader range of state-of-the-art explainer models. We present the expanded ontology model, highlighting the classes and properties that are important to model a larger set of fifteen literature-backed explanation types that are supported within the expanded EO. We build on these explanation type descriptions to show how to utilize the EO model to represent explanations in five use cases spanning the domains of finance, food, and healthcare. We include competency questions that evaluate the EO’s capabilities to provide guidance for system designers on how to apply our ontology to their own use cases. This guidance includes allowing system designers to query the EO directly and providing them exemplar queries to explore content in the EO represented use cases. We have released this significantly expanded version of the Explanation Ontology at https://purl.org/heals/eo and updated our resource website, https://tetherless-world.github.io/explanation-ontology, with supporting documentation. Overall, through the EO model, we aim to help system designers be better informed about explanations and support these explanations that can be composed, given their systems’ outputs from various AI models, including a mix of machine learning, logical and explainer models, and different types of data and knowledge available to their systems.
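The abstract notes that the EO is released at https://purl.org/heals/eo and that system designers can query it directly using exemplar queries. As a minimal sketch (not one of the paper's own queries), the snippet below uses Python's rdflib to load the ontology and list its declared classes with their labels; the assumption that the PURL resolves to a parseable RDF/OWL serialization, and the generic class-listing SPARQL pattern, are illustrative and not taken from the EO documentation.

```python
# Minimal sketch: load the Explanation Ontology and list its declared classes.
# Assumes rdflib is installed and that https://purl.org/heals/eo resolves to an
# RDF/OWL serialization; if not, download the file and pass a local path plus
# an explicit format= argument instead.
from rdflib import Graph

EO_URL = "https://purl.org/heals/eo"  # ontology IRI from the paper

g = Graph()
g.parse(EO_URL)  # rdflib guesses the serialization; specify format= if needed

# Illustrative query (not an exemplar query from the paper): list every
# declared OWL class together with its rdfs:label, if one is provided.
query = """
PREFIX owl:  <http://www.w3.org/2002/07/owl#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?cls ?label WHERE {
    ?cls a owl:Class .
    OPTIONAL { ?cls rdfs:label ?label }
}
ORDER BY ?cls
"""

for cls, label in g.query(query):
    print(cls, "-", label if label is not None else "(no label)")
```

The same pattern extends naturally to the use-case content described in the paper: once the graph is loaded, the exemplar queries provided on the resource website can be substituted for the generic class-listing query above.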
Source journal: Semantic Web
Categories: COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, INFORMATION SYSTEMS
CiteScore: 8.30
Self-citation rate: 6.70%
Articles published: 68
Journal description: The journal Semantic Web – Interoperability, Usability, Applicability brings together researchers from various fields which share the vision and need for more effective and meaningful ways to share information across agents and services on the future internet and elsewhere. As such, Semantic Web technologies shall support the seamless integration of data, on-the-fly composition and interoperation of Web services, as well as more intuitive search engines. The semantics – or meaning – of information, however, cannot be defined without a context, which makes personalization, trust, and provenance core topics for Semantic Web research. New retrieval paradigms, user interfaces, and visualization techniques have to unleash the power of the Semantic Web and at the same time hide its complexity from the user. Based on this vision, the journal welcomes contributions ranging from theoretical and foundational research over methods and tools to descriptions of concrete ontologies and applications in all areas. We especially welcome papers which add a social, spatial, and temporal dimension to Semantic Web research, as well as application-oriented papers making use of formal semantics.