Title: Context sight: model understanding and debugging via interpretable context
Authors: Jun Yuan, E. Bertini
DOI: 10.1145/3546930.3547502 (https://doi.org/10.1145/3546930.3547502)
Published in: Proceedings of the 2nd Workshop on Human-In-the-Loop Data Analytics
Publication date: 2022-06-12
Source: Semantic Scholar
Citations: 1
Abstract
Model interpretation is increasingly important for successful model development and deployment. In recent years, many explanation methods have been introduced to help humans understand how a machine learning model makes a decision on a specific instance. Recent studies show that contextualizing an individual model decision within a set of relevant examples can improve model understanding. However, there has been no systematic study of which factors are considered when generating and using context examples to explain model predictions, or of how context examples help with model understanding and debugging in practice. In this work, we first identify a taxonomy of context generation and summarization through a literature review. We then present Context Sight, a visual analytics system that integrates customized context generation with multi-level context summarization to support context exploration and interpretation. We evaluate the usefulness of the system through a detailed use case. This work is an initial step in a line of systematic research on how contextualization can help data scientists and practitioners understand and diagnose model behavior, from which we aim to gain a better understanding of how context is used.