{"title":"Explanatory and Actionable Debugging for Machine Learning: A TableQA Demonstration","authors":"Minseok Cho, Gyeongbok Lee, Seung-won Hwang","doi":"10.1145/3331184.3331404","DOIUrl":null,"url":null,"abstract":"Question answering from tables (TableQA) extracting answers from tables from the question given in natural language, has been actively studied. Existing models have been trained and evaluated mostly with respect to answer accuracy using public benchmark datasets such as WikiSQL. The goal of this demonstration is to show a debugging tool for such models, explaining answers to humans, known as explanatory debugging. Our key distinction is making it \"actionable\" to allow users to directly correct models upon explanation. Specifically, our tool surfaces annotation and models errors for users to correct, and provides actionable insights.","PeriodicalId":20700,"journal":{"name":"Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2019-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3331184.3331404","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6
Abstract
Question answering from tables (TableQA), which extracts answers from a table given a question in natural language, has been actively studied. Existing models have been trained and evaluated mostly with respect to answer accuracy on public benchmark datasets such as WikiSQL. The goal of this demonstration is to show a debugging tool for such models that explains their answers to humans, known as explanatory debugging. Our key distinction is making the explanation "actionable", allowing users to directly correct the model upon seeing the explanation. Specifically, our tool surfaces annotation and model errors for users to correct, and provides actionable insights.
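To make the task setting concrete, below is a minimal, hypothetical sketch of the WikiSQL-style TableQA setup referenced in the abstract: a natural-language question over a table is answered by predicting and executing a simple SQL-like logical form. The table contents, the question, the `predicted_form` structure, and the `execute` helper are all illustrative assumptions, not the authors' system or the WikiSQL API.

```python
# Illustrative WikiSQL-style TableQA example (hypothetical data and helper).
# A real model trained on WikiSQL would predict the logical form from the question.

table = {
    "header": ["Player", "Country", "Points"],
    "rows": [
        ["Alice", "Norway", 120],
        ["Bob", "Canada", 95],
        ["Carol", "Norway", 88],
    ],
}

question = "How many points did Bob score?"

# A WikiSQL-style logical form: a SELECT column, an optional aggregation,
# and WHERE conditions of the form (column, operator, value).
predicted_form = {
    "select": "Points",
    "agg": None,                      # e.g., None, "COUNT", "MAX", ...
    "where": [("Player", "=", "Bob")],
}

def execute(form, table):
    """Execute a simple logical form against the table and return the answer(s)."""
    col_idx = {name: i for i, name in enumerate(table["header"])}
    rows = table["rows"]
    # Apply equality WHERE conditions (only "=" is handled in this sketch).
    for col, op, val in form["where"]:
        if op == "=":
            rows = [r for r in rows if r[col_idx[col]] == val]
    values = [r[col_idx[form["select"]]] for r in rows]
    if form["agg"] == "COUNT":
        return [len(values)]
    return values

print(execute(predicted_form, table))  # -> [95]
```

In this setting, an explanatory debugging tool would surface the predicted logical form rather than only the final answer, so a user can spot, for example, a wrong WHERE condition or a mislabeled annotation and correct it directly, which is the kind of actionable feedback the demonstration targets.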