Question answering via web extracted tables

Bhavya Karki, Fan Hu, Nithin Haridas, S. Barot, Zihua Liu, Lucile Callebert, Matthias Grabmair, A. Tomasic

Proceedings of the Second International Workshop on Exploiting Artificial Intelligence Techniques for Data Management, published 2019-07-05. DOI: 10.1145/3329859.3329879
Citations: 2
Abstract
Question answering (QA) provides answers to a wide range of questions but is still limited in the complexity of reasoning and the breadth of accessible data sources. In this paper, we describe a dataset and baseline results for a question answering system that utilizes web tables. The dataset is derived from commonly asked questions on the web and their corresponding answers found in tables on websites. Our dataset is novel in that every question is paired with a table of a different signature, so learning must automatically generalize across domains. Each QA training instance comprises a table, a natural language question, and a corresponding structured SQL query. We build our model by dividing question answering into a sequence of tasks, including table retrieval and question element classification, and conduct experiments to measure the performance of each task. Following a traditional machine learning design, we extract features specific to each task, apply a neural model, and then compose a full pipeline that constructs the SQL query from its parts. Our work provides quantitative results and error analysis for each task, and identifies in detail the reasoning required to generate SQL expressions from natural language questions. This analysis of reasoning informs future models based on neural machine learning.
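To illustrate the pipeline shape the abstract describes (classify question elements, then compose the SQL query from those parts), here is a minimal sketch. It is not the paper's model: the function names, the keyword-matching classifier, and the toy table schema are all hypothetical stand-ins for the neural components described above.

```python
# Hypothetical sketch of the two-stage pipeline: question element
# classification followed by SQL composition. The real system uses
# neural models; keyword matching here is only a placeholder.

def classify_elements(question, columns):
    """Toy classifier: pick a SELECT column and an optional WHERE
    column/value from the question by simple keyword matching."""
    tokens = question.lower().rstrip("?").split()
    select_col = next((c for c in columns if c.lower() in tokens), columns[0])
    where_col = next(
        (c for c in columns if c.lower() in tokens and c != select_col), None
    )
    where_val = tokens[-1] if where_col else None
    return select_col, where_col, where_val

def compose_sql(table_name, select_col, where_col=None, where_val=None):
    """Assemble the structured SQL query from its classified parts."""
    sql = f"SELECT {select_col} FROM {table_name}"
    if where_col and where_val:
        sql += f" WHERE {where_col} = '{where_val}'"
    return sql

columns = ["population", "city"]
sel, wcol, wval = classify_elements(
    "What is the population for city Paris?", columns
)
print(compose_sql("web_table", sel, wcol, wval))
# → SELECT population FROM web_table WHERE city = 'paris'
```

The point of the decomposition is that each stage can be trained and evaluated independently, which is what enables the per-task quantitative results and error analysis the paper reports.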