{"title":"Multi-Document Answer Generation for Non-Factoid Questions","authors":"Valeriia Bolotova-Baranova","doi":"10.1145/3397271.3401449","DOIUrl":null,"url":null,"abstract":"The current research will be devoted to the challenging and under-investigated task of multi-source answer generation for complex non-factoid questions. We will start with experimenting with generative models on one particular type of non-factoid questions - instrumental/procedural questions which often start with \"how-to\". For this, a new dataset, comprised of more than 100,000 QA-pairs which were crawled from a dedicated web-resource where each answer has a set of references to the articles it was written upon, will be used. We will also compare different ways of model evaluation to choose a metric which better correlates with human assessment. To be able to do this, the way people evaluate answers to non-factoid questions and set some formal criteria of what makes a good quality answer is needed to be understood. Eye-tracking and crowdsourcing methods will be employed to study how users interact with answers and evaluate them, and how the answer features correlate with task complexity. We hope that our research will help to redefine the way users interact and work with search engines so as to transform IR finally into the answer retrieval systems that users have always desired.","PeriodicalId":252050,"journal":{"name":"Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2020-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3397271.3401449","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1
Abstract
This research will be devoted to the challenging and under-investigated task of multi-source answer generation for complex non-factoid questions. We will start by experimenting with generative models on one particular type of non-factoid question: instrumental/procedural questions, which often start with "how to". For this, we will use a new dataset of more than 100,000 QA pairs crawled from a dedicated web resource, where each answer carries a set of references to the articles it was based on. We will also compare different ways of evaluating models in order to choose a metric that correlates better with human assessment. To do this, we need to understand how people evaluate answers to non-factoid questions and to set formal criteria for what makes a high-quality answer. Eye-tracking and crowdsourcing methods will be employed to study how users interact with answers and evaluate them, and how answer features correlate with task complexity. We hope that our research will help redefine the way users interact and work with search engines, finally transforming IR into the answer retrieval systems that users have always desired.
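In practice, comparing evaluation metrics against human assessment usually reduces to measuring how well per-answer metric scores correlate with human ratings. Below is a minimal sketch of that comparison; the scores, ratings, and choice of ROUGE-L as the metric are illustrative assumptions, not data or methods from the paper.

```python
# Sketch: compare an automatic metric against human judgments by correlating
# per-answer metric scores with per-answer human ratings.
from scipy.stats import pearsonr, spearmanr

# Hypothetical example data (not from the paper): one automatic metric score
# and one averaged human rating per generated answer.
metric_scores = [0.42, 0.31, 0.58, 0.25, 0.47]   # e.g., ROUGE-L F1 per answer
human_ratings = [3.8, 2.9, 4.5, 2.1, 4.0]        # e.g., mean 1-5 quality rating

# Pearson measures linear agreement; Spearman measures rank agreement, which
# is often the more relevant criterion when selecting among candidate metrics.
pearson_r, _ = pearsonr(metric_scores, human_ratings)
spearman_rho, _ = spearmanr(metric_scores, human_ratings)
print(f"Pearson r = {pearson_r:.3f}, Spearman rho = {spearman_rho:.3f}")
```

Under this setup, the metric whose scores achieve the highest correlation with the crowdsourced ratings would be preferred for evaluating generated answers.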