{"title":"理解和回答不完整的问题","authors":"Angus Addlesee, Marco Damonte","doi":"10.1145/3571884.3597133","DOIUrl":null,"url":null,"abstract":"Voice assistants interrupt people when they pause mid-question, a frustrating interaction that requires the full repetition of the entire question again. This impacts all users, but particularly people with cognitive impairments. In human-human conversation, these situations are recovered naturally as people understand the words that were uttered. In this paper we build answer pipelines which parse incomplete questions and repair them following human recovery strategies. We evaluated these pipelines on our new corpus, SLUICE. It contains 21,000 interrupted questions, from LC-QuAD 2.0 and QALD-9-plus, paired with their underspecified SPARQL queries. Compared to a system that is given the full question, our best partial understanding pipeline answered only 0.77% fewer questions. Results show that our pipeline correctly identifies what information is required to provide an answer but is not yet provided by the incomplete question. It also accurately identifies where that missing information belongs in the semantic structure of the question.","PeriodicalId":127379,"journal":{"name":"Proceedings of the 5th International Conference on Conversational User Interfaces","volume":"70 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Understanding and Answering Incomplete Questions\",\"authors\":\"Angus Addlesee, Marco Damonte\",\"doi\":\"10.1145/3571884.3597133\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Voice assistants interrupt people when they pause mid-question, a frustrating interaction that requires the full repetition of the entire question again. This impacts all users, but particularly people with cognitive impairments. In human-human conversation, these situations are recovered naturally as people understand the words that were uttered. In this paper we build answer pipelines which parse incomplete questions and repair them following human recovery strategies. We evaluated these pipelines on our new corpus, SLUICE. It contains 21,000 interrupted questions, from LC-QuAD 2.0 and QALD-9-plus, paired with their underspecified SPARQL queries. Compared to a system that is given the full question, our best partial understanding pipeline answered only 0.77% fewer questions. Results show that our pipeline correctly identifies what information is required to provide an answer but is not yet provided by the incomplete question. 
It also accurately identifies where that missing information belongs in the semantic structure of the question.\",\"PeriodicalId\":127379,\"journal\":{\"name\":\"Proceedings of the 5th International Conference on Conversational User Interfaces\",\"volume\":\"70 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-07-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 5th International Conference on Conversational User Interfaces\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3571884.3597133\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 5th International Conference on Conversational User Interfaces","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3571884.3597133","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Voice assistants interrupt people when they pause mid-question, a frustrating interaction that forces the user to repeat the entire question. This impacts all users, but particularly people with cognitive impairments. In human-human conversation, these situations are recovered naturally because people understand the words that were already uttered. In this paper we build answer pipelines that parse incomplete questions and repair them following human recovery strategies. We evaluated these pipelines on our new corpus, SLUICE, which contains 21,000 interrupted questions from LC-QuAD 2.0 and QALD-9-plus, each paired with its underspecified SPARQL query. Compared to a system that is given the full question, our best partial-understanding pipeline answered only 0.77% fewer questions. Results show that our pipeline correctly identifies what information is required to provide an answer but is not yet provided by the incomplete question. It also accurately identifies where that missing information belongs in the semantic structure of the question.
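To make the notion of an "underspecified SPARQL query" concrete, the sketch below is a hypothetical illustration, not an example taken from SLUICE or from the paper itself. Because LC-QuAD 2.0 and QALD-9-plus are built over Wikidata, Wikidata identifiers are used; the question, the entity wd:Q142 (France), the property wdt:P36 (capital), and the variable name ?missingEntity are all assumptions made purely for illustration.

    # Hypothetical full question: "What is the capital of France?"
    PREFIX wd:  <http://www.wikidata.org/entity/>
    PREFIX wdt: <http://www.wikidata.org/prop/direct/>
    SELECT ?capital WHERE {
      wd:Q142 wdt:P36 ?capital .   # Q142 = France, P36 = capital
    }

    # Hypothetical interrupted question: "What is the capital of ..."
    # The entity was never uttered, so the subject of the triple is left
    # as an unbound variable: the query records what is missing (an
    # entity) and where it belongs (the subject of the P36 triple).
    PREFIX wdt: <http://www.wikidata.org/prop/direct/>
    SELECT ?capital WHERE {
      ?missingEntity wdt:P36 ?capital .
    }

In this reading, a partial-understanding pipeline of the kind the abstract describes would parse the interrupted question into something like the second query, then recognise that binding ?missingEntity is the one piece of information still required before an answer can be returned.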