{"title":"GLAF: Global-and-Local Attention Flow Model for Question Answering","authors":"Shao-Hua Sun","doi":"10.1145/3571560.3571570","DOIUrl":null,"url":null,"abstract":"Question answering is one of the well-studied tasks in the natural language processing(NLP) community, which aims to secure an answer span from a given document and query. Previous attempts decomposed this task into two subtask, i.e., understanding the semantic information of the given document and query, then finding a reasonable textual span within the document as the corresponding answer. However, one of the major drawbacks of the previous works is lack of extracting sufficient semantics that is buried within the input. To alleviate the issue above, in this paper, we propose a global-local attention flow model to take advantage of the semantic features from different aspects and reduce the redundancy of model encoder. Experimental results on the SQUAD dataset shows that our model outperforms the baseline models, which proves the effectiveness of the proposed method.","PeriodicalId":143909,"journal":{"name":"Proceedings of the 6th International Conference on Advances in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 6th International Conference on Advances in Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3571560.3571570","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Question answering is a well-studied task in the natural language processing (NLP) community that aims to extract an answer span from a given document and query. Previous attempts decomposed this task into two subtasks: understanding the semantic information of the given document and query, then finding a reasonable textual span within the document as the corresponding answer. However, a major drawback of previous works is that they fail to extract sufficient semantics buried within the input. To alleviate this issue, in this paper we propose a global-local attention flow model that takes advantage of semantic features from different aspects and reduces the redundancy of the model encoder. Experimental results on the SQuAD dataset show that our model outperforms the baseline models, demonstrating the effectiveness of the proposed method.
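The abstract does not detail the architecture, but the core idea of combining a global (full-sequence) attention view with a local (windowed) one can be sketched in a few lines. The sketch below is an illustrative assumption, not the paper's method: the module name, the window size, and the learned gate used to blend the two context views are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalLocalAttention(nn.Module):
    """Illustrative sketch of a global-and-local attention block.

    Assumptions (not from the paper): a sliding window of +/- `window`
    tokens defines "local" attention, and a sigmoid gate fuses the
    global and local context vectors per token.
    """

    def __init__(self, d_model: int, window: int = 5):
        super().__init__()
        self.window = window
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.transpose(-2, -1) / (x.size(-1) ** 0.5)

        # Global view: every token attends over the full sequence.
        global_ctx = F.softmax(scores, dim=-1) @ v

        # Local view: mask out positions outside the +/- window.
        n = x.size(1)
        idx = torch.arange(n, device=x.device)
        outside = (idx[None, :] - idx[:, None]).abs() > self.window
        local_ctx = F.softmax(scores.masked_fill(outside, float("-inf")), dim=-1) @ v

        # Learned gate blends the two context views token by token.
        g = torch.sigmoid(self.gate(torch.cat([global_ctx, local_ctx], dim=-1)))
        return g * global_ctx + (1 - g) * local_ctx
```

In this sketch the gate lets each token decide how much to rely on document-wide context versus its immediate neighborhood, which is one plausible reading of extracting "semantic features from different aspects" while keeping a single shared projection for both views.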