Multi-Column Convolutional Neural Networks with Causality-Attention for Why-Question Answering

Jong-Hoon Oh, Kentaro Torisawa, Canasai Kruengkrai, R. Iida, Julien Kloetzer
{"title":"Multi-Column Convolutional Neural Networks with Causality-Attention for Why-Question Answering","authors":"Jong-Hoon Oh, Kentaro Torisawa, Canasai Kruengkrai, R. Iida, Julien Kloetzer","doi":"10.1145/3018661.3018737","DOIUrl":null,"url":null,"abstract":"Why-question answering (why-QA) is a task to retrieve answers (or answer passages) to why-questions (e.g., \"why are tsunamis generated?\") from a text archive. Several previously proposed methods for why-QA improved their performance by automatically recognizing causalities that are expressed with such explicit cues as \"because\" in answer passages and using the recognized causalities as a clue for finding proper answers. However, in answer passages, causalities might be implicitly expressed, (i.e., without any explicit cues): \"An earthquake suddenly displaced sea water and a tsunami was generated.\" The previous works did not deal with such implicitly expressed causalities and failed to find proper answers that included the causalities. We improve why-QA based on the following two ideas. First, implicitly expressed causalities in one text might be expressed in other texts with explicit cues. If we can automatically recognize such explicitly expressed causalities from a text archive and use them to complement the implicitly expressed causalities in an answer passage, we can improve why-QA. Second, the causes of similar events tend to be described with a similar set of words (e.g., \"seismic energy\" and \"tectonic plates\" for \"the Great East Japan Earthquake\" and \"the 1906 San Francisco Earthquake\"). As such, even if we cannot find in a text archive any explicitly expressed cause of an event (e.g., \"the Great East Japan Earthquake\") expressed in a question (e.g., \"Why did the Great East Japan earthquake happen?\"), we might be able to identify its implicitly expressed causes with a set of words (e.g., \"tectonic plates\") that appear in the explicitly expressed cause of a similar event (e.g., \"the 1906 San Francisco Earthquake\"). We implemented these two ideas in our multi-column convolutional neural networks with a novel attention mechanism, which we call causality attention. Through experiments on Japanese why-QA, we confirmed that our proposed method outperformed the state-of-the-art systems.","PeriodicalId":344017,"journal":{"name":"Proceedings of the Tenth ACM International Conference on Web Search and Data Mining","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"39","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Tenth ACM International Conference on Web Search and Data Mining","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3018661.3018737","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 39

Abstract

Why-question answering (why-QA) is the task of retrieving answers (or answer passages) to why-questions (e.g., "why are tsunamis generated?") from a text archive. Several previously proposed methods for why-QA improved their performance by automatically recognizing causalities that are expressed with such explicit cues as "because" in answer passages and using the recognized causalities as a clue for finding proper answers. However, in answer passages, causalities might be expressed implicitly (i.e., without any explicit cues): "An earthquake suddenly displaced sea water and a tsunami was generated." Previous work did not deal with such implicitly expressed causalities and thus failed to find proper answers that contained them. We improve why-QA based on the following two ideas. First, implicitly expressed causalities in one text might be expressed in other texts with explicit cues. If we can automatically recognize such explicitly expressed causalities from a text archive and use them to complement the implicitly expressed causalities in an answer passage, we can improve why-QA. Second, the causes of similar events tend to be described with a similar set of words (e.g., "seismic energy" and "tectonic plates" for "the Great East Japan Earthquake" and "the 1906 San Francisco Earthquake"). As such, even if we cannot find in a text archive any explicitly expressed cause of an event (e.g., "the Great East Japan Earthquake") expressed in a question (e.g., "Why did the Great East Japan Earthquake happen?"), we might be able to identify its implicitly expressed causes with a set of words (e.g., "tectonic plates") that appear in the explicitly expressed cause of a similar event (e.g., "the 1906 San Francisco Earthquake"). We implemented these two ideas in our multi-column convolutional neural networks with a novel attention mechanism, which we call causality attention. Through experiments on Japanese why-QA, we confirmed that our proposed method outperformed state-of-the-art systems.
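The abstract specifies the model only at a high level: per-input convolutional "columns" whose word representations are re-weighted by causality attention before convolution and pooling. The PyTorch sketch below illustrates that shape under explicit assumptions. The class names, the use of a learnable per-word causality score (the paper instead derives attention from explicit causality expressions recognized in a text archive), and the two-column question/answer layout with a binary correct/incorrect output are all illustrative choices, not the authors' exact architecture.

```python
import torch
import torch.nn as nn


class CausalityAttentionCNN(nn.Module):
    """One convolutional column whose word embeddings are re-weighted by
    causality-attention scores before convolution and max-pooling."""

    def __init__(self, vocab_size, emb_dim=300, n_filters=100, kernel=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # Hypothetical stand-in: a learnable per-word causality score. In the
        # paper, attention weights come from causality expressions recognized
        # in a large text archive, not from end-to-end learning alone.
        self.causality_score = nn.Embedding(vocab_size, 1)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel, padding=kernel // 2)

    def forward(self, tokens):                        # tokens: (batch, seq)
        x = self.emb(tokens)                          # (batch, seq, emb_dim)
        attn = torch.softmax(                         # normalize over sequence
            self.causality_score(tokens).squeeze(-1), dim=1)
        x = x * attn.unsqueeze(-1)                    # re-weight word vectors
        h = torch.relu(self.conv(x.transpose(1, 2)))  # (batch, n_filters, seq)
        return h.max(dim=2).values                    # max-pool over time


class MultiColumnWhyQA(nn.Module):
    """Separate columns encode the question and a candidate answer passage;
    the pooled features are concatenated and scored as correct/incorrect."""

    def __init__(self, vocab_size, n_filters=100):
        super().__init__()
        self.q_col = CausalityAttentionCNN(vocab_size, n_filters=n_filters)
        self.a_col = CausalityAttentionCNN(vocab_size, n_filters=n_filters)
        self.out = nn.Linear(2 * n_filters, 2)

    def forward(self, q_tokens, a_tokens):
        feats = torch.cat([self.q_col(q_tokens), self.a_col(a_tokens)], dim=1)
        return self.out(feats)                        # logits: (batch, 2)


if __name__ == "__main__":
    model = MultiColumnWhyQA(vocab_size=50_000)
    q = torch.randint(0, 50_000, (8, 20))    # 8 questions, 20 tokens each
    a = torch.randint(0, 50_000, (8, 200))   # 8 candidate passages, 200 tokens
    print(model(q, a).shape)                 # torch.Size([8, 2])
```

Ranking candidate passages by the softmax probability of the "correct" class would then reproduce the retrieval-style setting the abstract describes.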