An Enhanced Key-utterance Interactive Model with Decoupled Auxiliary Tasks for Multi-party Dialogue Reading Comprehension

Xingyu Zhu, Jin Wang, Xuejie Zhang
{"title":"An Enhanced Key-utterance Interactive Model with Decouped Auxiliary Tasks for Multi-party Dialogue Reading Comprehension","authors":"Xingyu Zhu, Jin Wang, Xuejie Zhang","doi":"10.1109/IJCNN55064.2022.9892162","DOIUrl":null,"url":null,"abstract":"Multi-party dialogue machine reading comprehension (MRC) is more challenging than plain text MRC because it involves multiple speakers, more complex information flow interaction, and discourse structure. Previously most researchers focus on decoupling the speaker-aware and utterance-aware information to overcome such difficulties. Based on this, the self- and pseudo-self-supervised prediction auxiliary tasks on speakers and key-utterance are proposed. However, the information interaction among key-utterance, question, and dialogue context was ignored in these works, and there should also be a constraint between the two additional tasks. Herein, we proposed an enhanced key-utterance interaction model. It takes the key-utterance predicted by auxiliary task as prior information. Moreover, the co-attention mechanism is used to capture the critical information interaction among dialogue contexts, question, and key-utterance from the two perspectives of question-to-dialogue and dialogue-to-question, respectively. In addition, we introduced minimizing mutual information (MI) between the two auxiliary tasks to prevent mutual interference and overlap of information. Experimental results show that the proposed model achieves significant improvements than the dialogue MRC baseline models in Molweni and FriendsQA datasets.","PeriodicalId":106974,"journal":{"name":"2022 International Joint Conference on Neural Networks (IJCNN)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Joint Conference on Neural Networks (IJCNN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN55064.2022.9892162","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Multi-party dialogue machine reading comprehension (MRC) is more challenging than plain-text MRC because it involves multiple speakers, more complex information flow, and richer discourse structure. Previously, most researchers focused on decoupling speaker-aware and utterance-aware information to overcome these difficulties. Building on this, self- and pseudo-self-supervised prediction auxiliary tasks on speakers and key-utterances have been proposed. However, these works ignored the information interaction among the key-utterance, the question, and the dialogue context, and there should also be a constraint between the two auxiliary tasks. Herein, we propose an enhanced key-utterance interaction model that takes the key-utterance predicted by the auxiliary task as prior information. A co-attention mechanism is used to capture the critical information interaction among the dialogue context, the question, and the key-utterance from the two perspectives of question-to-dialogue and dialogue-to-question. In addition, we minimize the mutual information (MI) between the two auxiliary tasks to prevent mutual interference and overlap of information. Experimental results show that the proposed model achieves significant improvements over dialogue MRC baseline models on the Molweni and FriendsQA datasets.
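The abstract does not give the exact formulation of the co-attention step, but a generic bidirectional co-attention layer of the kind it describes (question-to-dialogue and dialogue-to-question) can be sketched as follows. The tensor names, shapes, and PyTorch framing are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of bidirectional co-attention between a dialogue (or
    # key-utterance) representation and a question representation.
    # Shapes are assumed: dialogue (batch, n, d), question (batch, m, d).
    import torch
    import torch.nn.functional as F

    def co_attention(dialogue, question):
        # Affinity between every dialogue token and every question token.
        affinity = torch.bmm(dialogue, question.transpose(1, 2))   # (batch, n, m)

        # Dialogue-to-question: each dialogue token attends over question tokens.
        d2q = torch.bmm(F.softmax(affinity, dim=-1), question)     # (batch, n, d)

        # Question-to-dialogue: each question token attends over dialogue tokens.
        q2d = torch.bmm(F.softmax(affinity, dim=1).transpose(1, 2), dialogue)  # (batch, m, d)

        return d2q, q2d

The two outputs would then be fused with the original representations (e.g., by concatenation and a projection) before the answer-span prediction layer; the fusion choice is likewise not specified in the abstract.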
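Similarly, the abstract only states that the MI between the two auxiliary-task representations is minimized, without naming an estimator. The sketch below uses a vCLUB-style variational upper bound, which is one common choice for such a regularizer; the estimator, network sizes, and loss weighting are assumptions, not the paper's method.

    # Sketch of a CLUB-like MI upper-bound estimator used as a regularizer
    # between two task representations x and y of the same dimension `dim`.
    import torch
    import torch.nn as nn

    class CLUBEstimator(nn.Module):
        """Approximates an upper bound on I(x; y) with a learned Gaussian q(y|x)."""

        def __init__(self, dim):
            super().__init__()
            self.mu = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            self.logvar = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

        def forward(self, x, y):
            mu, logvar = self.mu(x), self.logvar(x)
            # log q(y | x) for the paired (positive) samples, up to constants.
            positive = -((y - mu) ** 2) / logvar.exp()
            # log q(y' | x) with y' drawn from the marginal (shuffled batch).
            perm = torch.randperm(y.size(0), device=y.device)
            negative = -((y[perm] - mu) ** 2) / logvar.exp()
            # Upper-bound estimate of I(x; y), up to a constant factor; add it
            # to the training loss with a small weight to push the two
            # auxiliary-task representations apart.
            return (positive - negative).sum(dim=-1).mean()

In practice the q(y|x) network is fitted to the paired samples with a separate likelihood-maximization step while the main model minimizes the bound; that training schedule is also an assumption here.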