GASCOM: Graph-based Attentive Semantic Context Modeling for Online Conversation Understanding

Authors: Vibhor Agarwal, Yu Chen, Nishanth Sastry
Journal: Online Social Networks and Media (Q1, Social Sciences)
DOI: 10.1016/j.osnem.2024.100290
Published: 2024-10-16
URL: https://www.sciencedirect.com/science/article/pii/S2468696424000156
Citations: 0

Abstract

Online conversation understanding is an important yet challenging NLP problem which has many useful applications (e.g., hate speech detection). However, online conversations typically unfold over a series of posts and replies to those posts, forming a tree structure within which individual posts may refer to semantic context from elsewhere in the tree. Such semantic cross-referencing makes it difficult to understand a single post by itself; yet considering the entire conversation tree is not only difficult to scale but can also be misleading, as a single conversation may have several distinct threads or points, not all of which are relevant to the post being considered. In this paper, we propose a Graph-based Attentive Semantic COntext Modeling (GASCOM) framework for online conversation understanding. Specifically, we design two novel algorithms that utilize both the graph structure of the online conversation and the semantic information from individual posts to retrieve relevant context nodes from the whole conversation. We further design a token-level multi-head graph attention mechanism to pay different levels of attention to tokens from the selected context utterances for fine-grained conversation context modeling. Using this semantic conversational context, we re-examine two well-studied problems: polarity prediction and hate speech detection. Our proposed framework significantly outperforms state-of-the-art methods on both tasks, improving macro-F1 scores by 4.5% for polarity prediction and by 5% for hate speech detection. The GASCOM context weights also enhance interpretability.
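For readers who want a concrete picture of the two components the abstract mentions, the sketch below is a minimal, illustrative PyTorch rendering of (1) retrieving context nodes from a conversation tree by combining semantic similarity with graph distance, and (2) token-level multi-head attention from the target post over tokens of the retrieved context. It is not the authors' implementation: the function and class names, the similarity-minus-hop-distance scoring rule, and all parameter values are assumptions made purely for illustration.

```python
# Minimal, illustrative sketch (not the authors' code) of context retrieval
# plus token-level multi-head attention over the retrieved context.
import torch
import torch.nn as nn
import torch.nn.functional as F


def retrieve_context_nodes(target_emb, node_embs, hop_dists, top_k=3, alpha=0.5):
    """Score each conversation node by semantic similarity to the target post,
    discounted by its hop distance in the reply tree, and keep the top-k.

    target_emb: (d,)   embedding of the post being classified
    node_embs:  (n, d) embeddings of the other posts in the conversation
    hop_dists:  (n,)   tree distance of each post from the target
    """
    sim = F.cosine_similarity(node_embs, target_emb.unsqueeze(0), dim=-1)
    score = sim - alpha * hop_dists.float()  # assumed combination rule
    return torch.topk(score, k=min(top_k, node_embs.size(0))).indices


class TokenLevelGraphAttention(nn.Module):
    """Multi-head attention where target-post tokens attend to every token of
    the selected context posts, producing per-token context weights."""

    def __init__(self, dim=768, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, target_tokens, context_tokens):
        # target_tokens:  (1, t, dim) token embeddings of the target post
        # context_tokens: (1, c, dim) concatenated tokens of retrieved context
        fused, weights = self.attn(target_tokens, context_tokens, context_tokens)
        return fused, weights  # weights give interpretable per-token context importance
```

The returned attention weights play the role of the "context weights" the abstract credits with interpretability: each target-post token exposes how strongly it drew on each context token when forming the fused representation.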
Source Journal

Online Social Networks and Media (Social Sciences – Communication)
CiteScore: 10.60
Self-citation rate: 0.00%
Articles published: 32
Review time: 44 days