{"title":"GASCOM:基于图的细心语义上下文建模用于在线对话理解","authors":"Vibhor Agarwal , Yu Chen , Nishanth Sastry","doi":"10.1016/j.osnem.2024.100290","DOIUrl":null,"url":null,"abstract":"<div><div>Online conversation understanding is an important yet challenging NLP problem which has many useful applications (e.g., hate speech detection). However, online conversations typically unfold over a series of posts and replies to those posts, forming a tree structure within which individual posts may refer to semantic context from elsewhere in the tree. Such semantic cross-referencing makes it difficult to understand a single post by itself; yet considering the entire conversation tree is not only difficult to scale but can also be misleading as a single conversation may have several distinct threads or points, not all of which are relevant to the post being considered. In this paper, we propose a <strong>G</strong>raph-based <strong>A</strong>ttentive <strong>S</strong>emantic <strong>CO</strong>ntext <strong>M</strong>odeling (GASCOM) framework for online conversation understanding. Specifically, we design two novel algorithms that utilize both the graph structure of the online conversation as well as the semantic information from individual posts for retrieving relevant context nodes from the whole conversation. We further design a <em>token-level</em> multi-head graph attention mechanism to pay different attentions to different tokens from different selected context utterances for fine-grained conversation context modelling. Using this semantic conversational context, we re-examine two well-studied problems: polarity prediction and hate speech detection. Our proposed framework significantly outperforms state-of-the-art methods on both tasks, improving macro-F1 scores by 4.5% for polarity prediction and by 5% for hate speech detection. 
The GASCOM context weights also enhance interpretability.</div></div>","PeriodicalId":52228,"journal":{"name":"Online Social Networks and Media","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"GASCOM: Graph-based Attentive Semantic Context Modeling for Online Conversation Understanding\",\"authors\":\"Vibhor Agarwal , Yu Chen , Nishanth Sastry\",\"doi\":\"10.1016/j.osnem.2024.100290\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Online conversation understanding is an important yet challenging NLP problem which has many useful applications (e.g., hate speech detection). However, online conversations typically unfold over a series of posts and replies to those posts, forming a tree structure within which individual posts may refer to semantic context from elsewhere in the tree. Such semantic cross-referencing makes it difficult to understand a single post by itself; yet considering the entire conversation tree is not only difficult to scale but can also be misleading as a single conversation may have several distinct threads or points, not all of which are relevant to the post being considered. In this paper, we propose a <strong>G</strong>raph-based <strong>A</strong>ttentive <strong>S</strong>emantic <strong>CO</strong>ntext <strong>M</strong>odeling (GASCOM) framework for online conversation understanding. Specifically, we design two novel algorithms that utilize both the graph structure of the online conversation as well as the semantic information from individual posts for retrieving relevant context nodes from the whole conversation. We further design a <em>token-level</em> multi-head graph attention mechanism to pay different attentions to different tokens from different selected context utterances for fine-grained conversation context modelling. 
Using this semantic conversational context, we re-examine two well-studied problems: polarity prediction and hate speech detection. Our proposed framework significantly outperforms state-of-the-art methods on both tasks, improving macro-F1 scores by 4.5% for polarity prediction and by 5% for hate speech detection. The GASCOM context weights also enhance interpretability.</div></div>\",\"PeriodicalId\":52228,\"journal\":{\"name\":\"Online Social Networks and Media\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-10-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Online Social Networks and Media\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2468696424000156\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Social Sciences\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Online Social Networks and Media","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2468696424000156","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Social Sciences","Score":null,"Total":0}
引用次数: 0
摘要
在线对话理解是一个重要而又具有挑战性的 NLP 问题,它有许多有用的应用(如仇恨言论检测)。然而,在线会话通常由一系列帖子和对这些帖子的回复展开,形成一个树状结构,其中单个帖子可能会引用树状结构中其他地方的语义上下文。这种语义交叉引用使得理解单个帖子本身变得困难;然而,考虑整个对话树不仅难以扩展,而且还可能产生误导,因为单个对话可能有多个不同的线程或要点,但并非所有线程或要点都与所考虑的帖子相关。在本文中,我们为在线对话理解提出了一个基于图形的语义建模(GASCOM)框架。具体来说,我们设计了两种新颖的算法,既利用在线会话的图结构,又利用单个帖子的语义信息,从整个会话中检索相关的上下文节点。我们进一步设计了一种标记级多头图关注机制,对不同选定语境语篇中的不同标记给予不同的关注,从而建立细粒度的会话语境模型。利用这种语义对话上下文,我们重新研究了两个经过充分研究的问题:极性预测和仇恨言论检测。在这两项任务中,我们提出的框架明显优于最先进的方法,在极性预测和仇恨言论检测中,宏 F1 分数分别提高了 4.5% 和 5%。GASCOM 上下文权重还增强了可解释性。
Online conversation understanding is an important yet challenging NLP problem with many useful applications (e.g., hate speech detection). However, online conversations typically unfold over a series of posts and replies to those posts, forming a tree structure in which individual posts may refer to semantic context from elsewhere in the tree. Such semantic cross-referencing makes it difficult to understand a single post by itself; yet considering the entire conversation tree is not only difficult to scale but can also be misleading, as a single conversation may have several distinct threads or points, not all of which are relevant to the post being considered. In this paper, we propose a Graph-based Attentive Semantic COntext Modeling (GASCOM) framework for online conversation understanding. Specifically, we design two novel algorithms that use both the graph structure of the online conversation and the semantic information in individual posts to retrieve relevant context nodes from the whole conversation. We further design a token-level multi-head graph attention mechanism that assigns different attention weights to tokens from the selected context utterances, enabling fine-grained modelling of the conversation context. Using this semantic conversational context, we re-examine two well-studied problems: polarity prediction and hate speech detection. Our proposed framework significantly outperforms state-of-the-art methods on both tasks, improving macro-F1 scores by 4.5% for polarity prediction and by 5% for hate speech detection. The GASCOM attention weights over context also enhance interpretability.
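To make the token-level multi-head attention idea concrete, here is a minimal numpy sketch. It is not the paper's implementation: the projection matrices are random for illustration (in GASCOM they would be learned), and the shapes, head count, and function name are assumptions. The sketch shows the core mechanism the abstract describes — a target post's embedding attends over token embeddings drawn from the retrieved context utterances, and the per-token weights are what would support interpretability.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def token_level_multihead_attention(query, context_tokens, num_heads=4, seed=0):
    """Attend from a target-post embedding (query) over token embeddings
    gathered from the selected context utterances.

    query:          (d,)   embedding of the post being classified
    context_tokens: (t, d) token embeddings from the retrieved context nodes
    Returns the pooled context vector (d,) and per-head token weights (h, t).
    """
    d = query.shape[0]
    assert d % num_heads == 0
    dh = d // num_heads
    rng = np.random.default_rng(seed)
    # Illustrative random projections; in the paper these would be learned.
    Wq = rng.standard_normal((num_heads, d, dh)) / np.sqrt(d)
    Wk = rng.standard_normal((num_heads, d, dh)) / np.sqrt(d)
    Wv = rng.standard_normal((num_heads, d, dh)) / np.sqrt(d)

    head_outputs, head_weights = [], []
    for h in range(num_heads):
        q = query @ Wq[h]                # (dh,)
        k = context_tokens @ Wk[h]       # (t, dh)
        v = context_tokens @ Wv[h]       # (t, dh)
        scores = k @ q / np.sqrt(dh)     # (t,) scaled dot-product scores
        w = softmax(scores)              # attention over context tokens
        head_outputs.append(w @ v)       # (dh,) weighted sum of values
        head_weights.append(w)
    return np.concatenate(head_outputs), np.stack(head_weights)

# Toy usage: 10 context tokens, 64-dim embeddings.
context = np.random.default_rng(1).standard_normal((10, 64))
post = np.random.default_rng(2).standard_normal(64)
ctx_vec, weights = token_level_multihead_attention(post, context)
```

Each row of `weights` sums to 1, so a head's row can be read directly as a distribution over context tokens — this is the kind of attention-weight inspection the abstract's interpretability claim refers to.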