{"title":"对话摘要的话语关系感知模型:一个对话摘要的组合模型","authors":"Huichao Li","doi":"10.1145/3523286.3524577","DOIUrl":null,"url":null,"abstract":"Dialogue summarization, which aims to make summarization on the given dialogue automatically, is a challenging task in natural language processing. Compared to text summarization, it needs to consider the interaction between speakers and the colloquialization of expression to generate better summarization. Many existing methods treat dialogue as a plain sequence of text and simply ignore the structural information, which could be important for summarization generation of the dialogue. In this paper, to alleviate the above challenges, we propose a deep-learning-based method that combines a serialized model and a graph model. More specifically, we utilize a Sequence to Sequence (Seq2Seq) as the backbone to cope with the informal text and a Graph Neural Network (GNN) to take advantage of the structural information. The two models are combined by sharing the key information with each other. Besides, special attention is drawn to the specific speaker in our proposed method. Extensive experiments have shown the effectiveness of our proposed method.","PeriodicalId":268165,"journal":{"name":"2022 2nd International Conference on Bioinformatics and Intelligent Computing","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"URAMDS: Utterances Relation Aware Model for Dialogue Summarization: A Combined Model for Dialogue Summarization\",\"authors\":\"Huichao Li\",\"doi\":\"10.1145/3523286.3524577\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Dialogue summarization, which aims to make summarization on the given dialogue automatically, is a challenging task in natural language processing. Compared to text summarization, it needs to consider the interaction between speakers and the colloquialization of expression to generate better summarization. Many existing methods treat dialogue as a plain sequence of text and simply ignore the structural information, which could be important for summarization generation of the dialogue. In this paper, to alleviate the above challenges, we propose a deep-learning-based method that combines a serialized model and a graph model. More specifically, we utilize a Sequence to Sequence (Seq2Seq) as the backbone to cope with the informal text and a Graph Neural Network (GNN) to take advantage of the structural information. The two models are combined by sharing the key information with each other. Besides, special attention is drawn to the specific speaker in our proposed method. 
Extensive experiments have shown the effectiveness of our proposed method.\",\"PeriodicalId\":268165,\"journal\":{\"name\":\"2022 2nd International Conference on Bioinformatics and Intelligent Computing\",\"volume\":\"10 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-01-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 2nd International Conference on Bioinformatics and Intelligent Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3523286.3524577\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 2nd International Conference on Bioinformatics and Intelligent Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3523286.3524577","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
URAMDS: Utterances Relation Aware Model for Dialogue Summarization: A Combined Model for Dialogue Summarization
Dialogue summarization, which aims to automatically generate a summary of a given dialogue, is a challenging task in natural language processing. Compared with document summarization, it must account for the interaction between speakers and the colloquial nature of the language in order to produce a good summary. Many existing methods treat a dialogue as a plain sequence of text and ignore its structural information, which can be important for generating the summary. In this paper, to address these challenges, we propose a deep-learning-based method that combines a sequential model and a graph model. Specifically, we use a Sequence-to-Sequence (Seq2Seq) model as the backbone to handle the informal text and a Graph Neural Network (GNN) to exploit the structural information. The two models are combined by sharing key information with each other. In addition, our method pays special attention to specific speakers. Extensive experiments demonstrate the effectiveness of the proposed method.
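
The abstract does not give implementation details, so the following is only a minimal sketch of how a Seq2Seq backbone and a GNN over an utterance-relation graph could be combined by sharing encoder information. The layer choices, the attention-based fusion, and all names (CombinedDialogueSummarizer, UtteranceGraphLayer, etc.) are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: assumes a GRU-based Seq2Seq backbone, one message-passing layer
# over an utterance graph, and fusion via decoder attention. All dimensions,
# layer types, and names are hypothetical; the paper does not specify them.
import torch
import torch.nn as nn


class UtteranceGraphLayer(nn.Module):
    """One round of message passing over the utterance-relation graph."""

    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.update = nn.GRUCell(dim, dim)

    def forward(self, utt_states, adj):
        # utt_states: (num_utterances, dim); adj: (num_utterances, num_utterances)
        messages = adj @ self.msg(utt_states)        # aggregate neighbor information
        return self.update(messages, utt_states)     # update each utterance node


class CombinedDialogueSummarizer(nn.Module):
    """Seq2Seq backbone whose decoder also attends to GNN utterance states."""

    def __init__(self, vocab_size, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)   # sequential view
        self.gnn = UtteranceGraphLayer(dim)                  # structural view
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, dialogue_tokens, utt_boundaries, adj, summary_tokens):
        # dialogue_tokens: (1, seq_len); summary_tokens: (1, tgt_len)
        # utt_boundaries: index of the last token of each utterance
        enc_states, enc_last = self.encoder(self.embed(dialogue_tokens))
        # Utterance nodes are initialized from the encoder (the "shared key information").
        utt_init = enc_states[0, utt_boundaries]              # (num_utt, dim)
        utt_states = self.gnn(utt_init, adj).unsqueeze(0)     # (1, num_utt, dim)
        # The decoder attends to structure-aware utterance states at every step.
        dec_states, _ = self.decoder(self.embed(summary_tokens), enc_last)
        ctx, _ = self.attn(dec_states, utt_states, utt_states)
        return self.out(dec_states + ctx)                     # token logits


if __name__ == "__main__":
    model = CombinedDialogueSummarizer(vocab_size=1000)
    tokens = torch.randint(0, 1000, (1, 30))
    boundaries = torch.tensor([9, 19, 29])                    # three utterances
    adj = torch.ones(3, 3) / 3                                # fully connected, normalized
    summary = torch.randint(0, 1000, (1, 8))
    print(model(tokens, boundaries, adj, summary).shape)      # torch.Size([1, 8, 1000])
```

In this sketch the "combination" is realized by initializing the graph nodes from the sequential encoder and letting the decoder attend over the structure-aware utterance states; the speaker-specific attention mentioned in the abstract could, for example, be approximated by adding speaker embeddings to the utterance nodes, which the sketch omits.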