Authors: X. Cao, Chenghao Zhu, Chengguo Lv
DOI: 10.1109/AEMCSE50948.2020.00126
Published in: 2020 3rd International Conference on Advanced Electronic Materials, Computers and Software Engineering (AEMCSE)
Publication date: 2020-04-01
Chinese Explanatory Opinion Relationship Recognition Based on Improved Target Attention Mechanism
Opinion relationship recognition is an important part of the opinion mining task. Its main purpose is to extract opinion element tuples from user comment data and to identify the relationships between them, such as evaluation object, evaluation content, opinion explanation, and opinion object. Because online comments are characterized by randomness, diverse opinions, and varied formats, opinion mining becomes more difficult. Extracting the interrelationships between the various explanatory opinion elements not only makes subsequent tasks easier but also allows the extracted results to be applied to other related tasks. For example, applying the opinion seven-tuple from the opinion extraction task to text summary generation can greatly improve the effectiveness of that task. In this paper, we improve on the traditional LSTM-Attention model and propose an opinion relationship recognition framework based on an improved Target Attention Mechanism. We conducted experiments in two different domains, and the experimental results show that performance is effectively improved in both. We also explored two different pre-training strategies, Word2vec and ELMo, to further analyze the impact of pre-training on this task.
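The abstract does not give the exact formulation of the improved Target Attention Mechanism, but the general idea behind target attention is to score each encoder hidden state against a representation of the target (e.g. the evaluation object) and take a softmax-weighted sum. The sketch below illustrates that generic idea only; the function name, dot-product scoring, and dimensions are illustrative assumptions, not the authors' model.

```python
import numpy as np

def target_attention(hidden_states: np.ndarray, target_vec: np.ndarray):
    """Generic target-attention sketch (assumed dot-product scoring).

    hidden_states: (seq_len, d) - e.g. LSTM outputs for each token
    target_vec:    (d,)         - representation of the target element
    Returns the attention-weighted context vector and the weights.
    """
    # Score each hidden state against the target (simple dot product here).
    scores = hidden_states @ target_vec
    # Numerically stable softmax over the sequence dimension.
    scores = scores - scores.max()
    weights = np.exp(scores) / np.exp(scores).sum()
    # Context vector: attention-weighted sum of hidden states.
    context = weights @ hidden_states
    return context, weights
```

In a full model, `context` would typically be concatenated with other features and fed to a classifier that predicts the relationship label between the opinion elements.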