{"title":"基于树- lstm的多头自注意情感分析模型","authors":"Lei Li, Yijian Pei, Chenyang Jin","doi":"10.1117/12.2604779","DOIUrl":null,"url":null,"abstract":"In the natural language processing task.We need to extract information from the tree topology. Sentence structure can be achieved by the dependency tree or constituency tree structure to represent.The LSTM can handle sequential information (equivalent to a sequential list), but not tree-structured data.Multi-headed self-attention is used in this model. The main purpose of this model is to reduce the computation and improve the parallel efficiency without damaging the effect of the model.Eliminates the CNN and RNN respectively corresponding to the large amount of calculation, parameter and unable to the disadvantage of parallel computing,keep parallel computing and long distance information.The model combines multi-headed self-attention and tree-LSTM, and uses maxout neurons in the output position.The accuracy of the model on SST was 89%.","PeriodicalId":90079,"journal":{"name":"... International Workshop on Pattern Recognition in NeuroImaging. International Workshop on Pattern Recognition in NeuroImaging","volume":"1 1","pages":"119130C - 119130C-7"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The sentiment analysis model with multi-head self-attention and Tree-LSTM\",\"authors\":\"Lei Li, Yijian Pei, Chenyang Jin\",\"doi\":\"10.1117/12.2604779\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the natural language processing task.We need to extract information from the tree topology. Sentence structure can be achieved by the dependency tree or constituency tree structure to represent.The LSTM can handle sequential information (equivalent to a sequential list), but not tree-structured data.Multi-headed self-attention is used in this model. The main purpose of this model is to reduce the computation and improve the parallel efficiency without damaging the effect of the model.Eliminates the CNN and RNN respectively corresponding to the large amount of calculation, parameter and unable to the disadvantage of parallel computing,keep parallel computing and long distance information.The model combines multi-headed self-attention and tree-LSTM, and uses maxout neurons in the output position.The accuracy of the model on SST was 89%.\",\"PeriodicalId\":90079,\"journal\":{\"name\":\"... International Workshop on Pattern Recognition in NeuroImaging. International Workshop on Pattern Recognition in NeuroImaging\",\"volume\":\"1 1\",\"pages\":\"119130C - 119130C-7\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-08-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"... International Workshop on Pattern Recognition in NeuroImaging. International Workshop on Pattern Recognition in NeuroImaging\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1117/12.2604779\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"... International Workshop on Pattern Recognition in NeuroImaging. 
International Workshop on Pattern Recognition in NeuroImaging","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2604779","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The sentiment analysis model with multi-head self-attention and Tree-LSTM
In natural language processing tasks, we often need to extract information from tree topologies: sentence structure can be represented by a dependency tree or a constituency tree. The LSTM can handle sequential information (equivalent to a linear list), but not tree-structured data. This model uses multi-head self-attention, whose main purpose is to reduce computation and improve parallel efficiency without degrading model performance: it avoids the heavy computation and parameter cost of CNNs and the inability of RNNs to parallelize, while retaining parallel computation and long-distance dependencies. The model combines multi-head self-attention with Tree-LSTM and uses maxout neurons at the output layer. The model achieves 89% accuracy on SST.
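To make the described architecture concrete, below is a minimal PyTorch sketch of one plausible wiring: multi-head self-attention contextualizes the token embeddings in parallel, a child-sum Tree-LSTM cell (the Tai et al., 2015 variant) composes them bottom-up along the parse tree, and a maxout layer scores the root state. All class names, parameter names, and dimension choices here are hypothetical illustrations; the paper's exact combination of the three components may differ.

```python
# A sketch under stated assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class ChildSumTreeLSTMCell(nn.Module):
    """Composes a node state from the summed hidden states of its children."""

    def __init__(self, in_dim, mem_dim):
        super().__init__()
        self.ioux = nn.Linear(in_dim, 3 * mem_dim)   # input/output/update gates from x
        self.iouh = nn.Linear(mem_dim, 3 * mem_dim)  # ... and from the child-sum h
        self.fx = nn.Linear(in_dim, mem_dim)         # forget gate contribution from x
        self.fh = nn.Linear(mem_dim, mem_dim)        # one forget gate per child

    def forward(self, x, child_h, child_c):
        # x: (in_dim,); child_h, child_c: (num_children, mem_dim)
        h_sum = child_h.sum(dim=0)
        i, o, u = torch.chunk(self.ioux(x) + self.iouh(h_sum), 3, dim=-1)
        i, o, u = torch.sigmoid(i), torch.sigmoid(o), torch.tanh(u)
        f = torch.sigmoid(self.fx(x) + self.fh(child_h))  # (num_children, mem_dim)
        c = i * u + (f * child_c).sum(dim=0)
        h = o * torch.tanh(c)
        return h, c


class MaxoutClassifier(nn.Module):
    """Maxout output layer: k linear pieces per class, take the max."""

    def __init__(self, in_dim, num_classes, k=2):
        super().__init__()
        self.k = k
        self.linear = nn.Linear(in_dim, num_classes * k)

    def forward(self, h):
        out = self.linear(h).view(-1, self.k)  # (num_classes, k)
        return out.max(dim=-1).values          # (num_classes,)


class AttnTreeLSTMSentiment(nn.Module):
    """Self-attention over tokens, Tree-LSTM composition, maxout at the root."""

    def __init__(self, vocab_size, emb_dim=300, mem_dim=150,
                 num_heads=6, num_classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.attn = nn.MultiheadAttention(emb_dim, num_heads, batch_first=True)
        self.cell = ChildSumTreeLSTMCell(emb_dim, mem_dim)
        self.out = MaxoutClassifier(mem_dim, num_classes)

    def node_forward(self, node, x):
        # node: object with .children (list) and .idx (token position, or None
        # for internal constituency nodes that carry no token themselves)
        if node.children:
            states = [self.node_forward(ch, x) for ch in node.children]
            child_h = torch.stack([h for h, _ in states])
            child_c = torch.stack([c for _, c in states])
        else:  # leaf: a single zero "child" state
            child_h = x.new_zeros(1, self.cell.fh.in_features)
            child_c = x.new_zeros(1, self.cell.fh.in_features)
        inp = x[node.idx] if node.idx is not None else x.new_zeros(x.size(-1))
        return self.cell(inp, child_h, child_c)

    def forward(self, tokens, tree):
        # tokens: (seq_len,) token ids; tree: root node of the parse tree
        e = self.embed(tokens).unsqueeze(0)  # (1, seq_len, emb_dim)
        ctx, _ = self.attn(e, e, e)          # contextualize all tokens in parallel
        root_h, _ = self.node_forward(tree, ctx.squeeze(0))
        return self.out(root_h)              # sentiment class scores
```

The self-attention stage runs over the whole sentence at once, which is where the claimed parallel-efficiency gain over recurrent encoders would come from; only the tree composition remains sequential in tree depth.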