You Zhang;Jin Wang;Liang-Chih Yu;Dan Xu;Xuejie Zhang
DOI: 10.1109/TETCI.2024.3369323
Journal: IEEE Transactions on Emerging Topics in Computational Intelligence (JCR Q1, Computer Science, Artificial Intelligence; Impact Factor 5.3)
Published: 2024-03-18 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10466778/
Attribute-Based Injection Transformer for Personalized Sentiment Analysis
Personal attributes have proven useful for sentiment analysis. However, previous models for learning attribute-specific language representations are suboptimal because they adopt only context-wise or content-wise injection. This study proposes a transformer structure that combines both context- and content-wise injections, built on a well-pretrained transformer encoder. For context-wise injection, self-interactive attention is implemented by incorporating personal attributes into multi-head attention. From the content-wise perspective, an attribute-based layer normalization is used to align the text representation with personal attributes. In particular, the proposed transformer layer is a universal layer compatible with the original Google Transformer layer; instead of being trained from scratch, it can be initialized from a well-pretrained checkpoint for downstream tasks. Extensive experiments were conducted on three document-level sentiment analysis benchmarks: IMDB, Yelp-2013, and Yelp-2014. The results show that the proposed method outperforms previous methods for personalized sentiment analysis, demonstrating that combining context- and content-wise injections facilitates model learning of attribute-specific language representations.
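The "content-wise" injection described in the abstract can be illustrated with a minimal sketch: a layer normalization whose gain and bias are produced from a personal-attribute embedding rather than being fixed learned vectors. The function name, shapes, and linear projections below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def attribute_layer_norm(hidden, attr_emb, W_gain, W_bias, eps=1e-5):
    """Hypothetical attribute-based layer normalization sketch.

    hidden:  (seq_len, d)  token representations
    attr_emb: (a,)         personal-attribute embedding (e.g., user/product)
    W_gain, W_bias: (a, d) assumed projections mapping attributes to
                           a per-dimension scale and shift
    """
    # Standard LayerNorm statistics over the feature dimension
    mean = hidden.mean(axis=-1, keepdims=True)
    var = hidden.var(axis=-1, keepdims=True)
    normed = (hidden - mean) / np.sqrt(var + eps)

    # Attribute-conditioned gain and bias align the normalized text
    # representation with the personal attributes
    gain = attr_emb @ W_gain  # (d,)
    bias = attr_emb @ W_bias  # (d,)
    return normed * gain + bias

rng = np.random.default_rng(0)
seq_len, d, a = 3, 8, 4
out = attribute_layer_norm(rng.normal(size=(seq_len, d)),
                           rng.normal(size=a),
                           rng.normal(size=(a, d)),
                           rng.normal(size=(a, d)))
print(out.shape)  # (3, 8)
```

When the attribute projections produce an all-ones gain and zero bias, this reduces to ordinary layer normalization, which is consistent with the abstract's claim that the proposed layer remains compatible with a standard pretrained Transformer layer.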
Journal introduction:
The IEEE Transactions on Emerging Topics in Computational Intelligence (TETCI) publishes original articles on emerging aspects of computational intelligence, including theory, applications, and surveys.
TETCI is an electronic-only publication and publishes six issues per year.
Authors are encouraged to submit manuscripts on any emerging topic in computational intelligence, especially nature-inspired computing topics not covered by other IEEE Computational Intelligence Society journals. Illustrative examples include glial cell networks, computational neuroscience, brain-computer interfaces, ambient intelligence, non-fuzzy computing with words, artificial life, cultural learning, artificial endocrine networks, social reasoning, artificial hormone networks, and computational intelligence for IoT and Smart-X technologies.