{"title":"基于语料库的最大熵多标签情感分类","authors":"Ye Wu, F. Ren","doi":"10.5220/0002169901030110","DOIUrl":null,"url":null,"abstract":"Thanks to the Internet, blog posts online have emerged as a new grassroots medium, which create a huge resource of text-based emotion. Comparing to other ideal experimental settings, what we obtained from the World Wide Web evolve and respond more to real-world events. In this paper, our corpus consists of a collection of blog posts, which annotated as multi-label to make the classification of emotion more precise than the single-label set of basic emotions. Employing a maximum entropy classifier, our results show that the emotions can be clearly separated by the proposed method. Additionally, we show that the microaverage F1-scores of multi-label detection increase when the mount of available training data further increases.","PeriodicalId":378427,"journal":{"name":"International Workshop on Natural Language Processing and Cognitive Science","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"A Corpus-based Multi-label Emotion Classification using Maximum Entropy\",\"authors\":\"Ye Wu, F. Ren\",\"doi\":\"10.5220/0002169901030110\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Thanks to the Internet, blog posts online have emerged as a new grassroots medium, which create a huge resource of text-based emotion. Comparing to other ideal experimental settings, what we obtained from the World Wide Web evolve and respond more to real-world events. In this paper, our corpus consists of a collection of blog posts, which annotated as multi-label to make the classification of emotion more precise than the single-label set of basic emotions. Employing a maximum entropy classifier, our results show that the emotions can be clearly separated by the proposed method. Additionally, we show that the microaverage F1-scores of multi-label detection increase when the mount of available training data further increases.\",\"PeriodicalId\":378427,\"journal\":{\"name\":\"International Workshop on Natural Language Processing and Cognitive Science\",\"volume\":\"4 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-04-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Workshop on Natural Language Processing and Cognitive Science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5220/0002169901030110\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Workshop on Natural Language Processing and Cognitive Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5220/0002169901030110","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Corpus-based Multi-label Emotion Classification using Maximum Entropy
Thanks to the Internet, blog posts have emerged as a new grassroots medium, creating a huge resource of text-based emotion. Compared with idealized experimental settings, data obtained from the World Wide Web evolve with, and respond more directly to, real-world events. In this paper, our corpus consists of a collection of blog posts annotated with multiple labels, which makes the classification of emotion more precise than assigning a single label from a set of basic emotions. Employing a maximum entropy classifier, our results show that the emotions can be clearly separated by the proposed method. Additionally, we show that the micro-averaged F1-scores of multi-label detection increase as the amount of available training data increases.
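The sketch below illustrates the general setup the abstract describes, not the authors' actual system: a maximum entropy (logistic regression) classifier applied to multi-label emotion classification and scored with the micro-averaged F1-score. The one-vs-rest decomposition, TF-IDF features, scikit-learn components, and the toy blog posts and emotion labels are all assumptions for illustration; the paper's annotated corpus and feature set are not reproduced here.

```python
# Minimal sketch (assumptions, not the authors' implementation):
# multi-label emotion classification with a maximum entropy
# (logistic regression) classifier in a one-vs-rest setup,
# evaluated with the micro-averaged F1-score.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.metrics import f1_score

# Placeholder corpus: each blog post may carry more than one emotion label.
posts = [
    "I finally got the job, I can't stop smiling!",
    "The news about the accident left me shaken and sad.",
    "I can't believe they cancelled the show, so frustrating.",
    "What a beautiful surprise party, thank you all!",
]
labels = [
    {"joy"},
    {"sadness", "fear"},
    {"anger", "surprise"},
    {"joy", "surprise"},
]

# Turn the label sets into a binary indicator matrix (one column per emotion).
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)

# TF-IDF features feeding one maximum entropy model per emotion label.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(posts)

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X, Y)

# Predict on the training set only to illustrate the metric;
# a real experiment would evaluate on held-out data.
Y_pred = clf.predict(X)
print("micro-averaged F1:", f1_score(Y, Y_pred, average="micro"))
```

Micro-averaging pools true positives, false positives, and false negatives over all emotion labels before computing F1, so frequent labels weigh more heavily, which is a common choice for imbalanced multi-label emotion data.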