Humor Detection Using a Bidirectional Encoder Representations from Transformers (BERT) based Neural Ensemble Model

Rida Miraj, Masaki Aono

2021 8th International Conference on Advanced Informatics: Concepts, Theory and Applications (ICAICTA), 2021-09-29. DOI: 10.1109/ICAICTA53211.2021.9640260
Considerable research has aimed to discover what makes a text funny, and in recent years detecting humor in written sentences has proven to be a fascinating and challenging task. In this paper we describe a mechanism for identifying humor in short texts. We employ a Bidirectional Encoder Representations from Transformers (BERT) architecture because of its strength in learning from sentence context. Our proposed methodology also uses other embedding models, e.g., Word2Vec or FastText, to generate embeddings for the sentences of a given text, and feeds these embeddings as inputs to a neural ensemble network. We illustrate the efficacy of this methodology: using it, we significantly reduced our root mean squared error.
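The abstract does not spell out how the ensemble combines the different sentence embeddings. As a minimal illustrative sketch only — with random toy vectors standing in for the actual BERT/Word2Vec/FastText encoders, hypothetical dimensions, and simple averaging as one plausible ensembling choice that may differ from the paper's actual architecture — the idea of feeding several embeddings into an ensemble of regression heads could look like:

```python
import math
import random

random.seed(0)

# Toy stand-ins for sentence embeddings from two different encoders;
# in the paper these would come from BERT, Word2Vec, or FastText
# (with real dimensions such as 768 or 300, not 8).
bert_emb = [random.gauss(0, 1) for _ in range(8)]
w2v_emb = [random.gauss(0, 1) for _ in range(8)]

def head(x, w, b):
    """Single-neuron regression head: maps an embedding to a humor
    score in (-1, 1) via a dot product and tanh."""
    return math.tanh(sum(xi * wi for xi, wi in zip(x, w)) + b)

# Hypothetical (untrained) weights for each per-embedding head.
w1 = [random.gauss(0, 0.1) for _ in range(8)]
w2 = [random.gauss(0, 0.1) for _ in range(8)]

# Ensemble by averaging the per-embedding predictions -- one simple
# scheme; the paper's exact ensembling layer is not described here.
score = 0.5 * (head(bert_emb, w1, 0.0) + head(w2v_emb, w2, 0.0))
assert -1.0 <= score <= 1.0
print(score)
```

Since each head's output is bounded by tanh, the averaged ensemble score also stays in (-1, 1); a trained version would fit the head weights against humor ratings to minimize root mean squared error.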