Computer-generated Humour Based on GPT-2

Yuchen Su
DOI: 10.1109/ICDSCA56264.2022.9987901
Published in: 2022 IEEE 2nd International Conference on Data Science and Computer Application (ICDSCA), 2022-10-28

Abstract

Humour generation has long been a major challenge in the area of computational humour. In this paper, we explore how to generate jokes from keywords by fine-tuning the GPT-2 pre-trained model on a keyword-conditioned task and comparing it with an LSTM-based encoder-decoder model. We trained the model on the short jokes of Conan O'Brien and the pun dataset of Yang et al., with the help of a POS tagger. We then evaluate humour using human evaluation, and the similarity between keywords and generated jokes using automatic evaluation. In terms of the final average score, our model outperforms the LSTM-based encoder-decoder model.
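The abstract does not specify how the keywords are joined to each joke during fine-tuning; a common scheme for keyword-conditioned GPT-2 training is to prepend the keywords and a separator token to each training example. The sketch below illustrates that formatting step only; the separator tokens and the helper name are assumptions for illustration, not taken from the paper.

```python
# Hypothetical keyword-conditioned formatting for GPT-2 fine-tuning.
# The <|sep|> separator and the function name are assumptions; the paper
# does not state its exact input format.

def make_training_example(keywords, joke,
                          sep="<|sep|>", eos="<|endoftext|>"):
    """Prepend keywords to a joke so the model can learn a
    keywords -> joke mapping during causal LM fine-tuning."""
    return " ".join(keywords) + f" {sep} {joke} {eos}"

example = make_training_example(
    ["dog", "mailman"],
    "My dog finally caught the mailman. Now he checks the porch daily.")
```

At generation time, the same scheme would feed `"dog mailman <|sep|>"` as the prompt and let the fine-tuned model continue with a joke.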