{"title":"Multimodal deep neural nets for detecting humor in TV sitcoms","authors":"D. Bertero, Pascale Fung","doi":"10.1109/SLT.2016.7846293","DOIUrl":null,"url":null,"abstract":"We propose a novel approach of combining acoustic and language features to predict humor in dialogues with a deep neural network. We analyze data from three popular TV-sitcoms whose canned laughters give an indication of when the audience would react. We model the setup-punchline sequential relation of conversational humor with a Long Short-Term Memory network, with utterance encodings obtained from two Convolutional Neural Networks, one to model word-level language features and the other to model frame-level acoustic and prosodic features. Our neural network framework is able to improve the F-score of over 5% over a Conditional Random Field baseline trained on a similar acoustic and language feature combination, achieving a much higher recall. It is also more effective over a language features-only setting, with a F-score of 10% higher. It also has a good generalization performance, reaching in most cases precision values of over 70% when trained and tested over different sitcoms.","PeriodicalId":281635,"journal":{"name":"2016 IEEE Spoken Language Technology Workshop (SLT)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE Spoken Language Technology Workshop (SLT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SLT.2016.7846293","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
We propose a novel approach that combines acoustic and language features to predict humor in dialogues with a deep neural network. We analyze data from three popular TV sitcoms whose canned laughter indicates when the audience would react. We model the setup-punchline sequential relation of conversational humor with a Long Short-Term Memory network, with utterance encodings obtained from two Convolutional Neural Networks: one models word-level language features and the other models frame-level acoustic and prosodic features. Our neural network framework improves the F-score by over 5% over a Conditional Random Field baseline trained on a similar acoustic and language feature combination, achieving much higher recall. It is also more effective than a language-features-only setting, with an F-score 10% higher. It also generalizes well, reaching precision values of over 70% in most cases when trained and tested on different sitcoms.
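To make the described architecture concrete, the following is a minimal PyTorch sketch, not the authors' implementation: all layer sizes, feature dimensions, and module names (UtteranceCNN, HumorLSTM) are assumptions. It shows the overall structure from the abstract: a word-level CNN and a frame-level acoustic CNN each encode an utterance, their outputs are concatenated, and an LSTM runs over the utterance sequence to predict a laugh/no-laugh label per utterance.

```python
# Hypothetical sketch of the described multimodal model: two CNN utterance
# encoders (word-level and frame-level) feeding an LSTM over the dialogue.
import torch
import torch.nn as nn

class UtteranceCNN(nn.Module):
    """1-D CNN over a sequence of input vectors (word embeddings or acoustic frames)."""
    def __init__(self, in_dim, n_filters=64, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(in_dim, n_filters, kernel_size, padding=1)

    def forward(self, x):                              # x: (batch, seq_len, in_dim)
        h = torch.relu(self.conv(x.transpose(1, 2)))   # (batch, n_filters, seq_len)
        return h.max(dim=2).values                     # max-pool over time

class HumorLSTM(nn.Module):
    """LSTM over utterance encodings; predicts a laugh/no-laugh label per utterance."""
    def __init__(self, word_dim=300, acoustic_dim=40, n_filters=64, hidden=128):
        super().__init__()
        self.word_cnn = UtteranceCNN(word_dim, n_filters)
        self.acoustic_cnn = UtteranceCNN(acoustic_dim, n_filters)
        self.lstm = nn.LSTM(2 * n_filters, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, 2)

    def forward(self, words, frames):
        # words:  (batch, n_utterances, n_words, word_dim)
        # frames: (batch, n_utterances, n_frames, acoustic_dim)
        b, u = words.shape[:2]
        w = self.word_cnn(words.flatten(0, 1)).view(b, u, -1)
        a = self.acoustic_cnn(frames.flatten(0, 1)).view(b, u, -1)
        h, _ = self.lstm(torch.cat([w, a], dim=-1))    # setup-punchline context
        return self.classifier(h)                      # (batch, n_utterances, 2) logits
```

In the paper, the per-utterance labels come from the sitcoms' canned-laughter track, so a training loop under these assumptions would minimize a per-utterance cross-entropy against those labels.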