{"title":"Classifying Community QA Questions That Contain an Image","authors":"Kenta Tamaki, Riku Togashi, Sosuke Kato, Sumio Fujita, Hideyuki Maeda, T. Sakai","doi":"10.1145/3234944.3234948","DOIUrl":null,"url":null,"abstract":"We consider the problem of automatically assigning a category to a given question posted to a Community Question Answering (CQA) site, where the question contains not only text but also an image. For example, CQA users may post a photograph of a dress and ask the community \"Is this appropriate for a wedding?'' where the appropriate category for this question might be \"Manners, Ceremonial occasions.'' We tackle this problem using Convolutional Neural Networks with a DualNet architecture for combining the image and text representations. Our experiments with real data from Yahoo Chiebukuro and crowdsourced gold-standard categories show that the DualNet approach outperforms a text-only baseline ($p=.0000$), a sum-and-product baseline ($p=.0000$), Multimodal Compact Bilinear pooling ($p=.0000$), and a combination of sum-and-product and MCB ($p=.0000$), where the p-values are based on a randomised Tukey Honestly Significant Difference test with $B = 5000$ trials.","PeriodicalId":193631,"journal":{"name":"Proceedings of the 2018 ACM SIGIR International Conference on Theory of Information Retrieval","volume":"47 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2018 ACM SIGIR International Conference on Theory of Information Retrieval","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3234944.3234948","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
We consider the problem of automatically assigning a category to a given question posted to a Community Question Answering (CQA) site, where the question contains not only text but also an image. For example, CQA users may post a photograph of a dress and ask the community "Is this appropriate for a wedding?" where the appropriate category for this question might be "Manners, Ceremonial occasions." We tackle this problem using Convolutional Neural Networks with a DualNet architecture for combining the image and text representations. Our experiments with real data from Yahoo Chiebukuro and crowdsourced gold-standard categories show that the DualNet approach outperforms a text-only baseline ($p=.0000$), a sum-and-product baseline ($p=.0000$), Multimodal Compact Bilinear (MCB) pooling ($p=.0000$), and a combination of sum-and-product and MCB ($p=.0000$), where the p-values are based on a randomised Tukey Honestly Significant Difference test with $B = 5000$ trials.
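To make the fusion strategies concrete, below is a minimal PyTorch sketch of a DualNet-style classifier that combines an image representation and a text representation through a shared classification head, with the sum-and-product baseline noted in a comment. This is an illustration under assumptions, not the authors' exact architecture: all module names, layer sizes, and the choice of concatenation as the fusion step are hypothetical, and the text branch here is a simple embedding average rather than the paper's text encoder.

```python
# Minimal sketch (assumed, not the paper's exact model): fuse an image
# feature vector and a text representation for CQA category prediction.
import torch
import torch.nn as nn


class DualNetStyleClassifier(nn.Module):
    def __init__(self, vocab_size, num_classes,
                 img_dim=2048, txt_dim=300, hidden=512):
        super().__init__()
        # Image branch: assumes precomputed img_dim-dimensional features
        # from a pretrained CNN (e.g. a ResNet penultimate layer).
        self.img_proj = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())
        # Text branch: embed tokens and mean-pool them -- a simple
        # stand-in for the paper's text network.
        self.embed = nn.EmbeddingBag(vocab_size, txt_dim, mode="mean")
        self.txt_proj = nn.Sequential(nn.Linear(txt_dim, hidden), nn.ReLU())
        # Fused representation goes through one classification head.
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, img_feats, token_ids, offsets):
        h_img = self.img_proj(img_feats)
        h_txt = self.txt_proj(self.embed(token_ids, offsets))
        # Fusion by concatenation (one plausible DualNet-style choice).
        # A sum-and-product baseline would instead use, e.g.:
        #   fused = torch.cat([h_img + h_txt, h_img * h_txt], dim=-1)
        fused = torch.cat([h_img, h_txt], dim=-1)
        return self.head(fused)


# Usage example with random inputs (shapes only, hypothetical sizes):
model = DualNetStyleClassifier(vocab_size=30000, num_classes=16)
img_feats = torch.randn(2, 2048)              # two questions' image features
token_ids = torch.tensor([1, 5, 9, 2, 7])     # flattened token ids
offsets = torch.tensor([0, 3])                # question boundaries
logits = model(img_feats, token_ids, offsets) # shape: (2, 16)
```

MCB pooling, the third baseline in the abstract, replaces the fusion step with a compact approximation of the bilinear (outer-product) interaction between the two modality vectors; the head over the fused vector would otherwise stay the same.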