{"title":"Understanding and classifying image tweets","authors":"Tao Chen, Dongyuan Lu, Min-Yen Kan, Peng Cui","doi":"10.1145/2502081.2502203","DOIUrl":null,"url":null,"abstract":"Social media platforms now allow users to share images alongside their textual posts. These image tweets make up a fast-growing percentage of tweets, but have not been studied in depth unlike their text-only counterparts. We study a large corpus of image tweets in order to uncover what people post about and the correlation between the tweet's image and its text. We show that an important functional distinction is between visually-relevant and visually-irrelevant tweets, and that we can successfully build an automated classifier utilizing text, image and social context features to distinguish these two classes, obtaining a macro F1 of 70.5%.","PeriodicalId":20448,"journal":{"name":"Proceedings of the 21st ACM international conference on Multimedia","volume":"117 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2013-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"81","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 21st ACM international conference on Multimedia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2502081.2502203","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 81
Abstract
Social media platforms now allow users to share images alongside their textual posts. These image tweets make up a fast-growing share of tweets but, unlike their text-only counterparts, have not been studied in depth. We study a large corpus of image tweets to uncover what people post about and how a tweet's image relates to its text. We show that an important functional distinction is between visually-relevant and visually-irrelevant tweets, and that we can successfully build an automated classifier using text, image, and social context features to distinguish these two classes, obtaining a macro F1 of 70.5%.
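The abstract describes a binary classifier built from text, image, and social context features and evaluated with macro-averaged F1. Below is a minimal, hypothetical sketch of that general setup using scikit-learn; the feature blocks, their dimensions, and the logistic-regression model are illustrative assumptions, not the features or model reported in the paper.

```python
# Hypothetical sketch: binary classification of image tweets as visually
# relevant vs. visually irrelevant from concatenated feature blocks,
# evaluated with macro-averaged F1. All features here are random
# placeholders standing in for real text/image/social-context features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n = 1000

# Placeholder feature blocks (in practice, extracted per tweet).
text_feats = rng.random((n, 50))    # e.g. lexical cues from the tweet text
image_feats = rng.random((n, 20))   # e.g. visual descriptors of the image
social_feats = rng.random((n, 5))   # e.g. author / interaction statistics

X = np.hstack([text_feats, image_feats, social_feats])
y = rng.integers(0, 2, size=n)      # 1 = visually relevant, 0 = irrelevant

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = clf.predict(X_test)

# Macro F1 averages per-class F1 scores, weighting both classes equally.
print("macro F1:", f1_score(y_test, pred, average="macro"))
```

Macro-averaging is a natural choice here because it treats the visually-relevant and visually-irrelevant classes equally regardless of how imbalanced they are in the corpus.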