Context-aware Image Tweet Modelling and Recommendation

Tao Chen, Xiangnan He, Min-Yen Kan
{"title":"Context-aware Image Tweet Modelling and Recommendation","authors":"Tao Chen, Xiangnan He, Min-Yen Kan","doi":"10.1145/2964284.2964291","DOIUrl":null,"url":null,"abstract":"While efforts have been made on bridging the semantic gap in image understanding, the in situ understanding of social media images is arguably more important but has had less progress. In this work, we enrich the representation of images in image tweets by considering their social context. We argue that in the microblog context, traditional image features, e.g., low-level SIFT or high-level detected objects, are far from adequate in interpreting the necessary semantics latent in image tweets. To bridge this gap, we move from the images' pixels to their context and propose a context-aware image bf tweet modelling (CITING) framework to mine and fuse contextual text to model such social media images' semantics. We start with tweet's intrinsic contexts, namely, 1) text within the image itself and 2) its accompanying text; and then we turn to the extrinsic contexts: 3) the external web page linked to by the tweet's embedded URL, and 4) the Web as a whole. These contexts can be leveraged to benefit many fundamental applications. To demonstrate the effectiveness our framework, we focus on the task of personalized image tweet recommendation, developing a feature-aware matrix factorization framework that encodes the contexts as a part of user interest modelling. Extensive experiments on a large Twitter dataset show that our proposed method significantly improves performance. Finally, to spur future studies, we have released both the code of our recommendation model and our image tweet dataset.","PeriodicalId":140670,"journal":{"name":"Proceedings of the 24th ACM international conference on Multimedia","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"74","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 24th ACM international conference on Multimedia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2964284.2964291","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 74

Abstract

While efforts have been made to bridge the semantic gap in image understanding, the in situ understanding of social media images is arguably more important but has seen less progress. In this work, we enrich the representation of images in image tweets by considering their social context. We argue that in the microblog context, traditional image features, e.g., low-level SIFT or high-level detected objects, are far from adequate for interpreting the necessary semantics latent in image tweets. To bridge this gap, we move from the images' pixels to their context and propose a context-aware image tweet modelling (CITING) framework to mine and fuse contextual text to model such social media images' semantics. We start with the tweet's intrinsic contexts, namely, 1) text within the image itself and 2) its accompanying text; we then turn to the extrinsic contexts: 3) the external web page linked to by the tweet's embedded URL, and 4) the Web as a whole. These contexts can be leveraged to benefit many fundamental applications. To demonstrate the effectiveness of our framework, we focus on the task of personalized image tweet recommendation, developing a feature-aware matrix factorization framework that encodes the contexts as part of user interest modelling. Extensive experiments on a large Twitter dataset show that our proposed method significantly improves performance. Finally, to spur future studies, we have released both the code of our recommendation model and our image tweet dataset.
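To make the "feature-aware matrix factorization" idea concrete, the sketch below illustrates one plausible way contextual text features can be folded into an MF scorer: each image tweet is represented as a weighted bag of contextual text features (e.g., terms mined from the embedded image text, the accompanying tweet text, the linked web page, and web-scale enrichment), and the tweet's latent vector is the feature-weighted sum of learned feature embeddings, scored against a user embedding. This is a minimal illustrative sketch in plain NumPy; the class name, dimensions, and feature scheme are assumptions for exposition, not the authors' released implementation.

```python
import numpy as np


class FeatureAwareMF:
    """Illustrative feature-aware matrix factorization scorer.

    An image tweet is described by contextual text features; its latent
    vector is the feature-weighted sum of learned feature embeddings, and
    the predicted preference is the dot product with the user embedding.
    NOTE: names, sizes, and the (omitted) training loop are assumptions,
    not the paper's released code.
    """

    def __init__(self, n_users, n_features, dim=32, seed=0):
        rng = np.random.default_rng(seed)
        self.P = 0.01 * rng.standard_normal((n_users, dim))     # user embeddings
        self.Q = 0.01 * rng.standard_normal((n_features, dim))  # context-feature embeddings

    def tweet_vector(self, feat_ids, feat_vals):
        # Aggregate the tweet's contextual features into one item vector:
        # a weighted sum of the corresponding feature embeddings.
        return feat_vals @ self.Q[feat_ids]

    def score(self, user_id, feat_ids, feat_vals):
        # Predicted preference of a user for an image tweet.
        return float(self.P[user_id] @ self.tweet_vector(feat_ids, feat_vals))


if __name__ == "__main__":
    model = FeatureAwareMF(n_users=1000, n_features=5000)
    # Hypothetical tweet with three active contextual features (e.g., TF-IDF weights).
    s = model.score(user_id=42,
                    feat_ids=np.array([10, 250, 4999]),
                    feat_vals=np.array([0.5, 0.3, 0.2]))
    print(f"predicted preference: {s:.4f}")
```

Because user preferences are learned in the shared feature space rather than over item IDs alone, such a model can score previously unseen image tweets from their contextual features, which is what makes context encoding useful for personalized recommendation.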