{"title":"图像情感计算","authors":"Sicheng Zhao","doi":"10.1145/2964284.2971473","DOIUrl":null,"url":null,"abstract":"Images can convey rich semantics and induce strong emotions in viewers. My research aims to predict image emotions from different aspects with respect to two main challenges: affective gap and subjective evaluation. To bridge the affective gap, we extract emotion features based on principles-of-art to recognize image-centric dominant emotions. As the emotions that are induced in viewers by an image are highly subjective and different, we propose to predict user-centric personalized emotion perceptions for each viewer and image-centric emotion probability distribution for each image. To tackle the subjective evaluation issue, we set up a large scale image emotion dataset from Flickr, named Image-Emotion-Social-Net, on both dimensional and categorical emotion representations with over 1 million images and about 8,000 users. Different types of factors may influence personalized image emotion perceptions, including visual content, social context, temporal evolution and location influence. We make an initial attempt to jointly combine them by the proposed rolling multi-task hypergraph learning. Both discrete and continuous emotion distributions are modelled via shared sparse learning. Further, several potential applications based on image emotions are designed and implemented.","PeriodicalId":140670,"journal":{"name":"Proceedings of the 24th ACM international conference on Multimedia","volume":"77 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Image Emotion Computing\",\"authors\":\"Sicheng Zhao\",\"doi\":\"10.1145/2964284.2971473\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Images can convey rich semantics and induce strong emotions in viewers. My research aims to predict image emotions from different aspects with respect to two main challenges: affective gap and subjective evaluation. To bridge the affective gap, we extract emotion features based on principles-of-art to recognize image-centric dominant emotions. As the emotions that are induced in viewers by an image are highly subjective and different, we propose to predict user-centric personalized emotion perceptions for each viewer and image-centric emotion probability distribution for each image. To tackle the subjective evaluation issue, we set up a large scale image emotion dataset from Flickr, named Image-Emotion-Social-Net, on both dimensional and categorical emotion representations with over 1 million images and about 8,000 users. Different types of factors may influence personalized image emotion perceptions, including visual content, social context, temporal evolution and location influence. We make an initial attempt to jointly combine them by the proposed rolling multi-task hypergraph learning. Both discrete and continuous emotion distributions are modelled via shared sparse learning. 
Further, several potential applications based on image emotions are designed and implemented.\",\"PeriodicalId\":140670,\"journal\":{\"name\":\"Proceedings of the 24th ACM international conference on Multimedia\",\"volume\":\"77 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 24th ACM international conference on Multimedia\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/2964284.2971473\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 24th ACM international conference on Multimedia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2964284.2971473","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Images can convey rich semantics and induce strong emotions in viewers. My research aims to predict image emotions from different aspects, addressing two main challenges: the affective gap and subjective evaluation. To bridge the affective gap, we extract emotion features based on principles of art to recognize image-centric dominant emotions. Because the emotions an image induces are highly subjective and vary across viewers, we propose to predict user-centric personalized emotion perceptions for each viewer and an image-centric emotion probability distribution for each image. To tackle the subjective evaluation issue, we set up a large-scale image emotion dataset from Flickr, named Image-Emotion-Social-Net, with both dimensional and categorical emotion representations, covering over 1 million images and about 8,000 users. Different types of factors may influence personalized image emotion perceptions, including visual content, social context, temporal evolution, and location. We make an initial attempt to combine these factors jointly via the proposed rolling multi-task hypergraph learning. Both discrete and continuous emotion distributions are modelled via shared sparse learning. Further, several potential applications based on image emotions are designed and implemented.
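To make the idea of an image-centric discrete emotion distribution concrete, the Python sketch below is a minimal illustration only, not the paper's actual formulation of shared sparse learning: it fits an L1-regularized (sparse) multi-output regressor from hypothetical visual features to per-image probability distributions over eight commonly used emotion categories, then renormalizes the prediction. The feature dimension, training data, and category list are assumptions for illustration.

```python
# Illustrative sketch (assumed setup, not the author's method): sparse
# regression from visual features to a discrete emotion distribution.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.multioutput import MultiOutputRegressor

# Eight discrete emotion categories often used in image emotion work (assumed here).
EMOTIONS = ["amusement", "awe", "contentment", "excitement",
            "anger", "disgust", "fear", "sadness"]

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 64))            # hypothetical visual features
Y_train = rng.dirichlet(np.ones(8), size=200)   # synthetic per-image emotion distributions

# L1 penalty induces sparse feature weights; one regressor per emotion category.
model = MultiOutputRegressor(Lasso(alpha=0.01, max_iter=5000))
model.fit(X_train, Y_train)

def predict_distribution(x: np.ndarray) -> np.ndarray:
    """Predict, clamp negatives, and renormalize to a probability distribution."""
    raw = model.predict(x.reshape(1, -1))[0]
    raw = np.clip(raw, 0.0, None)
    total = raw.sum()
    return raw / total if total > 0 else np.full(len(EMOTIONS), 1.0 / len(EMOTIONS))

probs = predict_distribution(rng.normal(size=64))
print(dict(zip(EMOTIONS, probs.round(3))))
```

This treats each emotion category as one output of a shared sparse regression; the paper's rolling multi-task hypergraph learning additionally incorporates social context, temporal evolution, and location, which are omitted here.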