{"title":"基于人的影响为摄影图像生成摘要","authors":"Eun Yi Kim, Eunjeong Ko","doi":"10.1109/ICCI-CC.2015.7259411","DOIUrl":null,"url":null,"abstract":"The selection of canonical images that best represent a scene type is very important for efficiently visualizing search results and re-ranking them. The canonical images can be obtained using various aspects including viewpoint, visual features, and semantics. Here, we propose the selection of canonical images based on human affects. The proposed method is performed using three steps: extract the affective features from the input image, cluster images in the affective space and rank the clusters, and find representative images within each cluster. First, the probabilistic affective model is used to transform the images into the affective space. Thereafter, the images are clustered in the affective space. Then, the selected canonical images are representative and distinctive from each other. Thus, we define three prominent properties that an informative summary should satisfy: coverage, affective coherence, and distinctiveness. Based on these, cluster ranking is performed. Finally, the representative images for each cluster are selected, all of which are displayed as canonical images to the user. Experiments using web image databases demonstrate are not only representative but also exhibit a diverse set of views with minimal redundancy.","PeriodicalId":328695,"journal":{"name":"2015 IEEE 14th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)","volume":"69 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Generating summaries for photographic images based on human affects\",\"authors\":\"Eun Yi Kim, Eunjeong Ko\",\"doi\":\"10.1109/ICCI-CC.2015.7259411\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The selection of canonical images that best represent a scene type is very important for efficiently visualizing search results and re-ranking them. The canonical images can be obtained using various aspects including viewpoint, visual features, and semantics. Here, we propose the selection of canonical images based on human affects. The proposed method is performed using three steps: extract the affective features from the input image, cluster images in the affective space and rank the clusters, and find representative images within each cluster. First, the probabilistic affective model is used to transform the images into the affective space. Thereafter, the images are clustered in the affective space. Then, the selected canonical images are representative and distinctive from each other. Thus, we define three prominent properties that an informative summary should satisfy: coverage, affective coherence, and distinctiveness. Based on these, cluster ranking is performed. Finally, the representative images for each cluster are selected, all of which are displayed as canonical images to the user. 
Experiments using web image databases demonstrate are not only representative but also exhibit a diverse set of views with minimal redundancy.\",\"PeriodicalId\":328695,\"journal\":{\"name\":\"2015 IEEE 14th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)\",\"volume\":\"69 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-09-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2015 IEEE 14th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCI-CC.2015.7259411\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 IEEE 14th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCI-CC.2015.7259411","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Generating summaries for photographic images based on human affects
The selection of canonical images that best represent a scene type is very important for efficiently visualizing search results and re-ranking them. Canonical images can be selected using various aspects, including viewpoint, visual features, and semantics. Here, we propose selecting canonical images based on human affects. The proposed method consists of three steps: extracting affective features from the input images, clustering the images in the affective space and ranking the clusters, and finding representative images within each cluster. First, a probabilistic affective model is used to transform the images into the affective space. The images are then clustered in that space. The selected canonical images should be both representative and distinct from one another, so we define three properties that an informative summary should satisfy: coverage, affective coherence, and distinctiveness. Based on these, cluster ranking is performed. Finally, representative images are selected for each cluster, and all of them are displayed to the user as canonical images. Experiments using web image databases demonstrate that the selected images are not only representative but also exhibit a diverse set of views with minimal redundancy.
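To make the three-step pipeline concrete, the sketch below clusters images in a pre-computed affective feature space, scores each cluster with simple stand-ins for coverage, affective coherence, and distinctiveness, and returns the image closest to each top-ranked centroid. The feature extraction step (the probabilistic affective model) is assumed to be done elsewhere; the clustering algorithm, scoring formulas, and equal weighting are illustrative assumptions, not the authors' definitions.

```python
# Minimal sketch of the abstract's three-step summarization pipeline.
# Assumptions (not from the paper): KMeans clustering, equal-weight cluster
# scores, and centroid-nearest images as representatives.
import numpy as np
from sklearn.cluster import KMeans


def summarize(affective_features, n_clusters=5, top_k=3):
    """Cluster images in the affective space, rank clusters, and pick one
    representative image per top-ranked cluster.

    affective_features: (n_images, n_dims) array of affective descriptors
    (step 1, assumed already extracted). Returns indices of canonical images.
    """
    X = np.asarray(affective_features, dtype=float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(X)            # step 2a: cluster in the affective space
    centers = km.cluster_centers_

    scores = []
    for c in range(n_clusters):
        members = X[labels == c]
        # coverage: fraction of the collection the cluster accounts for
        coverage = len(members) / len(X)
        # affective coherence: tighter clusters score higher
        coherence = 1.0 / (1.0 + np.mean(np.linalg.norm(members - centers[c], axis=1)))
        # distinctiveness: mean distance from this centroid to the other centroids
        others = np.delete(centers, c, axis=0)
        distinctiveness = np.mean(np.linalg.norm(others - centers[c], axis=1))
        scores.append(coverage + coherence + distinctiveness)  # unweighted sum (assumption)

    ranked = np.argsort(scores)[::-1][:top_k]  # step 2b: rank clusters

    canonical = []
    for c in ranked:
        member_idx = np.where(labels == c)[0]
        # step 3: take the image closest to the cluster centroid as representative
        dists = np.linalg.norm(X[member_idx] - centers[c], axis=1)
        canonical.append(int(member_idx[np.argmin(dists)]))
    return canonical


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_features = rng.normal(size=(200, 8))  # stand-in for affective descriptors
    print(summarize(fake_features))
```

The selected indices would then be mapped back to the source images and shown to the user as the summary; swapping in the paper's actual affective model and cluster-ranking criteria would only change the feature matrix and the scoring lines.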