2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW): Latest Publications

Uncrowded window inspired information security display
2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Pub Date: 2014-09-08, DOI: 10.1109/ICMEW.2014.6890623
Zhongpai Gao, Guangtao Zhai, Xiongkuo Min, Chunjia Hu
Abstract: With the boom in visual media, people pay increasing attention to privacy protection in public environments. Most existing research on information security, such as cryptography and steganography, is mainly concerned with transmission, yet little has been done to prevent the information displayed on screens from reaching the eyes of bystanders. To address this problem for text-reading applications, we propose an eye-tracking-based solution using the recently introduced concept of the uncrowded window from vision research. The theory of the uncrowded window suggests that human vision can effectively recognize objects only inside a small window. Object features outside the window may still be detectable, but the detected features cannot be combined properly, so those objects are not recognizable. We use an eye tracker to locate the fixation points of the authorized reader in real time, and only the area inside the uncrowded window displays the private information to be protected. A number of dummy windows with fake messages are displayed around the real uncrowded window as diversions. Without precise knowledge of the authorized reader's fixations, the chance that bystanders capture the private message from the surrounding area and the dummy windows is very low. Meanwhile, since the authorized reader reads only within the uncrowded window, the detrimental impact of the dummy windows is almost negligible. The proposed prototype system was written in C++ with the SDKs of Direct3D, Tobii Gaze SDK, CEGUI, MuPDF, OpenCV, etc. An extended demonstration of the system will be provided to show that the proposed method is an effective solution to the problem of information security and display.
Citations: 4
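To illustrate the gaze-contingent idea in the abstract, the following is a minimal Python sketch of the reveal logic only: the window containing the tracked fixation shows the real text, while the other windows show decoy text. The window radius, layout, and decoy strings are illustrative assumptions; the actual prototype is a C++/Direct3D system driven by a Tobii eye tracker and is not reproduced here.

```python
WINDOW_RADIUS = 60  # px; rough stand-in for the extent of the uncrowded window (assumed)

def render_windows(fixation, window_centers, real_text, decoy_texts):
    """Return the text to draw at each candidate window center.

    Only the window containing the current fixation shows the real message;
    every other window shows its decoy string.
    """
    fx, fy = fixation
    rendered = []
    for (cx, cy), decoy in zip(window_centers, decoy_texts):
        inside = (cx - fx) ** 2 + (cy - fy) ** 2 <= WINDOW_RADIUS ** 2
        rendered.append((cx, cy, real_text if inside else decoy))
    return rendered

if __name__ == "__main__":
    centers = [(100, 100), (300, 100), (500, 100)]
    decoys = ["lorem ipsum", "dolor sit", "amet consectetur"]
    gaze = (305, 98)  # would come from the eye tracker in real time
    for cx, cy, text in render_windows(gaze, centers, "secret message", decoys):
        print(f"window at ({cx},{cy}): {text}")
```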
Discriminant Hyper-Laplacian projections with its application to face recognition
2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Pub Date: 2014-09-08, DOI: 10.1109/ICMEW.2014.6890566
Sheng Huang, Dan Yang, Yongxin Ge, Dengyang Zhao, Xin Feng
Abstract: Discriminant Locality Preserving Projections (DLPP) is one of the most influential supervised subspace learning algorithms that consider both discriminative and geometric (manifold) information. An obvious drawback of DLPP is that it considers only the pairwise geometric relationships of samples. However, in many real-world problems, relationships among samples are more complex than pairwise. Naively reducing these complex relationships to pairwise ones inevitably loses information that is crucial for classification and clustering. We address this issue by using the Hyper-Laplacian instead of the regular Laplacian in DLPP, which can depict only pairwise relationships. The new algorithm is a generalization of DLPP, and we name it Discriminant Hyper-Laplacian Projection (DHLP). Five popular face databases are adopted to validate our work. The results demonstrate the superiority of DHLP over DLPP, particularly for face recognition in the wild.
Citations: 11
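The abstract's key change is swapping DLPP's pairwise graph Laplacian for a hypergraph Laplacian. A commonly used normalized form is L = I - Dv^(-1/2) H W De^(-1) H^T Dv^(-1/2); the sketch below builds it with NumPy under the assumption that each hyperedge groups several samples at once (the paper's exact hypergraph construction is not given here).

```python
import numpy as np

def hypergraph_laplacian(H, w=None):
    """Normalized hypergraph Laplacian L = I - Dv^-1/2 H W De^-1 H^T Dv^-1/2.

    H : (n_vertices, n_edges) incidence matrix, H[v, e] = 1 if vertex v is in hyperedge e.
    w : optional hyperedge weights (defaults to 1).
    """
    n_v, n_e = H.shape
    w = np.ones(n_e) if w is None else np.asarray(w, dtype=float)
    W = np.diag(w)
    dv = H @ w                      # vertex degrees
    de = H.sum(axis=0)              # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    De_inv = np.diag(1.0 / de)
    theta = Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt
    return np.eye(n_v) - theta

if __name__ == "__main__":
    # Toy example: 4 samples, hyperedges formed by (assumed) class membership.
    H = np.array([[1, 0],
                  [1, 0],
                  [0, 1],
                  [0, 1]], dtype=float)
    print(hypergraph_laplacian(H))
```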
Sample edge offset compensation for HEVC based 3D Video Coding
2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Pub Date: 2014-09-08, DOI: 10.1109/ICMEW.2014.6890587
Qin Yu, Ying Chen, Li Zhang, Siwei Ma
Abstract: In this paper, an in-loop sample edge offset compensation (SEOC) framework is proposed for High Efficiency Video Coding (HEVC) based 3D video coding (3D-HEVC). The framework targets improving the reconstruction quality of depth images, especially in edge areas. In a typical 3DV system, depth images are used for synthesizing virtual views, so preserving high-quality depth images, and especially their edge information, is important. However, ringing artifacts may be introduced at depth edges due to the compression distortion of the depth images. The SEOC framework resolves this problem after reconstruction by identifying each edge pixel of the full depth image and enhancing it with offset values coded in the bitstream. Experimental results demonstrate that, compared with the original 3D-HEVC, the proposed algorithm achieves about 6% bitrate saving on average.
Citations: 0
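A rough decoder-side sketch of the edge-offset idea: classify reconstructed depth pixels into edge categories and add offsets as if they had been parsed from the bitstream. The gradient-threshold edge classification and the two-category split are assumptions for illustration, not the 3D-HEVC/SEOC syntax.

```python
import numpy as np

def apply_edge_offsets(depth, offsets, grad_thresh=8):
    """Illustrative edge-offset compensation on a reconstructed depth map.

    depth   : 2-D array (reconstructed depth image).
    offsets : dict mapping edge category -> offset value (as if parsed from the bitstream).
    """
    d = depth.astype(np.int32)
    gx = np.zeros_like(d)
    gy = np.zeros_like(d)
    gx[:, 1:-1] = d[:, 2:] - d[:, :-2]          # horizontal gradient
    gy[1:-1, :] = d[2:, :] - d[:-2, :]          # vertical gradient
    mag = np.abs(gx) + np.abs(gy)
    out = d.copy()
    # Category 0: rising edge; category 1: falling edge (toy classification).
    rising = (mag > grad_thresh) & (gx + gy > 0)
    falling = (mag > grad_thresh) & (gx + gy <= 0)
    out[rising] += offsets.get(0, 0)
    out[falling] += offsets.get(1, 0)
    return np.clip(out, 0, 255).astype(depth.dtype)

if __name__ == "__main__":
    depth = np.tile(np.repeat(np.array([50, 200], dtype=np.uint8), 4), (4, 1))
    print(apply_edge_offsets(depth, {0: 3, 1: -3}))
```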
A fisher discriminant framework based on Kernel Entropy Component Analysis for feature extraction and emotion recognition
2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Pub Date: 2014-09-08, DOI: 10.1109/ICMEW.2014.6890577
Lei Gao, L. Qi, E. Chen, L. Guan
Abstract: This paper aims at providing a general method for feature extraction and recognition. The most essential issues in pattern recognition are extracting discriminant features and improving recognition accuracy. Kernel Entropy Component Analysis (KECA), a new method for data transformation and dimensionality reduction, has attracted increasing attention. However, as KECA only reveals structure relating to the Renyi entropy of the input data set, it cannot effectively extract discriminant information for recognition. In this paper, we propose combining KECA and Fisher's linear discriminant analysis (LDA), utilizing the information-entropy descriptor and the class scatter information to improve recognition performance. The proposed method is applied to speech-based emotion recognition and evaluated through experiments on the RML audiovisual emotion database. The results clearly demonstrate the effectiveness of the proposed solution.
Citations: 9
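A small NumPy sketch of one way such a pipeline could be arranged: KECA keeps the kernel eigen-directions with the largest Renyi-entropy contributions, and a Fisher discriminant is then computed in that reduced space. The RBF kernel, the two-class Fisher direction, and the component counts are assumptions; the paper's exact combination rule may differ.

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def keca(K, n_components):
    """Kernel Entropy Component Analysis projection of the training kernel matrix."""
    lam, E = np.linalg.eigh(K)                    # eigenvalues in ascending order
    lam = np.clip(lam, 0, None)
    ones = np.ones(K.shape[0])
    entropy = lam * (E.T @ ones) ** 2             # Renyi-entropy contribution per axis
    idx = np.argsort(entropy)[::-1][:n_components]
    return E[:, idx] * np.sqrt(lam[idx])          # projected training samples (N x d)

def fisher_lda(Z, y):
    """One-dimensional Fisher discriminant direction for two classes."""
    m0, m1 = Z[y == 0].mean(0), Z[y == 1].mean(0)
    Sw = np.cov(Z[y == 0].T) + np.cov(Z[y == 1].T)
    return np.linalg.solve(Sw + 1e-6 * np.eye(Z.shape[1]), m1 - m0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(2, 1, (20, 5))])
    y = np.array([0] * 20 + [1] * 20)
    Z = keca(rbf_kernel(X, sigma=2.0), n_components=3)
    w = fisher_lda(Z, y)
    print("class separation:", (Z[y == 1] @ w).mean() - (Z[y == 0] @ w).mean())
```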
User-centered design approach to promoting multimedia university degree program
2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Pub Date: 2014-09-08, DOI: 10.1109/ICMEW.2014.6890679
Emilija Stojmenova Duh, Argene Superina, Jože Guna, A. Kos, J. Bester, M. Pogacnik
Abstract: This article presents the initiatives and user-centered design activities undertaken by the Faculty of Electrical Engineering and the Faculty of Computer Science at the largest Slovenian university, the University of Ljubljana, to increase interest in technical studies, especially in the newly introduced interdisciplinary university study program Multimedia. Special attention was devoted to reducing the gender imbalance in engineering.
Citations: 0
Content-based social image retrieval with context regularization
2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Pub Date: 2014-09-08, DOI: 10.1109/ICMEW.2014.6890601
Leiquan Wang, Zhicheng Zhao, Fei Su, Weichen Sun
Abstract: The retrieval and recommendation of social media provide an immense opportunity to exploit the collective behavior of community users through linked multi-modal data, such as images and tags, where tags provide context information and images represent visual content. Content information is more stable and reliable than user-contributed context information, a fact ignored by many existing methods. In this paper, by discovering the latent feature space between visual features and context, we propose a novel approach for social image retrieval that imposes context regularization terms to constrain the visual features. The method can effectively reflect the interior visual structure for social image representation. Experimental results on the NUS-WIDE-OBJECT dataset demonstrate that the proposed approach obtains competitive performance compared with state-of-the-art methods.
Citations: 3
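One illustrative reading of "context regularization": factorize the visual features while a graph Laplacian built from tag similarity smooths the latent image codes, i.e. minimize ||X - V W||_F^2 + lambda * tr(V^T L V). The sketch below uses plain gradient steps; the objective, the similarity construction, and the optimizer are assumptions, not the paper's formulation.

```python
import numpy as np

def context_regularized_codes(X, S, n_latent=10, lam=0.1, n_iter=200, lr=1e-3, seed=0):
    """Learn latent codes for images with visual features X (n x d), where the
    tag-similarity matrix S (n x n) acts as context regularization.

    Illustrative objective: ||X - V W||_F^2 + lam * tr(V^T L V), with L = D - S.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    L = np.diag(S.sum(1)) - S                      # graph Laplacian from tag context
    V = rng.normal(scale=0.1, size=(n, n_latent))  # latent image codes
    W = rng.normal(scale=0.1, size=(n_latent, d))  # latent-to-visual mapping
    for _ in range(n_iter):
        R = V @ W - X
        grad_V = 2 * R @ W.T + 2 * lam * L @ V
        grad_W = 2 * V.T @ R
        V -= lr * grad_V
        W -= lr * grad_W
    return V

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(30, 64))                  # toy visual features
    tags = rng.integers(0, 2, size=(30, 5))        # toy binary tag vectors
    S = (tags @ tags.T > 0).astype(float)          # shared-tag similarity
    np.fill_diagonal(S, 0)
    V = context_regularized_codes(X, S)
    print(V.shape)
```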
Multi-directional skip and direct modes design in bi-predictive slices for AVS2 standard
2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Pub Date: 2014-09-08, DOI: 10.1109/ICMEW.2014.6890689
Zhenjiang Shao, Lu Yu
Abstract: The second generation of the Audio Video coding Standard (AVS2), Part 2, achieves outstanding compression performance through a series of advanced technologies, e.g., a flexible partition structure and highly efficient prediction methods. However, substantial motion-information redundancy remains between spatially adjacent blocks. This paper therefore introduces a technique named Multi-directional SKIP and DIRECT modes (MDSD) to improve the SKIP/DIRECT modes so that they adapt to the characteristics of block motion. Based on a study of the correlation between spatially adjacent blocks with the same or different motion models, a priority-based motion-information derivation method is designed: a higher priority is assigned to the motion information of neighboring blocks whose motion model matches that of the current block. Experiments on the AVS2 reference software RD3.0 show BD-rate reductions of up to 3.9%.
Citations: 3
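A toy sketch of the priority rule described in the abstract: when building the SKIP/DIRECT candidate list, neighbours whose motion model matches the current block's target model are placed ahead of the others, scanned in a fixed spatial order. The data layout, model labels, and candidate count are illustrative assumptions, not the AVS2 specification.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class NeighborMotion:
    motion_model: str                 # e.g. "bi", "fwd", "bwd" (illustrative labels)
    mv: Tuple[int, int]
    ref_idx: int

def derive_skip_candidates(neighbors: List[Optional[NeighborMotion]],
                           target_model: str,
                           max_cands: int = 2) -> List[NeighborMotion]:
    """Priority-based derivation: same-model neighbours first, then the others,
    scanning neighbours in a fixed spatial order (left, above, above-right, ...)."""
    available = [n for n in neighbors if n is not None]
    same = [n for n in available if n.motion_model == target_model]
    other = [n for n in available if n.motion_model != target_model]
    return (same + other)[:max_cands]

if __name__ == "__main__":
    nbrs = [NeighborMotion("fwd", (3, -1), 0),
            None,
            NeighborMotion("bi", (2, 0), 1),
            NeighborMotion("bi", (1, 1), 0)]
    for cand in derive_skip_candidates(nbrs, target_model="bi"):
        print(cand)
```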
QOE evaluation of video services considering users' behavior
2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Pub Date: 2014-07-14, DOI: 10.1109/ICMEW.2014.6890558
Jiarun Song, Fuzheng Yang, Shuai Wan
Abstract: Various factors in the communication ecosystem act together on the final experience of the users of video services. In this paper, the joint effect of both the technology domain and the human domain is considered and incorporated to evaluate users' quality of experience (QoE) for an IPTV service. Parameters such as video quality, watching duration, fast-forward duration, and fast-forward count are collected and combined to establish an objective model for accurate QoE prediction. Experimental results show that the proposed model estimates users' quality of experience well.
Citations: 4
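A minimal sketch of how the listed parameters could be combined into an objective QoE predictor, here as a linear least-squares fit against subjective scores. The linear form, the toy data, and the fitted coefficients are assumptions; the paper's actual model is not reproduced here.

```python
import numpy as np

def fit_qoe_model(features, mos):
    """Least-squares fit of QoE = w0 + w1*quality + w2*watch_dur + w3*ff_dur + w4*ff_count.

    features : (n_sessions, 4) array of [video_quality, watch_duration,
               fast_forward_duration, fast_forward_count].
    mos      : subjective scores to regress against.
    """
    A = np.hstack([np.ones((features.shape[0], 1)), features])
    w, *_ = np.linalg.lstsq(A, mos, rcond=None)
    return w

def predict_qoe(w, features):
    A = np.hstack([np.ones((features.shape[0], 1)), features])
    return A @ w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.uniform(size=(50, 4)) * [5, 3600, 300, 10]   # toy viewing sessions
    mos = 0.6 * feats[:, 0] - 0.002 * feats[:, 2] + rng.normal(0, 0.2, 50) + 1.5
    w = fit_qoe_model(feats, mos)
    print("fitted weights:", np.round(w, 3))
    print("first prediction:", round(float(predict_qoe(w, feats[:1])[0]), 2))
```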
Learning visual saliency for stereoscopic images
2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Pub Date: 2014-07-14, DOI: 10.1109/ICMEW.2014.6890709
Yuming Fang, Weisi Lin, Zhijun Fang, Jianjun Lei, P. Callet, Feiniu Yuan
Abstract: Various saliency detection models have been proposed for saliency prediction in 2D images/video over the past decades. With the rapid development of stereoscopic display techniques, stereoscopic saliency detection is much desired for emerging stereoscopic applications. Compared with 2D saliency detection, the depth factor has to be considered in stereoscopic saliency detection. Inspired by the wide application of machine learning techniques in 2D saliency detection, we propose to use machine learning for stereoscopic saliency detection in this paper. Contrast features from color, luminance, and texture in 2D images are adopted in the proposed framework. For the depth factor, we consider both depth contrast and depth degree in the learned model. Additionally, the center-bias factor is used as an input feature for learning the model. Experimental results on a recent large-scale eye-tracking database show that the proposed model performs better than existing ones.
Citations: 6
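The following sketch assembles the kinds of features the abstract lists (2D contrast, depth contrast, depth degree, center bias) per image patch and fits a simple ridge regressor to stand-in fixation labels. The patch features, the regressor, and the labels are assumptions; the paper's learning algorithm and eye-tracking ground truth are not reproduced.

```python
import numpy as np

def patch_features(luma, depth, patch=16):
    """Per-patch features: luminance contrast, depth contrast, depth degree, center bias."""
    h, w = luma.shape
    feats, centers = [], []
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    diag = np.hypot(cy, cx)
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            pl = luma[y:y + patch, x:x + patch]
            pd = depth[y:y + patch, x:x + patch]
            center_bias = 1.0 - np.hypot(y + patch / 2 - cy, x + patch / 2 - cx) / diag
            feats.append([pl.std(),                        # 2-D (luminance) contrast
                          abs(pd.mean() - depth.mean()),   # depth contrast vs whole image
                          pd.mean(),                       # depth degree (closeness)
                          center_bias])
            centers.append((y, x))
    return np.array(feats), centers

def train_saliency_regressor(F, labels, reg=1e-2):
    """Ridge regression from patch features to fixation-density labels."""
    A = np.hstack([F, np.ones((F.shape[0], 1))])
    w = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ labels)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    luma, depth = rng.random((64, 64)), rng.random((64, 64))
    F, _ = patch_features(luma, depth)
    labels = rng.random(F.shape[0])                # stand-in for eye-tracking density
    w = train_saliency_regressor(F, labels)
    print("learned weights:", np.round(w, 3))
```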
An efficient coding method for coding Region-of-Interest locations in AVS2
2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Pub Date: 2014-07-14, DOI: 10.1109/ICMEW.2014.6890688
Mingliang Chen, Weiyao Lin, Xiaozhen Zheng
Abstract: Region-of-Interest (ROI) location information in videos has many practical uses in the video coding field, such as video content analysis and user-experience improvement. Although ROI-based coding has been studied widely to improve coding efficiency for video content, the ROI location information itself is seldom coded in the video bitstream. In this paper, we introduce our proposed ROI location coding tool, which has been adopted in the surveillance profile of the AVS2 video coding standard. The tool includes three schemes: a direct-coding scheme, a differential-coding scheme, and a reconstructed-coding scheme. We illustrate the details of these schemes and analyze their respective advantages and disadvantages.
Citations: 3
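Of the three schemes, differential coding is the easiest to sketch: the first ROI rectangle of a track is sent directly, and later rectangles are sent as coordinate differences relative to the previous frame. The rectangle representation and the absence of entropy coding are simplifying assumptions, not the AVS2 surveillance-profile syntax.

```python
from typing import List, Tuple

Rect = Tuple[int, int, int, int]   # (x, y, width, height)

def encode_roi_sequence(rois_per_frame: List[Rect]) -> List[Tuple[int, int, int, int]]:
    """Differential coding of one ROI track: first frame absolute, then deltas."""
    coded, prev = [], None
    for roi in rois_per_frame:
        if prev is None:
            coded.append(roi)                                      # direct-coded anchor
        else:
            coded.append(tuple(c - p for c, p in zip(roi, prev)))  # differential
        prev = roi
    return coded

def decode_roi_sequence(coded: List[Tuple[int, int, int, int]]) -> List[Rect]:
    rois, prev = [], None
    for sym in coded:
        roi = sym if prev is None else tuple(c + d for c, d in zip(prev, sym))
        rois.append(roi)
        prev = roi
    return rois

if __name__ == "__main__":
    track = [(120, 80, 64, 64), (124, 82, 64, 64), (130, 85, 66, 64)]
    symbols = encode_roi_sequence(track)
    assert decode_roi_sequence(symbols) == track
    print(symbols)
```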