Latest Publications: 2012 IEEE International Conference on Multimedia and Expo Workshops

Living the Past: Augmented Reality and Archeology
Andrea Bernardini, C. Delogu, E. Pallotti, Luca Costantini
Pub Date: 2012-07-09 · DOI: 10.1109/ICMEW.2012.67
Abstract: Archeological remnants in urban areas tend to be absorbed into the urban landscape or hidden in subterranean locations, and are therefore difficult for visitors to access. In previous work, we developed a mobile application that guided visitors in real time through various archaeological sites using texts, images, and videos. An evaluation that collected visitors' impressions and suggestions showed that the application let them visit archeological remnants in a more participative way, but that most visitors were unable to imagine how the remnants related to the ancient urban landscape. To solve this problem and improve the visitor experience, we are now working on another application that combines historical and archeological details with an immersive experience. The application recognizes a cultural heritage element by image recognition or by positioning and augments the interface with various layers of information. Beyond information, it will also offer visitors an emotional experience.
Citations: 7
Depth Map Super-Resolution Using Synthesized View Matching for Depth-Image-Based Rendering
Wei Hu, Gene Cheung, Xin Li, O. Au
Pub Date: 2012-07-09 · DOI: 10.1109/ICMEW.2012.111
Abstract: In the texture-plus-depth format of 3D visual data, texture and depth maps of multiple viewpoints are coded and transmitted at the sender. At the receiver, decoded texture and depth maps of two neighboring viewpoints are used to synthesize a desired intermediate view via depth-image-based rendering (DIBR). In this paper, to enable transmission of depth maps at low resolution for bit savings, we propose a novel super-resolution (SR) algorithm that increases the resolution of the received depth map at the decoder to match the corresponding received high-resolution texture map for DIBR. Unlike previous depth map SR techniques that utilize only the texture map of the same view 0 to interpolate missing depth pixels of view 0, we use texture maps of both the same and neighboring viewpoints, 0 and 1, so that the error between the original texture map of view 1 and the synthesized image of view 1 (interpolated using texture and depth maps of view 0) can be used as a regularization term during depth map SR of view 0. Further, piecewise smoothness of the reconstructed depth map is enforced by computing only the lowest-frequency coefficients in the Graph-based Transform (GBT) domain for each interpolated block. Experimental results show that our SR scheme outperforms a previous scheme by up to 1.7 dB in synthesized view quality in PSNR.
Citations: 15
Statistical Color Model Based Adult Video Filter
Liang Yin, Mingzhi Dong, Weihong Deng, Jun Guo, Bin Zhang
Pub Date: 2012-07-09 · DOI: 10.1109/ICMEW.2012.66
Abstract: Guided by statistical color models, this paper proposes a real-time detector that filters adult content in video. A generic color model is constructed by statistical analysis of sample images containing adult pixels. We fully exploit the temporal continuity of video, i.e., the preceding and following N frames are considered in the classification. Experiments show that our method achieves satisfactory performance in detecting adult content. The remainder of the paper addresses the application of the real-time filter to blocking adult content from children.
Citations: 4
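As a rough illustration of the kind of pipeline this abstract describes — a per-pixel color rule plus temporal smoothing over the preceding and following N frames — here is a minimal sketch. The RGB thresholds are a classic generic skin-detection rule, not the statistical model trained in the paper, and the window size and ratio threshold are made-up parameters.

```python
# Hypothetical sketch of the described pipeline: a skin-color pixel rule
# plus a sliding window over neighboring frames to smooth per-frame
# decisions. Thresholds below are illustrative assumptions.

def is_skin(r, g, b):
    """Generic explicit RGB skin rule (assumption, not the paper's model)."""
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and
            abs(r - g) > 15 and r > g and r > b)

def frame_skin_ratio(pixels):
    """Fraction of skin-classified pixels in one frame ((r, g, b) tuples)."""
    return sum(1 for p in pixels if is_skin(*p)) / len(pixels)

def classify_video(frames, n=2, ratio_thresh=0.4):
    """Flag each frame by averaging skin ratios over the preceding and
    following n frames, exploiting the video's temporal continuity."""
    ratios = [frame_skin_ratio(f) for f in frames]
    labels = []
    for i in range(len(ratios)):
        window = ratios[max(0, i - n): i + n + 1]
        labels.append(sum(window) / len(window) > ratio_thresh)
    return labels
```

The temporal window is the point of interest here: a single frame misclassified by the color rule is outvoted by its neighbors, which is what makes a per-pixel model viable in real time.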
Inter Prediction Based on Low-rank Matrix Completion
Yunhui Shi, He Li, Jin Wang, Wenpeng Ding, Baocai Yin
Pub Date: 2012-07-09 · DOI: 10.1109/ICMEW.2012.98
Abstract: This paper proposes a new method of inter prediction based on low-rank matrix completion. By collecting and rearranging image regions with high correlation, a low-rank or approximately low-rank matrix can be generated. We view the prediction values as the missing part of an incomplete low-rank matrix and obtain the prediction by recovering the matrix. Taking advantage of the exact recovery of incomplete matrices, low-rank-based prediction can better exploit temporal correlation. The proposed prediction offers higher accuracy and less side information, as motion vectors do not need to be encoded. Simulation results show that the bit-rate saving of the proposed scheme reaches up to 9.91% compared with H.264/AVC. Our scheme also outperforms the Template Matching Averaging (TMA) prediction counterpart by up to 8.06%.
Citations: 0
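A toy example (not the paper's recovery algorithm, which solves a general low-rank completion problem) of why low-rank structure pins down missing entries: in a rank-1 matrix M = u·vᵀ every 2×2 minor vanishes, so M[i][j]·M[l][k] = M[i][k]·M[l][j], and a single missing value can be read off from three observed ones.

```python
# Toy rank-1 completion: because every 2x2 minor of a rank-1 matrix is
# zero, a missing entry is determined exactly by observed entries:
#   M[i][j] = M[i][k] * M[l][j] / M[l][k]
# for any observed auxiliary row l and column k (pivot nonzero).

def complete_rank1(M, miss_i, miss_j):
    """Recover the single missing entry of a rank-1 matrix; all other
    entries are assumed observed."""
    l = next(r for r in range(len(M)) if r != miss_i)
    k = next(c for c in range(len(M[0])) if c != miss_j)
    return M[miss_i][k] * M[l][miss_j] / M[l][k]
```

General low-rank completion replaces this closed-form ratio with an optimization (e.g. nuclear-norm minimization), but the principle is the same: redundancy across correlated rows and columns determines the unobserved block — here, the prediction values — without sending motion vectors.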
Dynamic Resource Allocation for Event Processing in Surveillance Systems
D. Ahmed
Pub Date: 2012-07-09 · DOI: 10.1109/ICMEW.2012.74
Abstract: Allocating computing resources to the different tasks of a surveillance system has always been a major challenge. The problem becomes complicated when real-time computation and decision making are required, as the system cannot afford to process all sensory feeds and execute computationally expensive algorithms. In multi-modal surveillance systems, real-time event detection and understanding of a situation are crucial, so proper use of computing resources is necessary to control and manage an area under surveillance. This paper introduces a dynamic task scheduling technique that considers the available computing resources and real-time requirements according to the current surveillance context. The task scheduler determines the importance of each sensor with respect to its observation and surrounding context, and dynamically allocates CPU clock to each sensor's data streams so as to minimize the time between an event's occurrence and its detection. Simulation results show that the scheduler offers proper resource utilization, which is valuable for surveillance systems.
Citations: 0
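The core idea — giving each sensor stream CPU time in proportion to a context-dependent importance score — can be sketched as follows. The proportional rule and the example scores are our simplification, not the paper's actual scheduling policy.

```python
# Hypothetical sketch: divide a CPU clock budget across sensor streams
# in proportion to importance scores derived from the surveillance
# context. Scores and the proportional rule are illustrative only.

def allocate_cpu(importance, total_clock=1.0):
    """Map {sensor: importance score} -> {sensor: share of CPU clock}."""
    total = sum(importance.values())
    if total == 0:
        # No sensor currently important: split the budget evenly.
        return {s: total_clock / len(importance) for s in importance}
    return {s: total_clock * w / total for s, w in importance.items()}
```

A stream observing an active event would receive a high score and thus a large share of the clock, shrinking its event-detection latency at the expense of idle streams.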
Minimizing Video Retransmission Delay and Energy Consumption with Caching Routers
M. Mcgarry, Jesus Hernandez, R. Ferzli, V. Syrotiuk
Pub Date: 2012-07-09 · DOI: 10.1109/ICMEW.2012.25
Abstract: We investigate caching packets containing video at intermediary routers to reduce the delay and energy consumption of Automatic Repeat reQuest (ARQ) error recovery. We formulate two mathematical programs that select the optimal set of routers to be given caching ability, one minimizing energy consumption and the other minimizing retransmission delay; both programs have identical structure. We then solve them with a dynamic programming solution whose execution time grows polynomially in the size of the input parameters. Our performance analysis indicates that the optimal solution significantly outperforms several heuristic solutions.
Citations: 3
A New Texture Feature for Improved Food Recognition Accuracy in a Mobile Phone Based Dietary Assessment System
M. Rahman, M. Pickering, D. Kerr, C. Boushey, E. Delp
Pub Date: 2012-07-09 · DOI: 10.1109/ICMEW.2012.79
Abstract: Poor diet is one of the key determinants of an individual's risk of developing chronic diseases, and assessing what people eat is fundamental to establishing the link between diet and disease. Food records are considered the best approach for assessing energy intake; however, paper-based food recording is cumbersome and often inaccurate. Researchers have begun to explore how mobile devices can reduce the burden of recording nutritional intake: the integrated camera of a mobile phone can capture images of the food consumed, which are then processed to automatically identify the food items for record keeping. In such systems, accurate classification of the food items in these images is vital. In this paper we present a new method for generating texture features from food images and demonstrate that this feature provides greater food classification accuracy in a mobile phone based dietary assessment system.
Citations: 21
A Dense 3D Reconstruction Approach from Uncalibrated Video Sequences
L. Ling, I. Burnett, E. Cheng
Pub Date: 2012-07-09 · DOI: 10.1109/ICMEW.2012.108
Abstract: Current approaches for 3D reconstruction from image feature points are classed as sparse or dense techniques. Sparse approaches are insufficient for surface reconstruction, since only sparsely distributed feature points are available, while existing dense approaches require pre-calibrated camera orientation, which limits their applicability and flexibility. This paper proposes a one-stop 3D reconstruction solution that reconstructs a highly dense surface from an uncalibrated video sequence: the camera orientations and the surface are computed simultaneously from new dense point features using an approach motivated by Structure from Motion (SfM) techniques. The result is a flexible, automatic method with the simple interface of "videos to 3D model" — improvements that are essential for practical applications in 3D modeling and visualization. The reliability of the proposed algorithm is tested on various data sets, and its accuracy and performance are compared with both sparse and dense reconstruction benchmark algorithms.
Citations: 9
Creative Transformations of Personal Photographs
Yi Wu, K. Seshadrinathan, Wei Sun, M. E. Choubassi, J. Ratcliff, I. Kozintsev
Pub Date: 2012-07-09 · DOI: 10.1109/ICMEW.2012.87
Abstract: The popularity of mobile photography paves the way for new ways of viewing and interacting with personal media and for enabling a user's creative expression. In this paper, we describe an instantaneous and automatic method to localize the camera and segment foreground objects, such as people, from an input image, assuming knowledge of the environment in which the image was taken. Camera localization is performed by comparing multiple views of the 3D environment against the uncalibrated input image. Following localization, selected views of the 3D environment are aligned, color-mapped, and compared against the input image to segment the foreground content. We demonstrate results in two illustrative applications: a virtual game played between multiple users involving virtual projectiles, and a group shot of people who may not be available at the same time or place, composited against a background of their choice.
Citations: 0
Virtual Interactions: Can EEG Help Make the Difference with Real Interaction?
J. Rzepecki, Jonathan Delcourt, Matthieu Perreira Da Silva, P. Callet
Pub Date: 2012-07-09 · DOI: 10.1109/ICMEW.2012.33
Abstract: Science and technology progress quickly, yet the mouse and keyboard are still used to control multimedia devices. One of the factors limiting the adoption of gesture-based HCIs is detecting the user's intention to interact. This study takes a step in that direction using a consumer EEG headset, which records real-time data that can help identify the user's intention from his or her emotional state. For each subject, EEG responses to different stimuli are recorded; these data allow us to assess the potential of EEG-based intention detection. The findings are promising and, with proper implementation, should enable a new type of HCI device.
Citations: 4