2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW): Latest Publications

4K real time video streaming with SHVC decoder and GPAC player
2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Pub Date: 2014-07-14, DOI: 10.1109/ICMEW.2014.6890613
W. Hamidouche, Gildas Cocherel, J. L. Feuvre, M. Raulet, O. Déforges
Abstract: This paper presents the first 4Kp30 end-to-end video streaming demonstration based on the upcoming Scalable High Efficiency Video Coding (SHVC) standard. An optimized, parallel SHVC decoder is used within the GPAC player to decode and display the received SHVC layers in real time. The SHVC reference software model (SHM) encodes the original 4K video into two spatial scalability layers: a base layer at 1080p resolution and an enhancement layer at 2160p resolution. The SHVC bitstream is encapsulated into the MP4 file format with the GPAC multimedia library. On the server side, the GPAC player broadcasts the MP4 content over MPEG-2 TS. On the client side, the GPAC player receives the SHVC video packets, which are decoded by the SHVC decoder and then rendered in real time by the player. The GPAC player provides an interactive interface that allows switching between displaying the base and enhancement layers. (An illustrative sketch follows this entry.)
Citations: 11
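A minimal sketch of the layer dependency implied by the spatial scalability setup described above: the 2160p enhancement layer can only be decoded together with the 1080p base layer, while the base layer is decodable and displayable on its own. The names and structures below are purely illustrative assumptions, not the GPAC API or the authors' decoder.

```python
# Hypothetical sketch of client-side layer selection in a two-layer SHVC stream.
# Illustrative only; not the authors' implementation or the GPAC API.
from dataclasses import dataclass
from typing import List

@dataclass
class Layer:
    name: str               # e.g. "base" or "enhancement"
    resolution: str         # e.g. "1920x1080"
    depends_on: List[str]   # layer names required to decode this layer

LAYERS = {
    "base": Layer("base", "1920x1080", []),
    "enhancement": Layer("enhancement", "3840x2160", ["base"]),
}

def layers_to_decode(selected: str) -> List[str]:
    """Return the layers the decoder must process to display `selected`."""
    needed, stack = [], [selected]
    while stack:
        name = stack.pop()
        for dep in LAYERS[name].depends_on:
            if dep not in needed:
                stack.append(dep)
        if name not in needed:
            needed.append(name)
    # Decode dependencies first (base before enhancement).
    return sorted(needed, key=lambda n: len(LAYERS[n].depends_on))

if __name__ == "__main__":
    print(layers_to_decode("base"))         # ['base']
    print(layers_to_decode("enhancement"))  # ['base', 'enhancement']
```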
Enhancing the detection of concepts for visual lifelogs using contexts instead of ontologies
2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Pub Date: 2014-07-14, DOI: 10.1109/ICMEW.2014.6890570
Peng Wang, A. Smeaton, Yuchao Zhang, Bo Deng
Abstract: Automatic detection of semantic concepts in visual media is typically achieved by an automatic mapping from low-level features to higher-level semantics, and automatic detection within narrow domains has now reached a satisfactory performance level. In visual lifelogging, part of the quantified-self movement, wearable cameras can automatically record most aspects of daily living. The resulting images contain a diversity of everyday concepts, which severely degrades the performance of concept detection. In this paper, we present an algorithm based on non-negative matrix refactorization which exploits inherent relationships between everyday concepts in domains where context is more prevalent, such as lifelogging. Results of initial concept detection are factorized and adjusted according to their patterns of appearance and absence. In comparison to using an ontology to enhance concept detection, we use underlying contextual semantics to improve overall detection performance. Experiments demonstrate the efficacy of our algorithm. (See the code sketch after this entry.)
Citations: 4
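A hedged illustration of the core idea in the abstract above: factorize a matrix of initial per-image concept-detection scores so that co-occurrence patterns between everyday concepts smooth the noisy detections. The sketch below uses scikit-learn's generic NMF on synthetic data; it is an assumption-laden stand-in, not the authors' algorithm.

```python
# Minimal sketch: adjust noisy concept-detection scores via non-negative
# matrix factorization (illustrative only; synthetic data, generic sklearn NMF).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Rows = lifelog images, columns = everyday concepts; entries are initial
# detector confidence scores in [0, 1] (synthetic, for illustration).
n_images, n_concepts, n_factors = 200, 12, 4
true_patterns = rng.random((n_factors, n_concepts))   # latent "context" patterns
activations = rng.random((n_images, n_factors))       # per-image context mixture
scores = np.clip(activations @ true_patterns
                 + 0.1 * rng.random((n_images, n_concepts)), 0, 1)

# Factorize the noisy score matrix into non-negative factors W (image x factor)
# and H (factor x concept), then reconstruct to obtain adjusted scores.
model = NMF(n_components=n_factors, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(scores)
H = model.components_
adjusted = np.clip(W @ H, 0, 1)

print("relative reconstruction error:",
      np.linalg.norm(scores - adjusted) / np.linalg.norm(scores))
```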
Error resilient dual frame motion compensation for mobile communication
2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Pub Date: 2014-07-14, DOI: 10.1109/ICMEW.2014.6890532
Liu Da, Xu Long, Zhang Peng, Xiaobin Zhu
Abstract: In dual frame motion compensation (DFMC), one short-term reference frame and one long-term reference frame are used for motion compensation. This scheme can be exploited for error resilience, and a number of DFMC-based error-resilience schemes have appeared in the literature. In this paper, an error resilient jump update DFMC (JU-DFMC) with adaptive unequal error protection is proposed. First, a new error resilient prediction structure is presented, in which the reference frames are adaptively adjusted for different packet loss rates to reduce error propagation. Then, an end-to-end distortion model that accounts for data partitioning is derived and applied to macroblock (MB) level mode decision. Finally, an adaptive unequal error protection scheme is proposed, in which a frame-level rate-distortion cost determines how many times the header information is transmitted in a long-term reference frame. Experimental results show that the proposed method achieves better performance than previous schemes. (A simplified sketch of the reference-selection idea appears below.)
Citations: 0
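A highly simplified sketch of the kind of decision DFMC error resilience involves: for each macroblock, choose the short-term or long-term reference frame by minimizing an expected end-to-end distortion under a given packet-loss rate. The distortion model below is invented for illustration and is not the model derived in the paper.

```python
# Toy reference-frame decision for dual frame motion compensation (DFMC).
# The expected-distortion model and all numbers are illustrative assumptions.

def expected_distortion(d_source, d_conceal, p_loss, chain_depth):
    """Expected distortion = P(prediction chain intact) * source distortion
    + P(chain broken) * concealment distortion, where chain_depth is the number
    of consecutive frames that must arrive for prediction to stay intact."""
    p_intact = (1.0 - p_loss) ** chain_depth
    return p_intact * d_source + (1.0 - p_intact) * d_conceal

def choose_reference(d_short, d_long, d_conceal, p_loss,
                     short_depth=4, long_depth=1):
    """Short-term reference: better prediction (lower d_source) but a longer
    prediction chain; long-term reference: worse prediction, shorter chain."""
    cost_short = expected_distortion(d_short, d_conceal, p_loss, short_depth)
    cost_long = expected_distortion(d_long, d_conceal, p_loss, long_depth)
    return "short-term" if cost_short <= cost_long else "long-term"

if __name__ == "__main__":
    for p in (0.01, 0.05, 0.20):
        print(p, choose_reference(d_short=2.0, d_long=5.0, d_conceal=40.0, p_loss=p))
```

At low loss rates the short-term reference wins (better prediction); as the loss rate grows, the shorter prediction chain of the long-term reference becomes preferable, which is the intuition behind adapting the reference structure to the packet-loss rate.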
Developing “voice care”: Real-time methods for event recognition and localization based on acoustic cues
2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Pub Date: 2014-07-14, DOI: 10.1109/ICMEW.2014.6890676
Yi-Wen Liu, Hang-Ming Liang, Shung-You Lao, Che-Wei Wu, Hung-Kuang Hao, Fan-Jie Kung, Yu-Tse Ho, Pei-Yi Lee, S. Kang
Abstract: This paper presents methods for recognizing sounds in a living space and tracking the location of the sound sources. Algorithms were developed so that sound recognition and localization can both be performed in real time. The sound recognition method is based on Gaussian mixture modeling with outlier rejection. The sound source localization method is based on multiple signal classification (MUSIC) and borrows the idea of particle filtering to confine the estimation error. Estimates of the sound source location can be successively refined by Kalman filtering. The recognition method was tested on real recordings and achieved over 90% accuracy in distinguishing 8 classes of sounds while keeping both the false-acceptance and false-rejection rates below 20%. The localization method was tested in real time and demonstrated the capability to track a sound source moving at about 0.3 m/s. These results indicate that the methods, when integrated, can be deployed in the home for acoustic event detection. (An illustrative recognition sketch follows this entry.)
Citations: 4
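A minimal sketch of Gaussian-mixture-based sound classification with outlier rejection, the recognition component named in the abstract above. One mixture is fit per sound class on synthetic feature vectors, and a frame is rejected as unknown when its best class log-likelihood falls below a threshold. Feature extraction (e.g. MFCCs) and the MUSIC/Kalman localization stages are omitted; the threshold and dimensions are assumptions.

```python
# Sketch: per-class GMMs with a log-likelihood threshold for outlier rejection.
# Synthetic data; not the authors' trained models or feature pipeline.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
n_dim = 13  # e.g. 13 MFCC coefficients per frame (assumed)

# Synthetic training data: 3 sound classes with well-separated means.
train = {c: rng.normal(loc=3.0 * c, scale=1.0, size=(300, n_dim)) for c in range(3)}

models = {}
for c, feats in train.items():
    gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
    gmm.fit(feats)
    models[c] = gmm

def classify(frame, threshold=-40.0):
    """Return the most likely class, or None (outlier) if no class is likely enough."""
    scores = {c: m.score_samples(frame[None, :])[0] for c, m in models.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

print(classify(rng.normal(loc=3.0, scale=1.0, size=n_dim)))    # likely class 1
print(classify(rng.normal(loc=50.0, scale=1.0, size=n_dim)))   # far from all classes -> None
```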
Removing rain and snow in a single image using saturation and visibility features
2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Pub Date: 2014-07-14, DOI: 10.1109/ICMEW.2014.6890551
S. Pei, Yu-Tai Tsai, Chen-Yu Lee
Abstract: Rain and snow are two major obstacles when processing photographs captured outdoors in bad weather, and they degrade the performance of vision algorithms. Many methods have been proposed to reduce raindrops and snowflakes in video. However, removing rain and snow while preserving background detail in a single image is still challenging, since it is hard to detect which pixels belong to rain and snow. In this paper, we propose a new method based on saturation and visibility features. The results show that it achieves better performance than previous methods. (A rough sketch of this kind of cue-based detection appears below.)
Citations: 68
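A much-simplified sketch of the style of cue the abstract describes: raindrops and snowflakes tend to appear as bright, low-saturation pixels, so flag pixels with high value and low saturation and replace them with a median-filtered estimate of the background. The thresholds and the median-filter fallback are assumptions for illustration, not the authors' method.

```python
# Toy single-image rain/snow suppression using saturation and brightness cues.
# Illustrative assumptions throughout; not the paper's algorithm.
import numpy as np
from scipy.ndimage import median_filter

def remove_bright_low_saturation(img, sat_thresh=0.15, val_thresh=0.7, window=7):
    """img: float RGB array in [0, 1] with shape (H, W, 3)."""
    vmax = img.max(axis=2)
    vmin = img.min(axis=2)
    saturation = np.where(vmax > 0, (vmax - vmin) / np.maximum(vmax, 1e-6), 0.0)
    mask = (saturation < sat_thresh) & (vmax > val_thresh)  # candidate rain/snow pixels

    # Median-filter each channel and use it only where the mask fires.
    filtered = np.stack([median_filter(img[..., c], size=window) for c in range(3)], axis=2)
    out = img.copy()
    out[mask] = filtered[mask]
    return out, mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = np.full((64, 64, 3), 0.3)                      # flat gray background
    ys, xs = rng.integers(0, 64, 50), rng.integers(0, 64, 50)
    img[ys, xs] = 0.95                                   # synthetic bright "snowflakes"
    cleaned, mask = remove_bright_low_saturation(img)
    print("flagged pixels:", int(mask.sum()))
```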
SAGTA: Semi-automatic Ground Truth Annotation in crowd scenes
2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Pub Date: 2014-07-14, DOI: 10.1109/ICMEW.2014.6890539
Shuang Wu, Shibao Zheng, Hua Yang, Yawen Fan, Longfei Liang, Hang Su
Abstract: Ground truth is crucial for the performance evaluation of algorithms. Nevertheless, annotating ground truth manually is a tedious and time-consuming task, especially in crowd scenes. In this paper, we propose a novel semi-automatic tool called SAGTA (Semi-automatic Ground Truth Annotation Tool), which helps researchers annotate pedestrians easily and quickly in crowd scenes. First, users label pedestrians manually in a few key frames by drawing bounding boxes through SAGTA's GUI. Then, the annotations in the remaining frames are coarsely estimated by automatic interpolation based on a 3D linear motion assumption. The tool further refines the estimated annotations using ORB feature matching; this coarse-to-fine method makes the annotation process efficient. Afterwards, the refined annotations are manually verified and corrected to guarantee their accuracy. In addition, extra information (such as density, trajectories and occlusion relationships) can be inferred automatically and visualized. The proposed tool has been tested on PETS and real surveillance data sets. Experimental results demonstrate that SAGTA achieves a lower time cost than ViPER-GT, a widely used annotation tool. (An interpolation sketch follows this entry.)
Citations: 5
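A minimal sketch of the coarse interpolation step described in the abstract above: bounding boxes annotated in two key frames are linearly interpolated for the frames in between, to be refined later (in the paper, via ORB matching) and verified by the user. The 3D linear motion assumption is simplified here to direct 2D box interpolation, and all names are illustrative.

```python
# Sketch: linear bounding-box interpolation between two annotated key frames.
from dataclasses import dataclass

@dataclass
class Box:
    x: float
    y: float
    w: float
    h: float

def interpolate_boxes(frame_a: int, box_a: Box, frame_b: int, box_b: Box):
    """Yield (frame_index, Box) for every frame strictly between the key frames."""
    assert frame_b > frame_a
    span = frame_b - frame_a
    for f in range(frame_a + 1, frame_b):
        t = (f - frame_a) / span
        yield f, Box(
            x=(1 - t) * box_a.x + t * box_b.x,
            y=(1 - t) * box_a.y + t * box_b.y,
            w=(1 - t) * box_a.w + t * box_b.w,
            h=(1 - t) * box_a.h + t * box_b.h,
        )

if __name__ == "__main__":
    for frame, box in interpolate_boxes(0, Box(10, 20, 40, 80), 5, Box(60, 25, 44, 82)):
        print(frame, box)
```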
Massivizing online games using cloud computing: A vision
2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Pub Date: 2014-07-14, DOI: 10.1109/ICMEW.2014.6890684
A. Iosup, S. Shen, Yong Guo, Stefan Hugtenburg, Jesse Donkervliet, R. Prodan
Abstract: Online gaming, a large market with hundreds of millions of active players, is still struggling to scale without risky investments in infrastructure. In this work, we propose a cloud-based platform to massivize online gaming, addressing the challenges and opportunities of scaling on demand while paying only for what is used. We discuss the major aspects of cloud-based gaming, virtual-world management, game-data processing, and game-content generation.
Citations: 2
Collaborative recommendation of ambient media services
2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Pub Date: 2014-07-14, DOI: 10.1109/ICMEW.2014.6890715
M. A. Hossain, Atif Alamri, Mohammed F. Alhamid, Majdi Rawashdeh, Awny Alnusair
Abstract: Ambient intelligence environments are technologically augmented surroundings that aim to provide personalized services to users based on their context. Identifying these services for users has become an increasingly challenging task, and the overwhelming number of services in the ambient environment makes their selection and management even harder. To address this problem, researchers have proposed several techniques, such as creating a user model and selecting services based on it; applying a rule-based approach to match relevant services; and combining a user's profile, context, interaction history and service reputation to select the best services. Most of these techniques derive a user's preference from his or her own interaction and profile alone and do not exploit collaborative selection. In this paper, we propose a collaborative recommendation technique that selects services for a user based on the interactions and profiles of multiple users. We demonstrate the potential of the proposed approach through a preliminary experiment. (A small collaborative-filtering sketch appears below.)
Citations: 3
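A small, hypothetical sketch of the collaborative selection idea in the abstract above: recommend services to a target user by weighting other users' interactions with their similarity to that user (user-based collaborative filtering with cosine similarity). The interaction matrix is invented and this is not the authors' system.

```python
# Sketch: user-based collaborative filtering over a user x service matrix.
# Synthetic data and a generic similarity measure; illustrative only.
import numpy as np

# Rows = users, columns = ambient media services; entries are interaction
# scores (e.g. usage counts or ratings), 0 = never used.
ratings = np.array([
    [5, 0, 3, 0, 1],
    [4, 0, 0, 2, 1],
    [0, 5, 4, 0, 0],
    [1, 0, 5, 4, 0],
], dtype=float)

def cosine_sim(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def recommend(user_idx, top_k=2):
    """Predict scores for services the user has not used, from similar users."""
    target = ratings[user_idx]
    sims = np.array([cosine_sim(target, ratings[u]) if u != user_idx else 0.0
                     for u in range(ratings.shape[0])])
    predicted = (sims[:, None] * ratings).sum(axis=0) / (np.abs(sims).sum() + 1e-9)
    predicted[target > 0] = -np.inf          # do not re-recommend used services
    return np.argsort(predicted)[::-1][:top_k]

print("recommended service indices for user 0:", recommend(0))
```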
Concurrent image query using local random walk with restart on large scale graphs
2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Pub Date: 2014-07-14, DOI: 10.1109/ICMEW.2014.6890589
Yinglong Xia, Jui-Hsin Lai, Lifeng Nai, Ching-Yung Lin
Abstract: Efficient image query is a fundamental challenge in many large-scale multimedia applications, especially when handling many queries concurrently. In this paper, we propose a novel approach called graph local random walk for high-performance concurrent image query. Specifically, we organize a massive image set into a large-scale graph, stored in a graph database, according to the similarity between images. A heuristic method maps each query image to some vertex in the graph, followed by a local search that refines the query results using a variant of local random walk on the graph. The local random walk is essentially a weighted partial traversal of local subgraphs that finds a better match for the query image. We organize the graph in a parallelization-friendly way, so that the partial graph traversals for local random walk can be performed concurrently, taking advantage of the multithreading capability of modern processors. We implemented the proposed method on state-of-the-art multicore platforms. The experimental results show that the graph local random walk approach outperforms baseline methods in both throughput and scalability. (A basic random-walk-with-restart sketch follows this entry.)
Citations: 4
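A minimal sketch of random walk with restart (RWR) on a small image-similarity graph, the basic scoring primitive behind the local query refinement described above. The tiny dense adjacency matrix and parameters are assumptions; the paper's actual contribution (local, partial traversals run concurrently on a large graph database) is not reproduced here.

```python
# Sketch: random walk with restart on an image-similarity graph (illustrative).
import numpy as np

def random_walk_with_restart(adj, seed, restart=0.15, tol=1e-8, max_iter=1000):
    """adj: (n, n) non-negative similarity matrix; seed: index of the query vertex."""
    n = adj.shape[0]
    # Column-normalize so each column sums to 1 (transition probabilities).
    col_sums = adj.sum(axis=0, keepdims=True)
    P = adj / np.maximum(col_sums, 1e-12)
    e = np.zeros(n)
    e[seed] = 1.0
    r = e.copy()
    for _ in range(max_iter):
        r_next = (1 - restart) * (P @ r) + restart * e
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next
    return r

if __name__ == "__main__":
    # 5 images: vertices 0-1-2 form one similar cluster, 3-4 another.
    adj = np.array([
        [0, 1, 1, 0,   0],
        [1, 0, 1, 0,   0],
        [1, 1, 0, 0.1, 0],
        [0, 0, 0.1, 0, 1],
        [0, 0, 0, 1,   0],
    ], dtype=float)
    scores = random_walk_with_restart(adj, seed=0)
    print("ranking (most relevant to image 0 first):", np.argsort(scores)[::-1])
```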
Hybrid multi-resolution analysis and weighted averaging of overcomplete orthogonal transform scheme for digital image denoising
2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Pub Date: 2014-07-14, DOI: 10.1109/ICMEW.2014.6890575
Jingming Xu, Fuhuei Lin
Abstract: Noise reduction is a crucial topic in digital image quality enhancement, from both theoretical and applied perspectives, and has attracted extensive research effort over decades. Weighted averaging with overcomplete orthogonal transforms (WAOOT) has shown its ability to effectively remove i.i.d. noise while maintaining edge sharpness. In this paper, the weights of the overcomplete transform set are shown to depend on the noise covariance matrix for the non-i.i.d. noise found in real digital images. A Gaussian covariance model is proposed to describe the noise correlations, and a Gaussian pyramidal multi-resolution analysis architecture is built to decorrelate the non-i.i.d. noise and reduce it by applying the WAOOT algorithm at each layer. The simplified WAOOT solution is further refined by global hard-threshold adaptation iterations and by discriminating the transform block size on edge pixels. Simulation results show that the proposed scheme achieves substantial improvements in both objective and subjective denoised image quality over state-of-the-art algorithms. (A simplified averaging sketch follows this entry.)
Citations: 0
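A simplified illustration of denoising by averaging over an overcomplete set of orthogonal transforms: the same noisy signal is shifted, hard-thresholded in an orthogonal DCT domain, inverse-transformed, unshifted, and the results are averaged. Uniform weights, a 1-D signal and a fixed threshold are assumptions here; the paper's noise-covariance-dependent weights and multi-resolution pyramid are not reproduced.

```python
# Sketch: shift-averaged orthogonal DCT hard thresholding (cycle-spinning style),
# as a toy stand-in for weighted averaging of overcomplete orthogonal transforms.
import numpy as np
from scipy.fft import dct, idct

def denoise_overcomplete(signal, shifts=8, threshold=0.5):
    estimates = []
    for s in range(shifts):
        shifted = np.roll(signal, s)
        coeffs = dct(shifted, norm="ortho")          # orthogonal transform
        coeffs[np.abs(coeffs) < threshold] = 0.0     # hard thresholding
        est = idct(coeffs, norm="ortho")
        estimates.append(np.roll(est, -s))           # undo the shift
    return np.mean(estimates, axis=0)                # uniform weighted averaging

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 256)
    clean = np.sin(2 * np.pi * 4 * t)
    noisy = clean + 0.2 * rng.standard_normal(256)
    denoised = denoise_overcomplete(noisy)
    print("noisy MSE:   ", float(np.mean((noisy - clean) ** 2)))
    print("denoised MSE:", float(np.mean((denoised - clean) ** 2)))
```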