{"title":"Data set for fall events and daily activities from inertial sensors","authors":"O. Ojetola, E. Gaura, J. Brusey","doi":"10.1145/2713168.2713198","DOIUrl":"https://doi.org/10.1145/2713168.2713198","url":null,"abstract":"Wearable sensors are becoming popular for remote health monitoring as technology improves and cost reduces. One area in which wearable sensors are increasingly being used is falls monitoring. The elderly, in particular are vulnerable to falls and require continuous monitoring. Indeed, many attempts, with insufficient success have been made towards accurate, robust and generic falls and Activities of Daily Living (ADL) classification. A major challenge in developing solutions for fall detection is access to sufficiently large data sets. This paper presents a description of the data set and the experimental protocols designed by the authors for the simulation of falls, near-falls and ADL. Forty-two volunteers were recruited to participate in an experiment that involved a set of scripted protocols. Four types of falls (forward, backward, lateral left and right) and several ADL were simulated. This data set is intended for the evaluation of fall detection algorithms by combining daily activities and transitions from one posture to another with falls. In our prior work, machine learning based fall detection algorithms were developed and evaluated. Results showed that our algorithm was able to discriminate between falls and ADL with an F-measure of 94%.","PeriodicalId":202494,"journal":{"name":"Proceedings of the 6th ACM Multimedia Systems Conference","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130071301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Merge and forward: self-organized inter-destination multimedia synchronization","authors":"Benjamin Rainer, Stefan Petscharnig, C. Timmerer","doi":"10.1145/2713168.2713185","DOIUrl":"https://doi.org/10.1145/2713168.2713185","url":null,"abstract":"Social networks have become ubiquitous and with these new possible ways for social communication and experiencing multimedia together the traditional TV scenario drifts more and more towards a distributed social experience. Asynchronism in the multimedia playback of the users may have a significant impact on the acceptability of systems providing the distributed multimedia experience. The synchronization needed in such systems is called Inter-Destination Multimedia Synchronization. In this paper we propose a demo that implements IDMS by the means of our self-organized and distributed approach assisted by pull-based streaming. We also provide a video of the planned demonstration and provide the mobile application as open source licensed under the GNU LGPL.","PeriodicalId":202494,"journal":{"name":"Proceedings of the 6th ACM Multimedia Systems Conference","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131180160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MMT+AVR: enabling collaboration in augmented virtuality/reality using ISO's MPEG media transport","authors":"K. Venkatraman, Yuan Tian, S. Raghuraman, B. Prabhakaran, Nhut Nguyen","doi":"10.1145/2713168.2713170","DOIUrl":"https://doi.org/10.1145/2713168.2713170","url":null,"abstract":"Augmented Reality (AR) and Augmented Virtuality (AV) systems have been used in various fields such as entertainment, broadcasting, gaming [1], etc. Collaborative AR or AV (CAR/CAV) systems are a special kind of such system in which the interaction happens through the exchange of multi-modal data between multiple users/sites. Multiple sensors capture the real objects and enable interaction with shared virtual objects in a customizable virtual environment. Haptic devices can be added to introduce force feedback when the virtual objects are manipulated. These applications are demanding in terms of network resources to support low latency media delivery and media source switching similar to broadcast applications. Enabling real time interaction with multiple modalities with high volume data requires an advanced media transport protocol that supports low latency media delivery and fast media source (channel) switching. To enable such collaboration over a stochastic network like the Internet requires a combination of technologies from data design, synchronization to real time media delivery. MPEG Media Transport (MMT) [ISO/IEC 23008-1] is a new standard suite of protocols designed to work with demanding, real-time interactive multimedia applications, typically in the context of one-to-one and one-to-many communication. In this paper, we identify the augmentations that are required for the many-to-many nature of CAR/CAV applications and propose MMT+AVR as a middle ware solution for use in CAV applications. Through an example CAV application implemented on top of MMT+AVR, we show how it provides efficient support for developing CAV applications with ease.","PeriodicalId":202494,"journal":{"name":"Proceedings of the 6th ACM Multimedia Systems Conference","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122717549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SmoothCache 2.0: CDN-quality adaptive HTTP live streaming on peer-to-peer overlays","authors":"Roberto Roverso, Riccardo Reale, Sameh El-Ansary, Seif Haridi","doi":"10.1145/2713168.2713182","DOIUrl":"https://doi.org/10.1145/2713168.2713182","url":null,"abstract":"In recent years, adaptive HTTP streaming protocols have become the de-facto standard in the industry for the distribution of live and video-on-demand content over the Internet. This paper presents SmoothCache 2.0, a distributed cache platform for adaptive HTTP live streaming content based on peer-to-peer (P2P) overlays. The contribution of this work is twofold. From a systems perspective, to the best of our knowledge, it is the only P2P platform which supports recent live streaming protocols based on HTTP as a transport and the concept of adaptive bitrate switching. From an algorithmic perspective, the system describes a novel set of overlay construction and prefetching techniques that realize: i) substantial savings in terms of the bandwidth load on the source of the stream, and ii) CDN-quality user experience in terms of playback latency and the watched bitrate. In order to support our claims, we conduct a methodical evaluation on thousands of real consumer machines.","PeriodicalId":202494,"journal":{"name":"Proceedings of the 6th ACM Multimedia Systems Conference","volume":"207 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121193598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scaling virtual camera services to a large number of users","authors":"V. Reddy, Ragnar Langseth, H. Stensland, C. Griwodz, P. Halvorsen, Dag Johansen","doi":"10.1145/2713168.2713189","DOIUrl":"https://doi.org/10.1145/2713168.2713189","url":null,"abstract":"By processing video footage from a camera array, one can easily make wide-field-of-view panorama videos. From the single panorama video, one can further generate multiple virtual cameras supporting personalized views to a large number of users based on only the few physical cameras in the array. However, giving personalized services to large numbers of users potentially introduces both bandwidth and processing bottlenecks, depending on where the virtual camera is processed. In this demonstration, we present a system that address the large cost of transmitting entire panorama video to the end-user where the user creates the virtual views on the client device. Our approach is to divide the panorama into tiles, each encoded in multiple qualities. Then, the panorama video tiles are retrieved by the client in a quality (and thus bit rate) depending on where the virtual camera is pointing, i.e., the video quality of the tile changes dynamically according to the user interaction. Our initial experiments indicate that there is a large potential of saving bandwidth on the cost of trading quality of in areas of the panorama frame not used for the extraction of the virtual view.","PeriodicalId":202494,"journal":{"name":"Proceedings of the 6th ACM Multimedia Systems Conference","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131741138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stanford I2V: a news video dataset for query-by-image experiments","authors":"A. Araújo, J. Chaves, David M. Chen, Roland Angst, B. Girod","doi":"10.1145/2713168.2713197","DOIUrl":"https://doi.org/10.1145/2713168.2713197","url":null,"abstract":"Reproducible research in the area of visual search depends on the availability of large annotated datasets. In this paper, we address the problem of querying a video database by images that might share some contents with one or more video clips. We present a new large dataset, called Stanford I2V. We have collected more than 3; 800 hours of newscast videos and annotated more than 200 ground-truth queries. In the following, the dataset is described in detail, the collection methodology is outlined and retrieval performance for a benchmark algorithm is presented. These results may serve as a baseline for future research and provide an example of the intended use of the Stanford I2V dataset. The dataset can be downloaded at http://purl.stanford.edu/zx935qw7203.","PeriodicalId":202494,"journal":{"name":"Proceedings of the 6th ACM Multimedia Systems Conference","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128268671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Video BenchLab: an open platform for realistic benchmarking of streaming media workloads","authors":"Patrick Pegus, E. Cecchet, P. Shenoy","doi":"10.1145/2713168.2723145","DOIUrl":"https://doi.org/10.1145/2713168.2723145","url":null,"abstract":"In this paper, we present an open, flexible and realistic benchmarking platform named Video BenchLab to measure the performance of streaming media workloads. While Video BenchLab can be used with any existing media server, we provide a set of tools for researchers to experiment with their own platform and protocols. The components include a MediaDrop video server, a suite of tools to bulk insert videos and generate streaming media workloads, a dataset of freely available video and a client runtime to replay videos in the native video players of real Web browsers such as Firefox, Chrome and Internet Explorer. We define simple metrics that are able to capture the quality of video playback and identify issues that can happen during video replay. Finally, we provide a Dashboard to manage experiments, collect results and perform analytics to compare performance between experiments. We present a series of experiments with Video BenchLab to illustrate how the video specific metrics can be used to measure the user perceived experience in real browsers when streaming videos. We also show Internet scale experiments by deploying clients in data centers distributed all over the globe. All the software, datasets, workloads and results used in this paper are made freely available on SourceForge for anyone to reuse and expand.","PeriodicalId":202494,"journal":{"name":"Proceedings of the 6th ACM Multimedia Systems Conference","volume":" 487","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113946732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Congestion-aware MAC layer adaptation to improve video teleconferencing over wi-fi","authors":"Wei Chen, Liangping Ma, Chien-Chung Shen","doi":"10.1145/2713168.2713173","DOIUrl":"https://doi.org/10.1145/2713168.2713173","url":null,"abstract":"In wireless networks such as those based on IEEE 802.11, packet losses due to fading and interference are often misinterpreted as indications of congestion, causing unnecessary decrease in the data sending rate due to congestion control at higher layer protocols. For delay-constrained applications such as video teleconferencing, packet losses may result in excessive artifacts or freeze in the decoded video. We propose a simple and yet effective mechanism to detect and reduce channel-caused packet losses by adjusting the retry limit parameter of the IEEE 802.11 protocol. Since retry limit is left configurable in the IEEE 802.11 standard, and does not require cross-layer coordination, our scheme can be easily implemented and incrementally deployed. Experimental results of applying the proposed scheme to a WebRTC-based realtime video communication prototype show significant performance gain compared to the case where retry limit is configured statically.","PeriodicalId":202494,"journal":{"name":"Proceedings of the 6th ACM Multimedia Systems Conference","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124276923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"YouTube live and Twitch: a tour of user-generated live streaming systems","authors":"Karine Pires, G. Simon","doi":"10.1145/2713168.2713195","DOIUrl":"https://doi.org/10.1145/2713168.2713195","url":null,"abstract":"User-Generated live video streaming systems are services that allow anybody to broadcast a video stream over the Internet. These Over-The-Top services have recently gained popularity, in particular with e-sport, and can now be seen as competitors of the traditional cable TV. In this paper, we present a dataset for further works on these systems. This dataset contains data on the two main user-generated live streaming systems: Twitch and the live service of YouTube. We got three months of traces of these services from January to April 2014. Our dataset includes, at every five minutes, the identifier of the online broadcaster, the number of people watching the stream, and various other media information. In this paper, we introduce the dataset and we make a preliminary study to show the size of the dataset and its potentials. We first show that both systems generate a significant traffic with frequent peaks at more than 1 Tbps. Thanks to more than a million unique uploaders, Twitch is in particular able to offer a rich service at anytime. Our second main observation is that the popularity of these channels is more heterogeneous than what have been observed in other services gathering user-generated content.","PeriodicalId":202494,"journal":{"name":"Proceedings of the 6th ACM Multimedia Systems Conference","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116014129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Depth-disparity calibration for augmented reality on binocular optical see-through displays","authors":"Wanmin Wu, I. Tosic, K. Berkner, N. Balram","doi":"10.1145/2713168.2713171","DOIUrl":"https://doi.org/10.1145/2713168.2713171","url":null,"abstract":"We present a study of depth-disparity calibration for augmented reality applications using binocular optical see-through displays. Two techniques were proposed and compared. The \"paired-eyes\" technique leverages the Panum's fusional area to help viewer find alignment between the virtual and physical objects. The \"separate-eyes\" technique eliminates the need of binocular fusion and involves using both eyes sequentially to check the virtual-physical object alignment on retinal images. We conducted a user study to measure the calibration results and assess the subjective experience of users with the proposed techniques.","PeriodicalId":202494,"journal":{"name":"Proceedings of the 6th ACM Multimedia Systems Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125272549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}