A Secure P2P Architecture for Video Distribution
F. A. López-Fuentes, Carlos Alberto Orta-Cruz
WISMM '14, published 2014-11-07. DOI: 10.1145/2661714.2661731
Abstract: Video demand has increased significantly in recent years, as has the number of sites providing this type of service, and video requests now come from a wide range of devices. Video providers therefore use coding techniques that adapt the video both to variable network conditions and to the heterogeneity of devices. The vast majority of video applications are based on the client-server model, which makes system maintenance very expensive. An alternative to the client-server model is P2P (peer-to-peer) networking, which offers attractive features for video broadcasting such as scalability and a low cost of deployment. A major limitation of P2P infrastructures for content distribution, however, is security, since most sites do not consider authentication methods or content protection. This paper proposes a P2P architecture for video distribution that uses scalable video coding together with security strategies such as encryption and authentication.
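The security building blocks named in the abstract are encryption and authentication. The sketch below only illustrates combining the two for a shared video chunk, under assumed details (a pre-shared symmetric key and per-chunk AES-GCM via the third-party cryptography package); it is not the authors' protocol.

```python
# Hypothetical sketch (not the paper's scheme): encrypt and authenticate one
# video chunk before sharing it with peers. Names such as chunk_id are illustrative.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def protect_chunk(key: bytes, chunk_id: int, payload: bytes) -> tuple[bytes, bytes]:
    """Encrypt a chunk and bind it to its chunk_id so peers can verify integrity."""
    nonce = os.urandom(12)                 # unique per chunk
    aad = chunk_id.to_bytes(8, "big")      # authenticated but not encrypted
    ciphertext = AESGCM(key).encrypt(nonce, payload, aad)
    return nonce, ciphertext

def verify_chunk(key: bytes, chunk_id: int, nonce: bytes, ciphertext: bytes) -> bytes:
    """Decrypt and authenticate; raises InvalidTag if the chunk was tampered with."""
    aad = chunk_id.to_bytes(8, "big")
    return AESGCM(key).decrypt(nonce, ciphertext, aad)

key = AESGCM.generate_key(bit_length=256)
nonce, ct = protect_chunk(key, chunk_id=42, payload=b"\x00" * 1024)
assert verify_chunk(key, 42, nonce, ct) == b"\x00" * 1024
```
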
Monitoring of User Generated Video Broadcasting Services
Denny Stohr, Stefan Wilk, W. Effelsberg
WISMM '14, published 2014-11-07. DOI: 10.1145/2661714.2661726
Abstract: Mobile video broadcasting services let users instantly share content from their handhelds with a large audience over the Internet. However, data caps in cellular contracts and limited upload capabilities restrict the adoption of such services, and the quality of the streams is often reduced by the limited skills of the recording users and the technical constraints of their capture devices. Our research focuses on large-scale events at which dozens of users record video in parallel; in many cases, the available network infrastructure cannot upload all of these streams simultaneously. Deciding how to transmit them appropriately requires suitable monitoring of the video generation process. For this scenario, we propose a measurement framework that allows Internet-scale mobile broadcasting services to deliver samples in an optimized way. Our framework architecture analyzes three zones for effectively monitoring user-generated video: besides classical Quality of Service metrics on the network state, video quality indicators and additional auxiliary sensor information are gathered. The aim of the framework is an efficient coordination of devices and their uploads based on the currently observed system state.
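The three monitoring zones named in the abstract are network QoS, video quality indicators, and auxiliary sensors. A minimal sketch of how one per-device monitoring sample covering those zones might be structured and turned into an upload decision is shown below; all field names and the scoring heuristic are assumptions, not taken from the paper.

```python
# Hypothetical monitoring sample for one recording device, covering the three
# zones named in the abstract: network QoS, video quality, and auxiliary sensors.
# Field names and the upload-priority heuristic are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class MonitoringSample:
    device_id: str
    uplink_kbps: float        # network zone: measured upload throughput
    rtt_ms: float             # network zone: round-trip time
    sharpness: float          # video zone: 0..1 quality indicator
    shake_level: float        # sensor zone: accelerometer-based shakiness, 0..1
    battery_pct: float        # sensor zone: remaining battery

    def upload_priority(self) -> float:
        """Higher score = more worthwhile to upload right now (toy heuristic)."""
        quality = self.sharpness * (1.0 - self.shake_level)
        feasibility = min(self.uplink_kbps / 2000.0, 1.0) * (self.battery_pct / 100.0)
        return quality * feasibility

samples = [
    MonitoringSample("cam-a", uplink_kbps=1800, rtt_ms=60, sharpness=0.9, shake_level=0.2, battery_pct=80),
    MonitoringSample("cam-b", uplink_kbps=400, rtt_ms=180, sharpness=0.6, shake_level=0.5, battery_pct=30),
]
best = max(samples, key=MonitoringSample.upload_priority)
print(best.device_id)  # cam-a
```
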
Bridging the User Intention Gap: an Intelligent and Interactive Multidimensional Music Search Engine
Shenggao Zhu, Jingli Cai, Jiangang Zhang, Zhonghua Li, Ju-Chiang Wang, Ye Wang
WISMM '14, published 2014-11-07. DOI: 10.1145/2661714.2661720
Abstract: Music is inherently abstract and multidimensional, yet existing music search engines are usually either inconvenient or too complicated for users to formulate multidimensional queries, leading to an intention gap between users' music information needs and the queries they enter. In this paper, we present a novel content-based music search engine, the Intelligent & Interactive Multidimensional mUsic Search Engine (i2MUSE), which enables users to enter multidimensional music queries efficiently and effectively. Six musical dimensions are explored in this study: tempo, beat strength, genre, mood, instrument, and vocal. Users can begin a query from any dimension and interact with the system to refine it. Once the parameters of some dimensions have been set, i2MUSE intelligently highlights suggested parameters and grays out unsuggested ones for every other dimension, helping users express their intentions and avoid parameter conflicts. In addition, i2MUSE shows in real time the percentage of matching tracks in the database, and users can set the relative weight of each specified dimension. A pilot user study with 30 subjects validated the effectiveness and usability of i2MUSE.
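To make the interaction concrete, the following toy sketch filters a small catalog with a partially specified multidimensional query, reports the share of matching tracks, and lists which values of a still-unset dimension would keep at least one match (the basis for suggesting versus graying out parameters). The catalog and dimension values are invented for illustration.

```python
# Toy illustration of a multidimensional query over a small catalog: compute the
# percentage of matching tracks and, for an unset dimension, which values would
# still return results (candidates to suggest rather than gray out).
catalog = [
    {"tempo": "fast", "genre": "rock", "mood": "happy",   "vocal": "male"},
    {"tempo": "slow", "genre": "jazz", "mood": "relaxed", "vocal": "none"},
    {"tempo": "fast", "genre": "pop",  "mood": "happy",   "vocal": "female"},
    {"tempo": "slow", "genre": "rock", "mood": "sad",     "vocal": "male"},
]

def match(track: dict, query: dict) -> bool:
    return all(track[dim] == val for dim, val in query.items())

query = {"tempo": "fast", "mood": "happy"}                  # two dimensions set so far
hits = [t for t in catalog if match(t, query)]
print(f"matched: {100 * len(hits) / len(catalog):.0f}%")    # matched: 50%

# Values of the still-unset "genre" dimension that keep at least one result.
suggested_genres = {t["genre"] for t in hits}
print(sorted(suggested_genres))                             # ['pop', 'rock']
```
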
Towards Storytelling by Extracting Social Information from OSN Photo's Metadata
M. Saini, Fatimah Al-Zamzami, Abdulmotaleb El Saddik
WISMM '14, published 2014-11-07. DOI: 10.1145/2661714.2661721
Abstract: The popularity of online social networks (OSNs) is growing rapidly. People share their experiences with friends and relatives through multimedia such as images, video, and text, and the amount of shared multimedia is growing likewise. This large amount of multimedia data on OSNs contains a snapshot of a user's life, and it can be crawled to build stories about individuals. However, the information needed for a story, such as events and pictures, is not fully available on the user's own profile: while part of it can be retrieved from the user's own timeline, a large amount of event and multimedia information is only available on friends' profiles. Because the number of friends can be very large, in this work we focus on identifying a subset of friends for enriching the story data. We explore social relationships from a multimedia perspective and propose a framework that builds stories using information from multiple profiles. To the best of our knowledge, this is the first work on building stories from multiple OSN profiles. Experimental results show that the proposed method recovers more information (events, locations, and photos) about individuals than traditional methods that rely on the user's own profile alone.
Automatic Video Intro and Outro Detection on Internet Television
Maryam Nematollahi, Xiao-Ping Zhang
WISMM '14, published 2014-11-07. DOI: 10.1145/2661714.2661729
Abstract: Content Delivery Networks aim to deliver multimedia content to end users with high reliability and speed, but transmission costs are very high due to the large volume of video data. To deliver bandwidth-intensive video cost-effectively, content providers have become interested in detecting redundant content that is most probably not of interest to users and offering the option to stop its delivery. In this work, we target the intro and outro (IO) segments of a video, which are traditionally duplicated in all episodes of a TV show and which most viewers fast-forward past to watch only the main story. Using computationally efficient features such as silence gaps, blank-screen transitions, and a histogram of shot boundaries, we develop a framework that identifies the intro and outro parts of a show. We test the proposed intro/outro detection methods on a large number of videos. Performance analysis shows that our algorithm delineates intro and outro transitions with detection rates of 82% and 76%, respectively, and an average error of less than 2.06 seconds.
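A minimal sketch of two of the listed cues, blank-screen frames and silence gaps, computed from raw arrays with NumPy is shown below; the thresholds and the synthetic inputs are illustrative assumptions, not the paper's parameters.

```python
# Illustrative feature extraction: flag blank (near-black) frames and silent
# audio windows, two of the cues the abstract lists for locating intros/outros.
# Thresholds and the synthetic inputs below are assumptions, not the paper's values.
import numpy as np

def blank_frames(frames: np.ndarray, lum_thresh: float = 10.0) -> np.ndarray:
    """frames: (n_frames, h, w) grayscale; True where mean luminance is near black."""
    return frames.reshape(len(frames), -1).mean(axis=1) < lum_thresh

def silent_windows(audio: np.ndarray, sr: int, win_s: float = 0.5,
                   rms_thresh: float = 0.01) -> np.ndarray:
    """audio: mono samples in [-1, 1]; True for each window whose RMS is below threshold."""
    win = int(sr * win_s)
    n = len(audio) // win
    windows = audio[: n * win].reshape(n, win)
    rms = np.sqrt((windows ** 2).mean(axis=1))
    return rms < rms_thresh

frames = np.random.randint(0, 255, size=(100, 72, 128)).astype(float)
frames[:5] = 0.0                                   # a 5-frame blank transition
audio = np.random.uniform(-0.5, 0.5, 44100 * 3)
audio[:44100] *= 0.001                             # first second is (almost) silent
print(blank_frames(frames)[:8])                    # first five frames flagged as blank
print(silent_windows(audio, 44100))                # first two windows flagged as silent
```
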
Student Performance Evaluation of Multimodal Learning via a Vector Space Model
Subhasree Basu, Yi Yu, Roger Zimmermann
WISMM '14, published 2014-11-07. DOI: 10.1145/2661714.2661723
Abstract: Multimodal learning, an effective method for helping students understand complex concepts, has attracted much research interest recently. Our motivation is intuitive: we want to evaluate student performance in multimodal learning over the Internet. We are developing a system for student performance evaluation that automatically collects student-generated multimedia data during online multimodal learning and analyzes student performance. As an initial step, we propose a vector space model for processing student-generated multimodal data, aiming to evaluate student performance by exploiting all annotation information. In particular, the area of a study material is represented as a 2-dimensional grid, and predefined attributes form an attribute space. Annotations generated by students are then mapped to a 3-dimensional indicator matrix, with two dimensions corresponding to object positions in the grid of the study material and the third dimension recording the attributes of objects. Recall, precision, and the Jaccard index are used as metrics to evaluate student performance, with the teacher's analysis as the ground truth. We applied our scheme to real datasets generated by students and teachers in two schools. The results are encouraging and confirm the effectiveness of the proposed approach to student performance evaluation in multimodal learning.
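The evaluation step amounts to comparing two binary 3-dimensional indicator matrices (grid row, grid column, attribute), the student's against the teacher's. A small sketch of the three metrics on such matrices follows; the matrix sizes and the simulated annotations are arbitrary.

```python
# Sketch of the evaluation step: recall, precision, and Jaccard index between a
# student's and the teacher's 3-D indicator matrices (grid_rows x grid_cols x attributes).
# Matrix sizes and the random annotations are arbitrary illustration values.
import numpy as np

rng = np.random.default_rng(0)
teacher = rng.random((8, 8, 5)) < 0.15          # ground-truth annotations
student = teacher.copy()
student &= rng.random(teacher.shape) < 0.8      # student misses some annotations
student |= rng.random(teacher.shape) < 0.02     # and adds a few spurious ones

tp = np.logical_and(student, teacher).sum()     # true positives
fp = np.logical_and(student, ~teacher).sum()    # false positives
fn = np.logical_and(~student, teacher).sum()    # false negatives

recall = tp / (tp + fn)
precision = tp / (tp + fp)
jaccard = tp / (tp + fp + fn)
print(f"recall={recall:.2f} precision={precision:.2f} jaccard={jaccard:.2f}")
```
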
Empirical Observation of User Activities: Check-ins, Venue Photos and Tips in Foursquare
Yi Yu, Suhua Tang, Roger Zimmermann, K. Aizawa
WISMM '14, published 2014-11-07. DOI: 10.1145/2661714.2661724
Abstract: Location-based social networking platforms such as Foursquare, popular instances of participatory sensing systems that collect heterogeneous information (such as tips and photos) about venues from users, have attracted much attention recently. In this paper, we study the distribution of this information and its relationships, based on a large dataset crawled from Foursquare consisting of 2,728,411 photos, 1,212,136 tips, and 148,924,749 check-ins at 190,649 venues, contributed by 508,467 users. We analyze the distributions of user-generated check-ins, venue photos, and venue tips, and show interesting category patterns and correlations among them. In addition, we make the following observations: i) venue photos in Foursquare make venues significantly more social and popular; ii) users share venue photos that are highly related to the food category; iii) the category dynamics of venue photo sharing follow patterns similar to those of venue tips and user check-ins; iv) users tend to share photos rather than tips. We distribute our data and source code on request for research purposes (email: yi.yu.yy@gmail.com).
Genre-based Analysis of Social Media Data on Music Listening Behavior: Are Fans of Classical Music Really Averse to Social Media?
M. Schedl, M. Tkalcic
WISMM '14, published 2014-11-07. DOI: 10.1145/2661714.2661717
Abstract: It is frequently presumed that lovers of Classical music are not present on social media. In this paper, we investigate whether this claim can be empirically verified. To this end, we compare two social media platforms, Last.fm and Twitter, and study the musical preferences of their respective users. We investigate two research hypotheses: (i) Classical music fans are more reluctant than listeners of other genres to use social media to report their listening habits, and (ii) there are correlations between the use of Last.fm and Twitter for reporting music listening behavior. Both hypotheses are verified, and substantial differences are found for Twitter users. The results of these investigations will help improve music recommendation systems for listeners with non-mainstream music taste.
The Influence of Audio Quality on the Popularity of Music Videos: A YouTube Case Study
Michael Schoeffler, J. Herre
WISMM '14, published 2014-11-07. DOI: 10.1145/2661714.2661725
Abstract: Video-sharing websites like YouTube host many music videos. Because the content is uploaded by users, the audio quality of these music videos can range from poor to very good. A previous study indicated that music videos are, in general, very popular among users. This paper addresses the question of whether the audio quality of music videos influences user ratings. We describe a generic system for measuring audio quality on video-sharing websites, which we implemented and deployed to evaluate the relationship between audio quality and video ratings on YouTube. The analysis of the results indicates that, contrary to popular expectation, the audio quality of a music video has surprisingly little influence on its appreciation by YouTube users.
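The core analysis relates a per-video audio quality score to an appreciation measure. A minimal sketch is shown below, assuming a like ratio as the appreciation proxy and a Spearman rank correlation via SciPy; the numbers are invented and the choice of statistic is an assumption, not necessarily the paper's.

```python
# Illustrative analysis step: rank-correlate per-video audio quality scores with
# an appreciation proxy (here, the like ratio). The data and the choice of
# Spearman correlation are assumptions for illustration, not the paper's setup.
from scipy.stats import spearmanr

audio_quality = [2.1, 3.5, 4.8, 1.9, 4.2, 3.0]            # e.g., 1 (poor) .. 5 (excellent)
like_ratio    = [0.91, 0.95, 0.93, 0.92, 0.96, 0.94]      # likes / (likes + dislikes)

rho, p_value = spearmanr(audio_quality, like_ratio)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f})")
# A rho near zero would be consistent with the paper's finding that audio
# quality has little influence on appreciation.
```
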
#nowplaying Music Dataset: Extracting Listening Behavior from Twitter
Eva Zangerle, M. Pichl, W. Gassler, Günther Specht
WISMM '14, published 2014-11-07. DOI: 10.1145/2661714.2661719
Abstract: The extraction of information from online social networks has become popular in both industry and academia, as these data sources enable innovative applications. In the areas of music recommender systems and music information retrieval, however, such data is hardly exploited. In this paper, we present the #nowplaying dataset, which leverages social media to create a diverse and constantly updated dataset describing the music listening behavior of users. For its creation we rely on Twitter, which is frequently used to post which music the author is currently listening to. From such tweets we extract track and artist information along with further metadata. The dataset currently comprises 49 million listening events, 144,011 artists, 1,346,203 tracks, and 4,150,615 users, which makes it considerably larger than existing datasets.
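A toy sketch of the extraction idea follows, assuming tweets use the common "#nowplaying <track> by <artist>" phrasing; the regex and sample tweets are illustrative, and the dataset's actual pipeline additionally resolves further metadata.

```python
# Toy sketch of the extraction idea: pull track and artist out of tweets that use
# the common "#nowplaying <track> by <artist>" phrasing. The regex and the sample
# tweets are illustrative; the dataset's actual pipeline is more involved.
import re

PATTERN = re.compile(
    r"#nowplaying\s+(?P<track>.+?)\s+by\s+(?P<artist>.+?)(?:\s+[#@].*)?$",
    re.IGNORECASE,
)

tweets = [
    "#nowplaying Paranoid Android by Radiohead #music",
    "#NowPlaying Take Five by The Dave Brubeck Quartet",
    "just walked the dog, nice weather today",
]

for tweet in tweets:
    m = PATTERN.search(tweet)
    if m:
        print(m.group("track"), "-", m.group("artist"))
# Paranoid Android - Radiohead
# Take Five - The Dave Brubeck Quartet
```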