{"title":"Fifty shades of HDR","authors":"A. Chalmers, B. Karr, R. Suma, K. Debattista","doi":"10.1109/DMIAF.2016.7574902","DOIUrl":"https://doi.org/10.1109/DMIAF.2016.7574902","url":null,"abstract":"From relatively unknown, just 5 years ago, High Dynamic Range (HDR) video is now having a major impact on most aspects of imaging. Although one of the five components of the specification for UHDTV, ITU-R Recommendation BT.2020 in 2012, it is only when it became apparent that HDR could help accelerate the slow penetration of 4K into the TV and home-cinema market, that HDR suddenly started to gain significant attention. But what exactly is HDR? Dynamic range is defined as the difference between the largest and smallest useable signal. In photography this has meant the luminance range of the scene being photographed. However, as HDR grows as a “marketing tool” this definition is becoming less “black & white”. This paper considers the different ways in which the term HDR is now being exploited; the challenges of achieving a complete efficient HDR pipeline from capture to display for a variety of applications; and, what could be done to help ensure HDR algorithms are future proof as HDR technology rapidly improves.","PeriodicalId":404025,"journal":{"name":"2016 Digital Media Industry & Academic Forum (DMIAF)","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115062331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A multisensory datafusion-based 3D plank-coaching system","authors":"Longyu Zhang, Haiwei Dong, Abdulmotaleb El Saddik","doi":"10.1109/DMIAF.2016.7574922","DOIUrl":"https://doi.org/10.1109/DMIAF.2016.7574922","url":null,"abstract":"Exercises have become an important part of many people's daily life. However, inappropriate exercises cannot help exercisers build desired muscles, and may even get them hurt. In this paper, we proposed a 3D plank-coaching system to guide plank exercisers through fusing multimodal data and providing proper haptic feedback to them. Our proposed system can measure and fuse the user's muscle movements, posture, and several physiological parameters information, which are collected from Electromygraphy (EMG) sensors, Kinect v2 sensors, Electrocardiography (ECG) sensors, and thermometer. Based on comparison results between the user's fused data and the standard data, our system can activate the corresponding vibrotactile actuators, embedded in our developed haptic clothes, to suggest the correct adjustments. As a result, our system can improve the user' plank exercise quality and decrease his/her injury risk.","PeriodicalId":404025,"journal":{"name":"2016 Digital Media Industry & Academic Forum (DMIAF)","volume":"120 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123244186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Clustering to categorize desirability in software: Exploring cluster analysis of Product Reaction Cards in a stereoscopic retail application","authors":"Diego González-Zúñiga, J. Carrabina","doi":"10.1109/DMIAF.2016.7574931","DOIUrl":"https://doi.org/10.1109/DMIAF.2016.7574931","url":null,"abstract":"We perform clustering over the Product Reactions Cards in order to group the terms in 8 groups and 41 subgroups. A card sorting exercise among software engineers and UX practitioners was performed in order to achieve this. We present a preliminary case study using this classification with a retailing application in order to see the difference in connotations that result introducing stereoscopic depth to the GUI.","PeriodicalId":404025,"journal":{"name":"2016 Digital Media Industry & Academic Forum (DMIAF)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127687952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Mecanex system for Multimedia Content Annotation","authors":"Christos Varytimidis, Georgios Tsatiris, Konstantinos Rapantzikos, S. Kollias","doi":"10.1109/DMIAF.2016.7574916","DOIUrl":"https://doi.org/10.1109/DMIAF.2016.7574916","url":null,"abstract":"A system for efficient multimedia content analysis and automatic annotation is presented in this paper. The system is able to identify objects in videos and annotate them with metadata. It includes three modules: the first provides detection and recognition of faces; the second provides generic object detection, based on a deep convolutional neural network; the third provides automated location estimation and landmark recognition based on the state-of-the-art technologies of Bag-of-Words and RANSAC. The system has been developed and successfully tested in the framework of the EC Horizon 2020 Mecanex project, targeting advertising and campaign production markets.","PeriodicalId":404025,"journal":{"name":"2016 Digital Media Industry & Academic Forum (DMIAF)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125777713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time quality evaluation of adaptation strategies in VoD streaming","authors":"Huyen T. T. Tran, N. P. Ngoc, T. Thang, Yong Man Ro","doi":"10.1109/DMIAF.2016.7574936","DOIUrl":"https://doi.org/10.1109/DMIAF.2016.7574936","url":null,"abstract":"HTTP Adaptive Streaming (HAS) has become a popular trend for multimedia delivery nowadays. Because of throughput variations, video adaptation methods are needed to avoid buffer underflows. In this context, it is also important to evaluate the overall video quality of a session. In this paper, we investigate a quality model that can evaluate a session quality as well as different adaptation strategies in real time. We use the histogram of segment quality values and the histogram of quality gradients in a session to model the overall video quality. Then, our quality model is employed to evaluate, for the first time, the cumulative quality of typical adaptation methods in real time. It is found that, to provide a high quality level, the client should avoid changing versions frequently and drastically.","PeriodicalId":404025,"journal":{"name":"2016 Digital Media Industry & Academic Forum (DMIAF)","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131451650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The health policy guidance and practice of introducing technologies in health system in Europe","authors":"C. Juhra, Martin Hernandez, K. Ho, A. Kushniruk, E. Borycki","doi":"10.1109/DMIAF.2016.7574897","DOIUrl":"https://doi.org/10.1109/DMIAF.2016.7574897","url":null,"abstract":"Modern technology has so far been adopted into the various European health systems at different stages. This has lead to a number of recommendations from the EU at a national and at an EU level [1]. In Germany, a “law on safe communication and applications in healthcare” (so called eHealth-law) has been passed by the German government in December 2015 [2]. It aims to accelerate the introduction of eHealth in Germany and includes mandatory milestones. The key aspects are an emergency health record, which must be implemented by 2018, an electronic medication plan, an electronic exchange of medical data and an EHR, which has to be implemented by the end of 2018. However, it remains arguable if such a law will be enough to facilitate the further development of eHealth.","PeriodicalId":404025,"journal":{"name":"2016 Digital Media Industry & Academic Forum (DMIAF)","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133411893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Peak-end effects in video Quality of Experience","authors":"M. Chignell, L. Zucherman, D. Kaya, Jie Jiang","doi":"10.1109/DMIAF.2016.7574939","DOIUrl":"https://doi.org/10.1109/DMIAF.2016.7574939","url":null,"abstract":"The study reported in this paper demonstrated, for the first time, that the peak-end effect, commonly found with respect to memories of experience, also applies to overall Quality of Experience (QoE) measures obtained after participants view a sequence of videos. Sequences of videos shown in an experiment varied according to the sequencing and grouping of videos with better or worse Technical Quality (TQ). An end effect was found for both better TQ and worse TQ videos. However the peak effect was found for bad, but not good, videos. These results provide an important first step towards the development of models of Likelihood to Recommend (L2R) based on accumulated experience with a service.","PeriodicalId":404025,"journal":{"name":"2016 Digital Media Industry & Academic Forum (DMIAF)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124904077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic light compositing using rendered images","authors":"Matis Hudon, R. Cozot, K. Bouatouch","doi":"10.1109/DMIAF.2016.7574927","DOIUrl":"https://doi.org/10.1109/DMIAF.2016.7574927","url":null,"abstract":"Lighting is a key element in photography. Professional photographers often work with complex lighting setups to directly capture an image close to the targeted one. Some photographers reversed this traditional workflow. Indeed, they capture the scene under several lighting conditions, then combine the captured images to get the expected one. Acquiring such a set of images is a tedious task and combining them requires some skill in photography. We propose a fully automatic method, that renders, based on a 3D reconstructed model (shape and albedo), a set of images corresponding to several lighting conditions. The resulting images are combined using a genetic optimization algorithm to match the desired lighting provided by the user as an image.","PeriodicalId":404025,"journal":{"name":"2016 Digital Media Industry & Academic Forum (DMIAF)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125823788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A time-aware approach for boosting medical records search","authors":"Jiayue Zhang, Weiran Xu, Jun Guo","doi":"10.1109/DMIAF.2016.7574910","DOIUrl":"https://doi.org/10.1109/DMIAF.2016.7574910","url":null,"abstract":"Medical records are collections of documents recording a patient's changing conditions, exhibiting temporal characteristic. Yet previous works on medical records search did not pay attention to it. We propose to model the medical records as sequential data, and utilize the temporal similarity between them to improve the performance of medical records search. In this paper, we propose a Temporal Bag-of-Words model to represent medical records as document sequence. In which framework, we adopt Dynamic Time Warping algorithm to calculate the temporal similarity between sequences. Then a clustering-based combination method is proposed for re-ranking. Experiments on TREC Medical Track data shows the effectiveness of the proposed framework for boosting medical records search.","PeriodicalId":404025,"journal":{"name":"2016 Digital Media Industry & Academic Forum (DMIAF)","volume":"217 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130014162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Feature extraction and statistical analysis of videos for cinemetric applications","authors":"Christos Avgerinos, N. Nikolaidis, V. Mygdalis, I. Pitas","doi":"10.1109/DMIAF.2016.7574926","DOIUrl":"https://doi.org/10.1109/DMIAF.2016.7574926","url":null,"abstract":"In this paper, we describe a framework for the extraction of low-level and high level information from movies in order to be used for cinemetric applications. The developed framework analyses the available video content and extracts characteristics related to color, motion, contrast, shot length, tempo, face to frame ratios etc. The extracted information is stored in MPEG 7 AVDP profile format, which is a standard description format that can be imported to related cinemetric applications. We applied the developed framework in a collection of downloaded videos, as well as 3 stereoscopic movies.","PeriodicalId":404025,"journal":{"name":"2016 Digital Media Industry & Academic Forum (DMIAF)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127797569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}