{"title":"Video Transmission Artifacts Detection Using No-Reference Approach","authors":"M. Vranješ, M. Herceg, D. Vranješ, Denis Vajak","doi":"10.1109/ZINC.2018.8448669","DOIUrl":"https://doi.org/10.1109/ZINC.2018.8448669","url":null,"abstract":"In real-time (RT) applications that include transmission of digital video, different artifacts (caused by compression and transmission processes) can appear in video received at the end-user side. In order to ensure high level of end-user Quality of Experience (QoE), video application/service providers have to continuously measure and monitor the quality of perceived video. Since in RT video applications uncompressed video is unavailable at the receiver side, artifacts detection as well as video quality assessment (VQA) are often performed using no-reference (NR) approach. In this paper we present a novel NR algorithm that efficiently detects packet loss (PL) artifacts in received video frames, called Packet Loss Detection Algorithm (PLDA). The proposed PLDA operates only on pixel values of the processed video frame and it requires no additional information about processed video. The performance of the proposed PLDA is compared to that of other existing PL detection algorithm on video sequences of significantly different content, in which distinct error concealment methods are used to conceal errors caused by PL. The results show that PLDA outperforms other tested algorithm when detecting PL artifacts in network transmitted video and that it is very robust in terms of different content types and error concealment methods. 
Additionally, proposed PLDA is capable of processing up to 25 frames per second (FPS) of Full HD video in RT and thus it is suitable for usage in RT video transmitting applications.","PeriodicalId":366195,"journal":{"name":"2018 Zooming Innovation in Consumer Technologies Conference (ZINC)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116985480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"E2LP Extension Board for Teaching Basic Digital Electronics","authors":"F. Susac, T. Birka, T. Matić","doi":"10.1109/ZINC.2018.8448943","DOIUrl":"https://doi.org/10.1109/ZINC.2018.8448943","url":null,"abstract":"At the Faculty of Electrical Engineering, Computer Science and Information Technology Osijek on the course Digital Electronics students are learning the basics of digital electronics and VHDL with unified embedded engineering learning platform (E2LP). E2LP is designed for teaching embedded computer systems courses and system design courses, and therefore lacks basic digital I/O devices such as 16 switches, RGB LEDs, etc. This paper presents E2LP extension board developed with the aim of adding simple I/O devices to the existing E2LP system. The extension board is successfully tested, and several student examples are given.","PeriodicalId":366195,"journal":{"name":"2018 Zooming Innovation in Consumer Technologies Conference (ZINC)","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114626060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using GANs to Enable Semantic Segmentation of Ranging Sensor Data","authors":"V. Lekic, Z. Babic","doi":"10.1109/ZINC.2018.8448963","DOIUrl":"https://doi.org/10.1109/ZINC.2018.8448963","url":null,"abstract":"Ranging sensors, such as radar and lidar, onboard the vehicle are considered to be very robust under changing environmental conditions. Largely owing to this reputation, they have found broad applicability in driver assistance, and consequently in autonomous driving systems. On the other hand, they lack precision. This makes classification tasks of the measurement data rather difficult. In this paper, we propose a method for semantic segmentation of the ranging sensors data using generative adversarial networks. Utilizing the fully unsupervised learning algorithm, we convert the sensor data to artificial, camera-like, environmental images that are further used as input for semantic image segmentation algorithms.","PeriodicalId":366195,"journal":{"name":"2018 Zooming Innovation in Consumer Technologies Conference (ZINC)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115738677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Software Module for Processing EEG Signals in a Biofeedback Based System","authors":"M. Prodanov, Marija Punt, N. Miljković, Z. Radivojević","doi":"10.1109/ZINC.2018.8448592","DOIUrl":"https://doi.org/10.1109/ZINC.2018.8448592","url":null,"abstract":"This paper presents the implementation of a software module applied to the field of psychophysiology that examines the causal link between the physiological parameters and the psychological state of a person. The implemented module provides the ability to process and display EEG signals in realtime and is part of a larger software system based on the biofeedback method. Signals can be displayed either in the time or frequency domain and can be subjected to a variety of processing functions. The aim of this module is to provide a reliable tool for processing and displaying brain waves in arbitrary frequency bands and to offer a platform for biofeedback training implementations that can enhance the degree of attention, concentration, relaxation and learning ability in the person participating in the training.","PeriodicalId":366195,"journal":{"name":"2018 Zooming Innovation in Consumer Technologies Conference (ZINC)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133617485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Upper Limbs Dyskinesia Detection and Classification for Patients with Parkinson's Disease based on Consumer Electronics Devices","authors":"G. Belgiovine, M. Capecci, L. Ciabattoni, M. C. Fiorentino, A. Montcriù, L. Pepa, L. Romeo","doi":"10.1109/ZINC.2018.8448846","DOIUrl":"https://doi.org/10.1109/ZINC.2018.8448846","url":null,"abstract":"This paper presents a L-dopa-Induced Dyskinesia Detection and Classification System based on Machine Learning Algorithms, wearable device (smartwatch) data and a smart-phone, connected via Bluetooth. This system was developed in three steps. The first step is the data collection, where each patient wears the smartwatch and performs some tasks while the smart-phone App captures data. These performed tasks are of different nature (i.e., writing, walking, sitting and cognitive task). In the second phase, some features were extracted from acceleration and angular velocity signals and a Z-score normalization is applied. In the last step two Machine Learning Algorithms, trained with these features as input, are used in order to detect and classify dyskinesias.","PeriodicalId":366195,"journal":{"name":"2018 Zooming Innovation in Consumer Technologies Conference (ZINC)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131934565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fall Detection System by Using Ambient Intelligence and Mobile Robots","authors":"L. Ciabattoni, G. Foresi, A. Monteriù, D. P. Pagnotta, L. Tomaiuolo","doi":"10.1109/ZINC.2018.8448970","DOIUrl":"https://doi.org/10.1109/ZINC.2018.8448970","url":null,"abstract":"In this paper a robust Fall Detection Algorithm by using a deep learning approach and a low-cost mobile robot equipped with an RGB camera is presented. This method consists of four steps. The first step is the user detection, achieved by a real-time video stream and a Deep Learning approach. Once the user is detected, then its position is estimated in the second step. In the third step, if a fall is detected, a photo is acquired and a pre-registered audio message asks the user how he is. In the last step the photo and the audio captured are sent to a Telegram Bot (TB) in order to alert family members or caregivers. Tests have been performed in a real scenario.","PeriodicalId":366195,"journal":{"name":"2018 Zooming Innovation in Consumer Technologies Conference (ZINC)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134409516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Implementation of PC Application for Controlling RT-AG External Sound Card","authors":"Nenad Pekez, Nives Kaprocki, J. Kovacevic","doi":"10.1109/ZINC.2018.8448593","DOIUrl":"https://doi.org/10.1109/ZINC.2018.8448593","url":null,"abstract":"Digital audio systems have developed rapidly and significantly over the last four decades. Beginning with systems that could process one audio channel at 32kHz/13-bit resolution to today's systems such as AV receivers and sound bars which are capable of reproducing more than 15 audio channels at 192kHz/32-bit resolutions. Problems that audio engineers are facing during development and testing phases of these systems are delivering high quality multi-channel signals from PC to audio systems and vice versa - recording 15-channel outputs at high sample rates. PC's sound card is most often not capable of neither delivering nor recording these kind of audio signals. In practice, audio engineers use external sound cards with features that enable them to play and record such complex audio. RT-RK R&D Institute has developed an external sound card for these purposes named RT-AG (RT-Audio Grabber). In this paper, we represent the implementation of PC application that is used for controlling this sound card. Since PC and RT-AG can be interconnected by either Ethernet or USB, the focus is on implementation of Ethernet part of application. 
This study continues to a paper “The technical solution of the software architecture of RT-AG digital grabber/player device” published at ETRAN 2017 conference.","PeriodicalId":366195,"journal":{"name":"2018 Zooming Innovation in Consumer Technologies Conference (ZINC)","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130492876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Implementation of the Sound Classification Module on the Platform with Limited Resources","authors":"Nives Kaprocki, Nenad Pekez, J. Kovacevic","doi":"10.1109/ZINC.2018.8448512","DOIUrl":"https://doi.org/10.1109/ZINC.2018.8448512","url":null,"abstract":"There is a growing trend of using algorithms based on deep and machine learning in consumer devices, which imposes a challenge to the system's development because of the limited amount of resources in an embedded device. This paper presents integration of the sound classification module based on machine learning into a home audio system. The additional module enables dynamic change of processing controls according to the resulting confidence scores which indicate whether the current audio is speech, music or background noise. Main challenge of this paper is overcoming real-time processing constraints and embedded system's resource limitations. Results show that the sound classification module has been successfully integrated and produces the correct output.","PeriodicalId":366195,"journal":{"name":"2018 Zooming Innovation in Consumer Technologies Conference (ZINC)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124434842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ambient Assisted Control using Smart Luminaires","authors":"Iuliana Marin, I. Pavaloiu, N. Goga, Melania Nitu, Ionut-Catalin Draghici","doi":"10.1109/ZINC.2018.8448990","DOIUrl":"https://doi.org/10.1109/ZINC.2018.8448990","url":null,"abstract":"The population aging, the need to reduce the cost of elderly care, as well as to increase security and energy efficiency have led to the emergence of many home monitoring systems. However, most of these systems are expensive, are not standardized, are complicated to implement (individualized wiring, wireless nodes) and can only be used with qualified personnel. Our light bulb prototype aims to reduce such deficiencies as well as remove the obstacles to adopting such systems by reducing costs and simplifying installation. The proposed solution aims to capitalize on existing electrical and lighting equipment, to set up a monitoring system only by replacing the luminaires in the existing electrical system with other lighting sources developed specifically for this purpose. Through the periodic replacement process and based on the current trend in light-emitting diode usage, these smart luminaires can be gradually introduced to supervise the location within houses, mostly elders' homes who live alone.","PeriodicalId":366195,"journal":{"name":"2018 Zooming Innovation in Consumer Technologies Conference (ZINC)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116440595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integrating Android to Next Generation Vehicles","authors":"N. Pajić, M. Bjelica","doi":"10.1109/ZINC.2018.8448709","DOIUrl":"https://doi.org/10.1109/ZINC.2018.8448709","url":null,"abstract":"Modernization of the automotive industry has contributed to the new technological development that ensures greater driver's safety and comfort. Vehicle systems that provide entertainment and information content, integrated into the digital cockpit, are representatives of a new generation of multimedia systems. Availability of Android OS on most of the modern portable devices and usage of existing and user accepted applications encourage automotive industry to integrate those systems into their products. Even though Android is a widely used system in the consumer electronics world, there are only a few of these solutions in the automotive industry. Safety levels, fast boot and memory usage are the biggest challenges. In this paper we have presented a solution of vehicle infotainment based on the Android system. To provide the safety levels, we have presented a concept of integrating two different systems (Android and QNX) into one chip.","PeriodicalId":366195,"journal":{"name":"2018 Zooming Innovation in Consumer Technologies Conference (ZINC)","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116360186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}