{"title":"Colorblind-friendly Halftoning","authors":"S. K. F. Yu, Y. Chan, D. Lun, Chi Wang Jeffrey Chan, Kai Wang Kenneth Li","doi":"10.23919/EUSIPCO.2018.8553352","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2018.8553352","url":null,"abstract":"Most images are natural images, and many of them are still delivered in printed form. Halftoning is a critical process for printing natural color images. However, conventional colorblind aids do not make direct use of the halftoning process to produce colorblind-friendly image prints. In this paper, a halftoning algorithm is proposed that reduces the color distortion of an image print as seen by a colorblind person and embeds hints in the print that help a colorblind person distinguish confusing colors. To people with normal vision, the color halftone looks the same as the original image when viewed at a reasonable distance, which is not achievable with conventional techniques such as recoloring and pattern overlaying. Moreover, no dedicated hardware is required to view the printed image.","PeriodicalId":303069,"journal":{"name":"2018 26th European Signal Processing Conference (EUSIPCO)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126884190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Active Learning for One-Class Classification Using Two One-Class Classifiers","authors":"Patrick Schlachter, Bin Yang","doi":"10.23919/EUSIPCO.2018.8552958","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2018.8552958","url":null,"abstract":"This paper introduces a novel, generic active learning method for one-class classification. Active learning methods play an important role in reducing the effort of manual labeling in machine learning. Although many active learning approaches have been proposed in recent years, most of them are restricted to binary or multi-class problems. One-class classifiers use samples from only one class, the so-called target class, during training and hence require special active learning strategies. The few strategies proposed for one-class classification either are limited to specific one-class classifiers or depend on particular assumptions about the dataset, such as imbalance. Our proposed method is based on two one-class classifiers, one for the desired target class and one for the so-called outlier class. It makes it possible to devise new query strategies, to reuse binary query strategies, and to define simple stopping criteria. Based on the new method, two query strategies are proposed. Experiments compare the proposed approach with known strategies on various datasets and show improved results in almost all situations.","PeriodicalId":303069,"journal":{"name":"2018 26th European Signal Processing Conference (EUSIPCO)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123160310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Emotion Estimation in Crowds: The Interplay of Motivations and Expectations in Individual Emotions","authors":"Oscar J. Urizar, L. Marcenaro, C. Regazzoni, E. Barakova, G.W.M. Rauterberg","doi":"10.23919/EUSIPCO.2018.8553370","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2018.8553370","url":null,"abstract":"Estimating the emotional states of individuals deepens insight into the state of a crowd beyond simple normal/abnormal situation or behaviour classification. Methods for identifying emotions in individuals are mainly based on facial and body expressions, or even physiological measurements, which are not suited to crowded environments: the available information is usually limited to surveillance-camera footage, in which the faces and bodies of pedestrians often suffer from occlusion. This work proposes an approach that analyses walking behaviour and exploits the interplay of motivations and expectations in the emotions of pedestrians. Real-world data are used to test the prediction of motivations, and annotations of the pedestrians' emotional states are added to evaluate the proposed method's capability to estimate emotional states. The conducted experiments show significant improvements over previous methods for estimating motivations and consistent results for the estimation of emotions.","PeriodicalId":303069,"journal":{"name":"2018 26th European Signal Processing Conference (EUSIPCO)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121572278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Skeleton-Based Action Recognition Based on Deep Learning and Grassmannian Pyramids","authors":"D. Konstantinidis, K. Dimitropoulos, P. Daras","doi":"10.23919/EUSIPCO.2018.8553163","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2018.8553163","url":null,"abstract":"The accuracy of modern depth sensors, the robustness of skeletal data to illumination variations and the superb performance of deep learning techniques on several classification tasks have sparked renewed interest in skeleton-based action recognition. In this paper, we propose a four-stream deep neural network based on two types of spatial skeletal features and their corresponding temporal representations extracted by the novel Grassmannian Pyramid Descriptor (GPD). The performance of the proposed action recognition methodology is further enhanced by the use of a meta-learner that takes advantage of the meta knowledge extracted from the processing of the different features. Experiments on several well-known action recognition datasets reveal that our proposed methodology outperforms a number of state-of-the-art skeleton-based action recognition methods.","PeriodicalId":303069,"journal":{"name":"2018 26th European Signal Processing Conference (EUSIPCO)","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122564246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Fast Eigen-Based Signal Combining Algorithm by Using CORDIC","authors":"Leiou Wang, Donghui Wang, C. Hao","doi":"10.23919/EUSIPCO.2018.8553406","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2018.8553406","url":null,"abstract":"For reliable reception of weak signals, eigen-based signal combining algorithms are very effective. However, these algorithms involve a heavy computational burden. In this paper, a fast eigen-based signal combining algorithm is proposed by using the coordinate rotation digital computer (CORDIC) method. CORDIC uses addition and bit-shift operations to replace the multiplications in eigen-based signal combining algorithms. Simulation results indicate that the proposed algorithm reduces the computational cost while providing good combining performance.","PeriodicalId":303069,"journal":{"name":"2018 26th European Signal Processing Conference (EUSIPCO)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126804829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards understanding the effects of practice on behavioural biometric recognition performance","authors":"E. Haasnoot, J. S. Barnhoorrr, L. Spreeuwers, R. Veldhuis, W. Verwey","doi":"10.23919/EUSIPCO.2018.8553446","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2018.8553446","url":null,"abstract":"Behavioural biometrics looks at discriminative features of a person's measurable behaviour, which is known to show high variance over long stretches of time. In psychology, a significant portion of this behavioural variance is explained by an individual improving their skill at performing behaviours, mostly through practice. Understanding the effects of practice on biometric recognition performance should allow us to account for much of this variance, as well as make individual behavioural biometric studies easier to compare [15]. We hypothesize that more accumulated practice leads to both more stable and higher recognition performance. We argue that these effects are significant and show that practice in general is under-investigated. We introduce a novel method of analysis, the Start-to-Train Interval (STI)/Train-to-Test Interval (TTI) contour plot, which allows for systematic investigation of how recognition performance develops with increased practice. We applied this method to three data sets of a Discrete Sequence Production (DSP) task, which consists of repeatedly (500+ times) typing in a simple password, and found that more practice both significantly increases recognition performance and makes it more stable. These findings call for further investigation into the effects of practice on recognition performance for more standard behavioural biometric paradigms.","PeriodicalId":303069,"journal":{"name":"2018 26th European Signal Processing Conference (EUSIPCO)","volume":"156 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126920483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual camera modeling for multi-view simulation of surveillance scenes","authors":"N. Bisagno, N. Conci","doi":"10.23919/EUSIPCO.2018.8553409","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2018.8553409","url":null,"abstract":"A recent trend in research is to leverage advanced simulation frameworks for the implementation and validation of video surveillance and ambient intelligence algorithms. However, to guarantee seamless transferability between the virtual and real worlds, the simulator must represent the real-world target scenario as faithfully as possible. This includes, on the one hand, the appearance of the scene and the motion of objects and, on the other hand, fidelity to the sensing equipment that will be used in the acquisition phase. This paper focuses on the latter problem of camera modeling and control, discussing how noise and distortions can be handled, and implementing an engine for camera motion control in terms of pan, tilt, and zoom, with particular attention to the video surveillance scenario.","PeriodicalId":303069,"journal":{"name":"2018 26th European Signal Processing Conference (EUSIPCO)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116084215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated Tire Footprint Segmentation","authors":"R. Nava, D. Fehr, F. Petry, T. Tamisier","doi":"10.23919/EUSIPCO.2018.8553041","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2018.8553041","url":null,"abstract":"Quantitative image-based analysis is a relatively new way to address challenges in automotive tribology. Its inclusion in tire-ground interaction research may provide innovative ideas for improvements in tire design and manufacturing processes. In this article we present a novel and robust technique for segmenting the area of contact between the tire and the ground. The segmentation is performed in an unsupervised fashion with graph cuts. Then, superpixel adjacency is used to improve the boundaries. Finally, a rolling circle filter is applied to the segmentation to generate a mask that covers the area of contact. The procedure is carried out on a sequence of images captured in an automatic test machine. The estimated shape and total area of contact are built by averaging all the masks that have been computed throughout the sequence. Since a ground truth is not available, we also propose a comparative method to assess the performance of our proposal.","PeriodicalId":303069,"journal":{"name":"2018 26th European Signal Processing Conference (EUSIPCO)","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116638828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Playlist-Based Tag Propagation for Improving Music Auto-Tagging","authors":"Yi-Hsun Lin, Chia-Hao Chung, Homer H. Chen","doi":"10.23919/EUSIPCO.2018.8553318","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2018.8553318","url":null,"abstract":"The performance of a music auto-tagging system relies heavily on the quality of the training dataset. In particular, each training song should have sufficient relevant tags. Tag propagation is a technique that creates additional tags for a song by passing the tags from other similar songs. In this paper, we present a novel tag propagation approach that exploits the song coherence of a playlist to improve the training of an auto-tagging model. The main idea is to share tags between neighboring songs in a playlist and to optimize the auto-tagging model through a multi-task objective function. We test the proposed playlist-based approach on a convolutional neural network for music auto-tagging and show that it can indeed provide a significant performance improvement.","PeriodicalId":303069,"journal":{"name":"2018 26th European Signal Processing Conference (EUSIPCO)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116668007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modelling of Sound Events with Hidden Imbalances Based on Clustering and Separate Sub-Dictionary Learning","authors":"Chaitanya Narisetty, Tatsuya Komatsu, Reishi Kondo","doi":"10.23919/EUSIPCO.2018.8553387","DOIUrl":"https://doi.org/10.23919/EUSIPCO.2018.8553387","url":null,"abstract":"This paper proposes an effective modelling of sound event spectra with a hidden data-size imbalance, for improved Acoustic Event Detection (AED). The proposed method models each event as an aggregated representation of a few latent factors, while conventional approaches try to find acoustic elements directly from the event spectra. In the proposed method, all latent factors across all events are assigned comparable importance and complexity to overcome the hidden imbalance of data sizes in event spectra. To extract the latent factors of each event, the proposed method employs clustering, applies non-negative matrix factorization to each latent factor, and learns its acoustic elements as a sub-dictionary. Separate sub-dictionary learning effectively models acoustic elements with limited data sizes and avoids over-fitting due to hidden imbalances in the training data. For the polyphonic sound event detection task of the DCASE 2013 challenge, an AED system based on the proposed modelling achieves a detection F-measure of 46.5%, a significant improvement of more than 19% over existing state-of-the-art methods.","PeriodicalId":303069,"journal":{"name":"2018 26th European Signal Processing Conference (EUSIPCO)","volume":"154 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121398008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}