2016 First International Workshop on Sensing, Processing and Learning for Intelligent Machines (SPLINE): Latest Publications

Deep learning architectures for tattoo detection and de-identification
T. Hrkać, K. Brkić, S. Ribaric, Darijan Marcetic
{"title":"Deep learning architectures for tattoo detection and de-identification","authors":"T. Hrkać, K. Brkić, S. Ribaric, Darijan Marcetic","doi":"10.1109/SPLIM.2016.7528402","DOIUrl":"https://doi.org/10.1109/SPLIM.2016.7528402","url":null,"abstract":"The widespread use of video recording devices to obtain recordings of people in various scenarios makes the problem of privacy protection increasingly important. Consequently, there is an increased interest in developing methods for de-identification, i.e. removing personally identifying features from publicly available or stored data. Most of related work focuses on de-identifying hard biometric identifiers such as faces. We address the problem of detection and de-identification of soft biometric identifiers - tattoos. We use a deep convolutional neural network to discriminate between tattoo and non-tattoo image patches, group the patches into blobs, and propose the de-identifying method based on replacing the color of pixels inside the tattoo blob area with a values obtained by interpolation of the surrounding skin color. Experimental evaluation on the contributed dataset indicates the proposed method can be useful in a soft biometric de-identification scenario.","PeriodicalId":297318,"journal":{"name":"2016 First International Workshop on Sensing, Processing and Learning for Intelligent Machines (SPLINE)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122664819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
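The de-identification step described in the abstract (replacing tattoo-blob pixels with values derived from the surrounding skin) can be approximated in a few lines. The sketch below is a minimal illustration, not the authors' implementation: it fills each masked pixel with the colour of its nearest non-masked pixel via a distance transform, which is a cruder stand-in for the interpolation the paper describes; `fill_tattoo_blob` and the toy image are made up for illustration.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def fill_tattoo_blob(image, mask):
    """Replace pixels where mask == True with the colour of the nearest
    pixel outside the mask (a rough stand-in for skin-colour interpolation
    around a detected tattoo blob)."""
    # For every True (tattoo) pixel, get the indices of the nearest False pixel.
    idx = distance_transform_edt(mask, return_distances=False, return_indices=True)
    nearest_skin = image[tuple(idx)]      # nearest non-tattoo colour per pixel
    out = image.copy()
    out[mask] = nearest_skin[mask]        # only overwrite the tattoo blob
    return out

# Toy usage: a 100x100 "skin" image with a dark square standing in for a tattoo.
img = np.full((100, 100, 3), 200, dtype=np.uint8)
img[40:60, 40:60] = (10, 10, 10)
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 40:60] = True
deidentified = fill_tattoo_blob(img, mask)
```
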
Convolutional Neural Network for 3D object recognition using volumetric representation
Xiaofang Xu, A. Dehghani, D. Corrigan, Sam Caulfield, D. Moloney
{"title":"Convolutional Neural Network for 3D object recognition using volumetric representation","authors":"Xiaofang Xu, A. Dehghani, D. Corrigan, Sam Caulfield, D. Moloney","doi":"10.1109/SPLIM.2016.7528403","DOIUrl":"https://doi.org/10.1109/SPLIM.2016.7528403","url":null,"abstract":"Following the success of Convolutional Neural Networks (CNNs) on object recognition using 2D images, they are extended in this paper to process 3D data. Nearly most of current systems require huge amount of computation for dealing with large amount of data. In this paper, an efficient 3D volumetric object representation, Volumetric Accelerator (VOLA), is presented which requires much less memory than the normal volumetric representations. On this basis, a few 3D digit datasets using 2D MNIST and 2D digit fonts with different rotations along the x, y, and z axis are introduced. Finally, we introduce a combination of multiple CNN models based on the famous LeNet model. The trained CNN models based on the generated dataset have achieved the average accuracy of 90.30% and 81.85% for 3D-MNIST and 3D-Fonts datasets, respectively. Experimental results show that VOLA-based CNNs perform 1.5x faster than the original LeNet.","PeriodicalId":297318,"journal":{"name":"2016 First International Workshop on Sensing, Processing and Learning for Intelligent Machines (SPLINE)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126441629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
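The memory argument behind VOLA can be illustrated with a toy bit-packed occupancy grid. The sketch below is not the VOLA format itself (which is hierarchical and sparse); it only shows how storing one bit per voxel instead of one float already shrinks a dense 32x32x32 grid, here built by extruding a fake 2D digit along the z axis.

```python
import numpy as np

def extrude_to_voxels(img2d, depth=32, threshold=0.5):
    """Build a dense boolean occupancy grid by extruding a 2D binary image
    along the z axis (a toy stand-in for the 3D digit datasets above)."""
    occ2d = img2d > threshold
    return np.repeat(occ2d[:, :, None], depth, axis=2)   # shape (H, W, depth)

# A fake 32x32 "digit": a filled disc.
yy, xx = np.mgrid[0:32, 0:32]
digit = ((yy - 16) ** 2 + (xx - 16) ** 2 < 100).astype(np.float32)

voxels = extrude_to_voxels(digit)               # 32x32x32 booleans
packed = np.packbits(voxels.ravel())            # 1 bit per voxel

dense_bytes = voxels.astype(np.float32).nbytes  # 4 bytes per voxel
packed_bytes = packed.nbytes                    # 1/8 byte per voxel
print(f"dense float grid: {dense_bytes} B, bit-packed grid: {packed_bytes} B")
```
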
Delay reduction in real-time recognition of human activity for stroke rehabilitation
R. Nabiei, M. Najafian, M. Parekh, P. Jančovič, M. Russell
{"title":"Delay reduction in real-time recognition of human activity for stroke rehabilitation","authors":"R. Nabiei, M. Najafian, M. Parekh, P. Jančovič, M. Russell","doi":"10.1109/SPLIM.2016.7528413","DOIUrl":"https://doi.org/10.1109/SPLIM.2016.7528413","url":null,"abstract":"Assisting patients to perform activity of daily living (ADLs) is a challenging task for both human and machine. Hence, developing a computer-based rehabilitation system to re-train patients to carry out daily activities is an essential step towards facilitating rehabilitation of stroke patients with apraxia and action disorganization syndrome (AADS). This paper presents a real-time hidden Markov model (HMM) based human activity recognizer, and proposes a technique to reduce the time-delay occurred during the decoding stage. Results are reported for complete tea-making trials. In this study, the input features are recorded using sensors attached to the objects involved in the tea-making task, plus hand coordinate data captured using KinectTM sensor. A coaster of sensors, comprising an accelerometer and three force-sensitive resistors, are packaged in a unit which can be easily attached to the base of an object. A parallel asynchronous set of detectors, each responsible for the detection of one sub-goal in the tea-making task, are used to address challenges arising from overlaps between human actions. The proposed activity recognition system with the modified HMM topology provides a practical solution to the action recognition problem and reduces the time-delay by 64% with no loss in accuracy.","PeriodicalId":297318,"journal":{"name":"2016 First International Workshop on Sensing, Processing and Learning for Intelligent Machines (SPLINE)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125447746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
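A minimal sketch of the "parallel asynchronous detectors" idea, under assumptions the abstract does not spell out: each sub-goal is modelled as a two-state (idle/active) HMM and decoded online with the normalised forward recursion, so a detection can be emitted as soon as the posterior of the active state crosses a threshold instead of waiting for a full backward pass. The class name, emission model and parameters are illustrative, not taken from the paper.

```python
import numpy as np

class SubGoalDetector:
    """Two-state (idle=0, active=1) HMM decoded online with the normalised
    forward recursion. Parameters are illustrative only."""

    def __init__(self, trans, emit_means, emit_std=1.0, threshold=0.9):
        self.A = np.asarray(trans)            # 2x2 transition matrix
        self.means = np.asarray(emit_means)   # per-state emission mean
        self.std = emit_std
        self.threshold = threshold
        self.alpha = np.array([1.0, 0.0])     # start in 'idle'

    def _likelihood(self, x):
        # 1-D Gaussian emission likelihood (unnormalised; the constant cancels).
        return np.exp(-0.5 * ((x - self.means) / self.std) ** 2)

    def step(self, x):
        """Consume one observation; return True once the sub-goal is believed
        to be active (early, low-latency decision)."""
        self.alpha = self._likelihood(x) * (self.A.T @ self.alpha)
        self.alpha /= self.alpha.sum()        # keep it a posterior
        return self.alpha[1] > self.threshold

# One detector per sub-goal, all fed the same (toy) sensor stream.
detectors = {
    "kettle_lifted": SubGoalDetector([[0.95, 0.05], [0.05, 0.95]], [0.0, 1.0]),
    "cup_touched":   SubGoalDetector([[0.97, 0.03], [0.10, 0.90]], [0.0, 2.0]),
}
stream = [0.1, 0.0, 0.9, 1.1, 1.0, 2.1, 1.9]
for t, x in enumerate(stream):
    fired = [name for name, d in detectors.items() if d.step(x)]
    if fired:
        print(f"t={t}: detected {fired}")
```
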
Robust community detection on dynamic graph
Dan Wu, K. Niu, Zhiqiang He
{"title":"Robust community detection on dynamic graph","authors":"Dan Wu, K. Niu, Zhiqiang He","doi":"10.1109/SPLIM.2016.7528393","DOIUrl":"https://doi.org/10.1109/SPLIM.2016.7528393","url":null,"abstract":"Many approaches have been proposed to identify communities on complex networks. However the current algorithms are sensitive to the variation of input data and parameters. In this paper, we propose a new community detection approach called robust community detection on dynamic network (RCD). The robustness of our algorithm lies in two aspects. Firstly, by adopting the offset of sigmoid function, RCD reduces dependency on the input cluster number. Therefore, RCD is insensitive to the man-made interference and the robustness is guaranteed. Secondly, RCD is not restricted to the type of input networks, because it only depends on the topological structure of network rather than requiring labels or other information of networks. Thus, the application robustness is ensured. RCD are evaluated on both the synthetic and realistic network data. The experiment result shows that by introducing sigmoid function, the error rate of misclassification and iterative times are decreased.","PeriodicalId":297318,"journal":{"name":"2016 First International Workshop on Sensing, Processing and Learning for Intelligent Machines (SPLINE)","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121816619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
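The abstract gives few algorithmic details, so the sketch below is not RCD. It only illustrates the setting the paper targets: community detection that uses topology alone and needs no cluster-number input, here with a standard label-propagation baseline from networkx on a toy graph.

```python
import networkx as nx
from networkx.algorithms.community import label_propagation_communities

# Two 6-node cliques joined by a single bridge edge.
G = nx.complete_graph(6)
H = nx.complete_graph(range(6, 12))
G = nx.compose(G, H)
G.add_edge(0, 6)

# Label propagation uses only the topology and needs no cluster count,
# which is the same requirement RCD aims to satisfy.
communities = list(label_propagation_communities(G))
print([sorted(c) for c in communities])
```
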
Underwater acoustic sensor fault detection for passive sonar systems
Yong Guk Kim, YoungShin Kim, S. H. Lee, Sang-Taeck Moon, M. Jeon, H. Kim
{"title":"Underwater acoustic sensor fault detection for passive sonar systems","authors":"Yong Guk Kim, YoungShin Kim, S. H. Lee, Sang-Taeck Moon, M. Jeon, H. Kim","doi":"10.1109/SPLIM.2016.7528395","DOIUrl":"https://doi.org/10.1109/SPLIM.2016.7528395","url":null,"abstract":"In this paper, an underwater acoustic sensor fault detection method is proposed that determines whether or not each sensor of multi-channel line array hydrophones malfunctions for passive sonar systems. To this end, the proposed method first measures a short-time root mean square (RMS) value of input signal for each channel. Then, it analyzes the RMS difference between the adjacent channels. In addition, the crossing rate of RMS values (RMSCR) is computed for each channel, and then the average value of RMSCR over all the channels is obtained. Some faulty sensors are identified by comparing the RMS difference with a threshold, and others by comparing the ratio between RMSCR of each of them and the average value of RMSCR with a threshold. In order to evaluate the performance of the proposed method, the precision of detecting fault sensors is measured. Consequently, it is shown that the proposed method works well in underwater environments with average RMS of -18.6 and -9.7 dB.","PeriodicalId":297318,"journal":{"name":"2016 First International Workshop on Sensing, Processing and Learning for Intelligent Machines (SPLINE)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125572046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
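The detection logic in the abstract (per-channel short-time RMS, adjacent-channel RMS differences, and the RMS crossing rate) is concrete enough to sketch. The code below is an illustrative reconstruction, not the authors' implementation: the thresholds are arbitrary, and the crossing rate is computed against each channel's own RMS mean, which is an assumption the abstract does not spell out.

```python
import numpy as np

def short_time_rms(x, frame=1024, hop=512):
    """Short-time RMS trajectory of a 1-D signal."""
    frames = [x[i:i + frame] for i in range(0, len(x) - frame + 1, hop)]
    return np.array([np.sqrt(np.mean(f ** 2)) for f in frames])

def detect_faulty_channels(signals, diff_thr=6.0, rmscr_ratio_thr=0.2):
    """signals: (n_channels, n_samples) hydrophone data.
    Returns indices of channels flagged as faulty."""
    rms = np.array([short_time_rms(ch) for ch in signals])   # (C, T)
    rms_db = 20.0 * np.log10(rms + 1e-12)
    mean_db = rms_db.mean(axis=1)                            # per-channel level

    # 1) RMS difference between adjacent channels (in dB): a channel whose
    #    level differs strongly from *both* neighbours is suspicious.
    adj_diff = np.abs(np.diff(mean_db))
    faulty = set()
    for c in range(len(mean_db)):
        neighbours = [adj_diff[c - 1]] if c > 0 else []
        if c < len(adj_diff):
            neighbours.append(adj_diff[c])
        if neighbours and min(neighbours) > diff_thr:
            faulty.add(c)

    # 2) Crossing rate of the RMS trajectory about its own mean (RMSCR),
    #    compared against the average RMSCR over all channels.
    centred = rms - rms.mean(axis=1, keepdims=True)
    rmscr = (np.diff(np.sign(centred), axis=1) != 0).sum(axis=1)
    avg_rmscr = rmscr.mean()
    for c, cr in enumerate(rmscr):
        if avg_rmscr > 0 and cr / avg_rmscr < rmscr_ratio_thr:
            faulty.add(c)
    return sorted(faulty)

# Toy usage: 8 noise channels, channel 3 is dead (stuck near zero).
rng = np.random.default_rng(0)
data = rng.normal(0, 0.1, size=(8, 16384))
data[3] = 1e-6
print(detect_faulty_channels(data))   # expected: [3]
```
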
Visually evoked potentials for EEG biometric recognition
Rig Das, Emanuela Piciucco, E. Maiorana, P. Campisi
{"title":"Visually evoked potentials for EEG biometric recognition","authors":"Rig Das, Emanuela Piciucco, E. Maiorana, P. Campisi","doi":"10.1109/SPLIM.2016.7528407","DOIUrl":"https://doi.org/10.1109/SPLIM.2016.7528407","url":null,"abstract":"Electroencephalographs (EEG) signals elicited by means of visual stimuli are highly time-dependent as they vary due to the subject's attention, state of mind, position of electrodes, etc., during acquisition. In this paper we exploit the use of techniques tailored to the analysis of signals varying across time. Specifically, dynamic time warping (DTW) is a technique to find an optimal alignment between two time-dependent series as it successfully copes with the time deformations and different speeds that are associated with time-dependent data, whereas symbolic aggregate approximation (SAX) produces a symbolic representation for a time series and can be used to represent highly time-dependent data in time invariant manner. In this paper we investigate visually evoked potential (VEP)-based EEG signals using DTW and SAX method, in order to analyze the permanence issue of EEG signals by verifying its stability across time.","PeriodicalId":297318,"journal":{"name":"2016 First International Workshop on Sensing, Processing and Learning for Intelligent Machines (SPLINE)","volume":"2014 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127537670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
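Dynamic time warping, the core alignment tool referenced above, is compact enough to show directly. The sketch below is a textbook DTW implementation between two 1-D sequences (the SAX symbolisation step is omitted); it is meant only to make the alignment idea concrete, not to reproduce the paper's pipeline.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping cost between two 1-D sequences,
    using absolute difference as the local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # The optimal alignment extends a match, an insertion or a deletion.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two "EEG-like" responses: same shape, different speed.
t = np.linspace(0, 1, 100)
slow = np.sin(2 * np.pi * 3 * t)
fast = np.sin(2 * np.pi * 3 * t ** 1.5)
print(dtw_distance(slow, fast))    # small despite the time warp
print(dtw_distance(slow, -slow))   # much larger: genuinely different shape
```
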
Two-stage cascade model for unconstrained face detection
Darijan Marcetic, T. Hrkać, S. Ribaric
{"title":"Two-stage cascade model for unconstrained face detection","authors":"Darijan Marcetic, T. Hrkać, S. Ribaric","doi":"10.1109/SPLIM.2016.7528404","DOIUrl":"https://doi.org/10.1109/SPLIM.2016.7528404","url":null,"abstract":"In this paper, we propose a two-stage model for unconstrained face detection. The first stage is based on the normalized pixel difference (NPD) method, and the second stage uses the deformable part model (DPM) method. The NPD method applied to in the wild image datasets outputs the unbalanced ratio of false positive to false negative face detection when the main goal is to achieve minimal false negative face detection. In this case, false positive face detection is typically an order of magnitude higher. The result of the NPD-based detector is forwarded to the DPM-based detector in order to reduce the number of false positive detections. In this paper, we compare the results obtained by the NPD and DPM methods on the one hand, and the proposed two-stage model on the other. The preliminary experimental results on the Annotated Faces in the Wild (AFW) and the Face Detection Dataset and Benchmark (FDDB) show that the two-stage model significantly reduces false positive detections while simultaneously the number of false negative detections is increased by only a few.","PeriodicalId":297318,"journal":{"name":"2016 First International Workshop on Sensing, Processing and Learning for Intelligent Machines (SPLINE)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126785497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
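The cascade logic itself is simple and can be sketched independently of the specific detectors. In the sketch below, `npd_detect` and `dpm_verify` are hypothetical stand-ins for the NPD and DPM stages (neither is a real library call): the cheap first stage proposes candidate boxes with high recall, and the second stage re-scores each candidate to discard false positives.

```python
from typing import Callable, List, Tuple

Box = Tuple[int, int, int, int]   # (x, y, width, height)

def two_stage_face_detector(
    image,
    npd_detect: Callable[[object], List[Tuple[Box, float]]],
    dpm_verify: Callable[[object, Box], float],
    stage2_threshold: float = 0.5,
) -> List[Box]:
    """Generic two-stage cascade: keep only first-stage candidates that the
    slower, more precise second stage also accepts. Both detector callables
    are assumed to be supplied by the caller."""
    candidates = npd_detect(image)          # many boxes, cheap, high recall
    faces = []
    for box, _ in candidates:
        if dpm_verify(image, box) >= stage2_threshold:
            faces.append(box)               # survived both stages
    return faces

# Usage sketch with dummy detectors standing in for NPD and DPM.
def fake_npd(image):
    return [((10, 10, 40, 40), 0.8), ((200, 50, 40, 40), 0.6)]

def fake_dpm(image, box):
    return 0.9 if box[0] < 100 else 0.1     # rejects the second candidate

print(two_stage_face_detector(None, fake_npd, fake_dpm))
```
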
Projecting emotional speech into arousal-valence space using pairwise preference learning
Mohamed Abou-Zleikha, M. G. Christensen, Z. Tan, S. H. Jensen
{"title":"Projecting emotional speech into arousal-valence space using pairwise preference learning","authors":"Mohamed Abou-Zleikha, M. G. Christensen, Z. Tan, S. H. Jensen","doi":"10.1109/SPLIM.2016.7528401","DOIUrl":"https://doi.org/10.1109/SPLIM.2016.7528401","url":null,"abstract":"Emotion recognition in speech is a very challenging task in the speech processing domain. Because of the continuity characteristics of human emotion, most of the recent research focuses on recognising emotion in a continuous space. While previous attempts for speech emotion annotation adopted the likert-like scaling system in a continuous space and relied on prediction models to predict emotion we, in this research, propose a new method for data labelling based on a pairwise data annotation. A set of constraints was proposed to decrease the number of pairs required to label. The annotated data is used to construct a regression model using the pairwise evolutionary multivariate adaptive regression spline method. The experiments performed show high recognition accuracies compared to the baseline random pairwise assignment.","PeriodicalId":297318,"journal":{"name":"2016 First International Workshop on Sensing, Processing and Learning for Intelligent Machines (SPLINE)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115015160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
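The pairwise annotation idea can be made concrete with a small Bradley-Terry-style sketch: given labelled pairs saying utterance i sounds more aroused than utterance j, a latent arousal score per utterance can be learned by logistic pairwise ranking. This is an assumption-laden illustration of pairwise preference learning in general, not the evolutionary MARS model used in the paper; the pairs below are made up.

```python
import numpy as np

def fit_pairwise_scores(n_items, pairs, lr=0.1, epochs=500):
    """Learn a scalar score per item from pairwise preferences.
    pairs: list of (winner, loser) index tuples, meaning 'winner' was
    annotated as having higher arousal (or valence) than 'loser'."""
    s = np.zeros(n_items)
    for _ in range(epochs):
        for w, l in pairs:
            # Bradley-Terry / logistic model: P(w beats l) = sigmoid(s_w - s_l)
            p = 1.0 / (1.0 + np.exp(-(s[w] - s[l])))
            grad = 1.0 - p                  # gradient of the log-likelihood
            s[w] += lr * grad
            s[l] -= lr * grad
    return s

# Five utterances; annotators only ever compared pairs.
pairs = [(0, 1), (1, 2), (0, 2), (3, 4), (2, 3)]
scores = fit_pairwise_scores(5, pairs)
print(np.argsort(-scores))   # ranking from most to least aroused
```
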
Towards biometric features for 2D/3D sensoring of forensic handwriting
T. Scheidat, Michael Kalbitz, C. Vielhauer
{"title":"Towards biometrie features for 2D/3D sensoring of forensic handwriting","authors":"T. Scheidat, Michael Kalbitz, C. Vielhauer","doi":"10.1109/SPLIM.2016.7528410","DOIUrl":"https://doi.org/10.1109/SPLIM.2016.7528410","url":null,"abstract":"User authentication based on biometric information is an important part of computer security. A biometric system utilizes physiological or behavioural characteristics of a human being in order to confirm or to determine its identity. Handwriting as biometric modality can be based on the study of the writing result as well as on the observation of the writing process. While conventional sensors for biometric handwriting acquisition are for example flatbed scanners, graphics tablets or special signature tablets, recently the application of high-resolution, contact-less 2D/3D sensors for forensic handwriting analysis has been suggested. Since biometric and forensic handwriting focus at identification of writers, we carry out an initial study in this paper on feasibility of writing pressure based features of a contact-less 2D/3D optical sensor, providing high resolution images of intensity and topography in the nanometer range. The concept is based on dual-sensor acquisition (biometric and forensic) of writing traces, the adaption of biometric pressure based features towards 3D impressioning in the forensic data and a comparative experimental evaluation of both in terms of biometric verification performance. Therefore, in the first step handwriting probes of five persons are prepared on paper and simultaneously acquired by a biometric signature tablet. In the second step each probe is scanned by the 2D/3D optical sensor to digitize it into a topographical representation. This information is used to carry out a biometric authentication and its results are compared to those based on the signature tablet data. Our first studies show similar verification results for both.","PeriodicalId":297318,"journal":{"name":"2016 First International Workshop on Sensing, Processing and Learning for Intelligent Machines (SPLINE)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125443352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
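As a rough illustration of how "pressure-like" features might be read from a contact-less topography scan, the sketch below thresholds a height map to find the indented writing trace and summarises its depth distribution. The feature set, units and preprocessing are assumptions made for illustration; they are not the features used in the paper.

```python
import numpy as np

def pressure_like_features(topography, depth_threshold=0.05):
    """topography: 2-D array of surface heights (e.g. in micrometres),
    where the writing trace is indented below the paper surface.
    Returns a small vector of depth statistics as pressure surrogates."""
    surface = np.median(topography)        # reference paper level
    depth = surface - topography           # positive where indented
    trace = depth > depth_threshold        # pixels on the writing trace
    if not trace.any():
        return np.zeros(4)
    d = depth[trace]
    return np.array([
        d.mean(),                # average indentation ~ average pressure
        d.max(),                 # deepest point ~ peak pressure
        np.percentile(d, 90),    # robust high-pressure level
        trace.mean(),            # fraction of the scanned area that is written
    ])

# Toy height map: flat paper with a shallow diagonal groove.
topo = np.zeros((64, 64))
for i in range(64):
    topo[i, i] = -0.2
print(pressure_like_features(topo))
```
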
User intention understanding from scratch
Ziheng Wang, Yonggang Qi, Jun Liu, Zhanyu Ma
{"title":"User intention understanding from scratch","authors":"Ziheng Wang, Yonggang Qi, Jun Liu, Zhanyu Ma","doi":"10.1109/SPLIM.2016.7528398","DOIUrl":"https://doi.org/10.1109/SPLIM.2016.7528398","url":null,"abstract":"User intention understanding from text is an important task in NLP. In this paper, we study the problem of phone-changing intention prediction. And we propose a novel feature extraction method, which selects the most representative intention feature, to represent user's intention from text scratch. Then we adopt a supervised learning approach, that is to train SVM classifier, for intention prediction. In addition, we propose a novel phone-changing intention dataset that the text scratches and their corresponding labels are collected from real network environment. The experimental results validate the effectiveness of our proposed approach.","PeriodicalId":297318,"journal":{"name":"2016 First International Workshop on Sensing, Processing and Learning for Intelligent Machines (SPLINE)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115426042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
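The classification step described above is standard and easy to sketch with scikit-learn: TF-IDF features feeding a linear SVM. The paper's own feature-selection method and dataset are not reproduced here; the texts and labels below are made up for illustration only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny made-up corpus: 1 = expresses phone-changing intention, 0 = does not.
texts = [
    "my battery dies in an hour, time for a new phone",
    "thinking about upgrading to the latest model next month",
    "the weather is lovely today",
    "just finished reading a great book",
]
labels = [1, 1, 0, 0]

# TF-IDF bag-of-words features + linear SVM, a common text-classification baseline.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(texts, labels)

print(clf.predict(["screen cracked again, I really need a new phone"]))
```
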