2016 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA): Latest Publications

Ear recognition after ear lobe surgery: A preliminary study
Ramachandra Raghavendra, K. Raja, C. Busch
{"title":"Ear recognition after ear lobe surgery: A preliminary study","authors":"Ramachandra Raghavendra, K. Raja, C. Busch","doi":"10.1109/ISBA.2016.7477249","DOIUrl":"https://doi.org/10.1109/ISBA.2016.7477249","url":null,"abstract":"Earlobe surgeries are performed with the intention to correct the ear characteristics both locally and globally and also to beautify the appearance. Since performing the surgery (both for beautification and corrections) will alter the original ear features to the greater extent thereby poses a significant challenge for ear recognition. In this work, we introduce and explore this problem of ear recognition after ear lobe surgery. To this extent, we prepared a new ear surgery database comprising of 50 subjects with both pre and post surgery ear samples. We then propose a new scheme for ear recognition based on the hybrid fusion of block features extracted from the ear images using Histogram of Oriented Gradients (HoG) and Local Phase Quantisation (LPQ). We present extensive experiments on the ear surgery database by comparing the performance of eight different state-of-the-art schemes to study the effect of ear surgeries on ear recognition accuracy. The results on the ear surgery database indicate a great challenge as the eight different state-of-the-art schemes are unable to provide acceptable levels of identification performance.","PeriodicalId":198009,"journal":{"name":"2016 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129841530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
Spotting fingerspelled words from sign language video by temporally regularized canonical component analysis
Shohei Tanaka, A. Okazaki, N. Kato, H. Hino, K. Fukui
{"title":"Spotting fingerspelled words from sign language video by temporally regularized canonical component analysis","authors":"Shohei Tanaka, A. Okazaki, N. Kato, H. Hino, K. Fukui","doi":"10.1109/ISBA.2016.7477238","DOIUrl":"https://doi.org/10.1109/ISBA.2016.7477238","url":null,"abstract":"A method for spotting specific words in sign language video is proposed. In classes and talks given using Japanese Sign Language, words that do not have a defined sign, such as the names of people, objects, and places, are represented by sets of multiple characters from the Japanese finger alphabet. The difficulty of recognizing these words has created strong demand for the ability to spot specific words in order to help interpreters and the audience to follow a talk. We address the spotting task by employing the basic idea of temporal regularized canonical correlation analysis (TRCCA), which can simultaneously handle shape and motion information about a 3D object. The classification accuracy of TRCCA is enhanced by incorporating two functions: 1) parallel processing with multiple time scales, 2) strong implicit feature mapping by nonlinear orthogonalization. The enhanced TRCCA is called \"kernel orthogonal TRCCA (KOTRCCA)\". The effectiveness of the proposed method using KOTRCCA is demonstrated through experiments spotting eight different words in sign language videos.","PeriodicalId":198009,"journal":{"name":"2016 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122470982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Determining the best attributes for surveillance video keywords generation
Liangchen Liu, A. Wiliem, Shaokang Chen, Kun Zhao, B. Lovell
{"title":"Determining the best attributes for surveillance video keywords generation","authors":"Liangchen Liu, A. Wiliem, Shaokang Chen, Kun Zhao, B. Lovell","doi":"10.1109/ISBA.2016.7477239","DOIUrl":"https://doi.org/10.1109/ISBA.2016.7477239","url":null,"abstract":"Automatic video keyword generation is one of the key ingredients in reducing the burden of security officers in analyzing surveillance videos. Keywords or attributes are generally chosen manually based on expert knowledge of surveillance. Most existing works primarily aim at either supervised learning approaches relying on extensive manual labelling or hierarchical probabilistic models that assume the features are extracted using the bag-of-words approach; thus limiting the utilization of the other features. To address this, we turn our attention to automatic attribute discovery approaches. However, it is not clear which automatic discovery approach can discover the most meaningful attributes. Furthermore, little research has been done on how to compare and choose the best automatic attribute discovery methods. In this paper, we propose a novel approach, based on the shared structure exhibited amongst meaningful attributes, that enables us to compare between different automatic attribute discovery approaches. We then validate our approach by comparing various attribute discovery methods such as PiCoDeS on two attribute datasets. The evaluation shows that our approach is able to select the automatic discovery approach that discovers the most meaningful attributes. We then employ the best discovery approach to generate keywords for videos recorded from a surveillance system. This work shows it is possible to massively reduce the amount of manual work in generating video keywords without limiting ourselves to a particular video feature descriptor.","PeriodicalId":198009,"journal":{"name":"2016 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121584539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Soft biometric retrieval to describe and identify surveillance images
Daniel Martinho-Corbishley, M. Nixon, J. Carter
{"title":"Soft biometric retrieval to describe and identify surveillance images","authors":"Daniel Martinho-Corbishley, M. Nixon, J. Carter","doi":"10.1109/ISBA.2016.7477240","DOIUrl":"https://doi.org/10.1109/ISBA.2016.7477240","url":null,"abstract":"Soft biometrics are human describable, distinguishing human characteristics. We present a baseline solution to the problem of identifying individuals solely from human descriptions, by automatically retrieving soft biometric labels from images. Probe images are then identified from a gallery of known soft biometric signatures, using their predicted labels. We investigate four labelling techniques and a number of challenging re-identification scenarios with this method. We also present a novel dataset, SoBiR, consisting of 8 camera viewpoints, 100 subjects and 4 forms of comprehensive human annotation to facilitate soft biometric retrieval. We report the increased retrieval accuracy of binary labels, the generalising capability of continuous measurements and the overall performance improvement of comparative annotations over categorical annotations.","PeriodicalId":198009,"journal":{"name":"2016 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129761152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
On motion-sensor behavior analysis for human-activity recognition via smartphones
Chao Shen, Yufei Chen, Gengshan Yang
{"title":"On motion-sensor behavior analysis for human-activity recognition via smartphones","authors":"Chao Shen, Yufei Chen, Gengshan Yang","doi":"10.1109/ISBA.2016.7477231","DOIUrl":"https://doi.org/10.1109/ISBA.2016.7477231","url":null,"abstract":"A wealth of sensors on smartphones has greatly facilitated people's life, which may also provide great potential for accurate human activity recognition. This paper presents an empirical study of analyzing the behavioral characteristics of smartphone inertial sensors for human activity recognition. The rationale behind is that different human activities would cause different levels of posture and motion change of smartphone. In this work, an Android application was run as a background job to monitor data of motion sensors. Sensory data from motion sensors (mainly including accelerometer and gyroscope data) were analyzed to extracted time-, frequency-, and wavelet-domain features for accurate and fine-grained characterization of human activities. Classification technique were applied to build both personalized model and generalized model for discriminating five daily human activities: going downstairs, going upstairs, walking, running, and jumping. Analyses conducted on 18 subjects showed that these human activities can be accurately recognized from smartphone-sensor behavior, with recognition rates expressed by the area under the ROC curve ranging from 84.97% to 90.65%. We also discuss a number of avenues for additional research to advance the state of the art in this area.","PeriodicalId":198009,"journal":{"name":"2016 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128685999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 32
Sitting posture diagnosis using a pressure sensor mat
Shunsuke Suzuki, Mineichi Kudo, Atsuyoshi Nakamura
{"title":"Sitting posture diagnosis using a pressure sensor mat","authors":"Shunsuke Suzuki, Mineichi Kudo, Atsuyoshi Nakamura","doi":"10.1109/ISBA.2016.7477236","DOIUrl":"https://doi.org/10.1109/ISBA.2016.7477236","url":null,"abstract":"It is well known that taking wrong sitting posture all day long is harmful for health. However, quantifying the degree of collapse of posture is not so easy. Typically, checking the video of sitting state or examining the spinal curves in radiograph is made so far, but it needs a clinical specialist for diagnosis. In this paper, for attaining this goal more easily, we give a device usable in daily life. We measure a time series of the pressure distributions obtained from a mat with 16×16 pressure sensors put on a chair. More concretely we analyze the sitting person's move direction and the amount, and make a daily diagnosis report aiming at helping his/her own notice. Through the experiments using two subjects who spent around three hours at the chair, a possibility of automatic and daily-basis diagnosis of sitting posture is demonstrated.","PeriodicalId":198009,"journal":{"name":"2016 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127659236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
Recognizing human faces under disguise and makeup
Tsung Ying Wang, Ajay Kumar
{"title":"Recognizing human faces under disguise and makeup","authors":"Tsung Ying Wang, Ajay Kumar","doi":"10.1109/ISBA.2016.7477243","DOIUrl":"https://doi.org/10.1109/ISBA.2016.7477243","url":null,"abstract":"The accuracy of automated human face recognition algorithms can significantly degrade while recognizing same subjects under make-up and disguised appearances. Increasing constraints on enhanced security and surveillance requires enhanced accuracy from face recognition algorithms for faces under disguise and/or makeup. This paper presents a new database for face images under disguised and make-up appearances the development of face recognition algorithms under such covariates. This database has 2460 images from 410 different subjects and is acquired under real environment, focuses on make-up and disguises covariates and also provides ground truth (eye glass, goggle, mustache, beard) for every image. This can enable developed algorithms to automatically quantify their capability for identifying such important disguise attribute during the face recognition We also present comparative experimental results from two popular commercial matchers and from recent publications. Our experimental results suggest significant performance degradation in the capability of these matchers in automatically recognizing these faces. We also analyze face detection accuracy from these matchers. The experimental results underline the challenges in recognizing faces under these covariates. Availability of this new database in public domain will help to advance much needed research and development in recognizing make-up and disguised faces.","PeriodicalId":198009,"journal":{"name":"2016 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133534859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 40
Deep feature-based face detection on mobile devices
Sayantan Sarkar, Vishal M. Patel, R. Chellappa
{"title":"Deep feature-based face detection on mobile devices","authors":"Sayantan Sarkar, Vishal M. Patel, R. Chellappa","doi":"10.1109/ISBA.2016.7477230","DOIUrl":"https://doi.org/10.1109/ISBA.2016.7477230","url":null,"abstract":"We propose a deep feature-based face detector for mobile devices to detect user's face acquired by the front-facing camera. The proposed method is able to detect faces in images containing extreme pose and illumination variations as well as partial faces. The main challenge in developing deep feature-based algorithms for mobile devices is the constrained nature of the mobile platform and the non-availability of CUDA enabled GPUs on such devices. Our implementation takes into account the special nature of the images captured by the front-facing camera of mobile devices and exploits the GPUs present in mobile devices without CUDA-based frameworks, to meet these challenges.","PeriodicalId":198009,"journal":{"name":"2016 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134196960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 38
Face alignment via an ensemble of random ferns
Xiang Xu, S. Shah, I. Kakadiaris
{"title":"Face alignment via an ensemble of random ferns","authors":"Xiang Xu, S. Shah, I. Kakadiaris","doi":"10.1109/ISBA.2016.7477237","DOIUrl":"https://doi.org/10.1109/ISBA.2016.7477237","url":null,"abstract":"This paper proposes a simple but efficient shape regression method for face alignment using an ensemble of random ferns. First, a classification method is used to obtain several mean shapes for initialization. Second, an ensemble of local random ferns is learned based on the correlation between the projected regression targets and local pixel-difference matrix for each landmark. Third, the ensemble of random ferns is used to generate local binary features. Finally, the global projection matrix is learned based on concatenated binary features using ridge regression. The results demonstrate that the proposed method is efficient and accurate when compared with the state-of-the-art face alignment methods and achieve the best performance on LFPW and Helen datasets.","PeriodicalId":198009,"journal":{"name":"2016 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA)","volume":"54 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114032216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Rendering or normalization? An analysis of the 3D-aided pose-invariant face recognition
Yuhang Wu, S. Shah, I. Kakadiaris
{"title":"Rendering or normalization? An analysis of the 3D-aided pose-invariant face recognition","authors":"Yuhang Wu, S. Shah, I. Kakadiaris","doi":"10.1109/ISBA.2016.7477244","DOIUrl":"https://doi.org/10.1109/ISBA.2016.7477244","url":null,"abstract":"In spite of recent progress achieved in near-frontal face recognition, the problem of pose variations prevalent in 2D facial images captured in the wild still remains a challenging and unsolved issue. Among existing approaches of pose-invariant face recognition, 3D-aided methods have been demonstrated effective and promising. In this paper, we present an extensive evaluation of two widely adopted frameworks of 3D-aided face recognition in order to compare the state-of-the-art, identify remaining issues, and offer suggestions for future research. Specifically, we compare the pose normalization and the pose synthesis (rendering) based methods in an empirical manner. The database (UHDB31) that we use covers 21 well-controlled pose variations, half of which show a combination of yaw and pitch. Through the experiments, we present the advantages and disadvantages of these two methods to provide solid data for future research in 3D-aided pose-invariant face recognition.","PeriodicalId":198009,"journal":{"name":"2016 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121537735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 23