The effects of clothing on gender classification using LIDAR data
Ryan McCoppin, M. Rizki, L. Tamburino, A. Freeman, O. Mendoza-Schrock
2012 IEEE National Aerospace and Electronics Conference (NAECON), published 2012-07-25
DOI: 10.1109/NAECON.2012.6531043
Citations: 5
Abstract
In this paper we describe preliminary efforts to extend previous gender classification experiments using feature histograms extracted from 3D point clouds of human subjects. The previous experiments used point clouds drawn from the Civilian American and European Surface Anthropometry Resource (CAESAR) anthropometric database, provided by the Air Force Research Laboratory (AFRL) Human Effectiveness Directorate and SAE International. This database contains approximately 4,400 high-resolution LIDAR whole-body scans of carefully posed human subjects. Features are extracted from each point cloud by embedding the cloud in a series of cylindrical shapes and computing a point count for each cylinder that characterizes a region of the subject. These measurements define rotationally invariant histogram features that are processed by a classifier to label the gender of each subject. The recognition results with the tightly controlled CAESAR database reached levels of over 90% accuracy. A smaller secondary point cloud data set was generated at Wright State University to allow experimentation on clothed subjects that was not possible with the CAESAR data. We present preliminary results for transitioning the classification software across different combinations of training and test sets taken from both the CAESAR and clothed-subject data sets. As expected, the accuracy achieved with clothed subjects fell short of the earlier experiments using only the CAESAR data. Nevertheless, the new results provide new insights for more robust classification algorithms.
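The cylindrical point-count features described in the abstract can be illustrated with a minimal sketch. The paper does not publish its implementation, so the function below is a hypothetical reconstruction under stated assumptions: the subject's vertical axis is aligned with z, the cloud is centered on the centroid of its horizontal coordinates, and the bin counts (`num_radii`, `num_heights`) are free parameters, not the paper's values. Because each point's radial distance from the vertical axis is unchanged by rotation about that axis, the resulting histogram is rotationally invariant, as the abstract claims.

```python
import numpy as np

def cylindrical_histogram(points, num_radii=8, num_heights=16):
    """Count points in concentric cylindrical bins around the vertical axis.

    points: (N, 3) array of x, y, z coordinates, with z as height.
    Returns a flattened, normalized (num_radii * num_heights) feature vector.
    Hypothetical sketch of the paper's cylindrical-embedding features;
    bin counts and normalization are assumptions, not the published setup.
    """
    pts = np.array(points, dtype=float)          # copy so the input is untouched
    # Center the cloud on the vertical axis through its horizontal centroid.
    pts[:, :2] -= pts[:, :2].mean(axis=0)
    r = np.hypot(pts[:, 0], pts[:, 1])           # radial distance from the axis
    z = pts[:, 2]
    # Fixed grids in radius and height, scaled per subject so the
    # feature is insensitive to overall body size and sensor units.
    r_edges = np.linspace(0.0, r.max() + 1e-9, num_radii + 1)
    z_edges = np.linspace(z.min(), z.max() + 1e-9, num_heights + 1)
    hist, _, _ = np.histogram2d(r, z, bins=[r_edges, z_edges])
    return hist.ravel() / len(pts)               # normalize to a distribution
```

A feature vector like this would then be passed to an ordinary classifier (e.g. a nearest-neighbor or neural model) trained to predict the gender label, which is the pipeline the abstract outlines.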