{"title":"基于多特征和降维方案的激光雷达行人检测","authors":"Sin-Ye Jhong, Yu-Quan Wang, Wei Cheng, Hao-Wei Hwang, Yung-Yao Chen","doi":"10.1109/ICSSE55923.2022.9948235","DOIUrl":null,"url":null,"abstract":"In recent years, the development of LiDAR received plenty of attention in autopilot filed because of its long-distant detection and 360-degree vision which make self-driving safer under harsh situation. However, because point cloud is sparse and has inhomogeneous representation, it may lead to error detection, especially when detecting distant or small objects (e.g., pedestrians, traffic signs). In order to mitigate lack of feature information which is triggered by above issues, some researchers proposed multi-views or multi sensor fusion methods to increase dimensionality of feature to complement missing information, but it would result in an unnecessary computation because of feature redundancy and imbalance. In this paper, we proposed a scheme of dimensionality reduction based on variational autoencoder (VAE). Through its encoder, the high-dimensional feature is refined and mapped into a meaningful low-dimensional feature space that still retains representative features. We designed a pedestrian detection framework using our method and from the experimental results, our method achieved better performance compared to previous work.","PeriodicalId":220599,"journal":{"name":"2022 International Conference on System Science and Engineering (ICSSE)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"LiDAR-Based Pedestrian Detection Using Multiple Features and Dimensionality Reduction Scheme\",\"authors\":\"Sin-Ye Jhong, Yu-Quan Wang, Wei Cheng, Hao-Wei Hwang, Yung-Yao Chen\",\"doi\":\"10.1109/ICSSE55923.2022.9948235\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In recent years, the development of LiDAR received plenty of attention in autopilot filed because of its long-distant detection and 360-degree vision which make self-driving safer under harsh situation. However, because point cloud is sparse and has inhomogeneous representation, it may lead to error detection, especially when detecting distant or small objects (e.g., pedestrians, traffic signs). In order to mitigate lack of feature information which is triggered by above issues, some researchers proposed multi-views or multi sensor fusion methods to increase dimensionality of feature to complement missing information, but it would result in an unnecessary computation because of feature redundancy and imbalance. In this paper, we proposed a scheme of dimensionality reduction based on variational autoencoder (VAE). Through its encoder, the high-dimensional feature is refined and mapped into a meaningful low-dimensional feature space that still retains representative features. 
We designed a pedestrian detection framework using our method and from the experimental results, our method achieved better performance compared to previous work.\",\"PeriodicalId\":220599,\"journal\":{\"name\":\"2022 International Conference on System Science and Engineering (ICSSE)\",\"volume\":\"34 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-05-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 International Conference on System Science and Engineering (ICSSE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICSSE55923.2022.9948235\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on System Science and Engineering (ICSSE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSSE55923.2022.9948235","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
LiDAR-Based Pedestrian Detection Using Multiple Features and Dimensionality Reduction Scheme
Abstract: In recent years, LiDAR has received considerable attention in the autonomous driving field because its long-range detection and 360-degree field of view make self-driving safer under harsh conditions. However, because point clouds are sparse and inhomogeneously distributed, they can cause detection errors, especially for distant or small objects (e.g., pedestrians, traffic signs). To mitigate the lack of feature information caused by these issues, some researchers have proposed multi-view or multi-sensor fusion methods that increase the feature dimensionality to compensate for missing information, but this incurs unnecessary computation due to feature redundancy and imbalance. In this paper, we propose a dimensionality reduction scheme based on a variational autoencoder (VAE). Through its encoder, the high-dimensional feature is refined and mapped into a meaningful low-dimensional feature space that still retains representative features. We design a pedestrian detection framework using this scheme, and the experimental results show that our method achieves better performance than previous work.
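The abstract does not specify the network architecture, so as a rough illustration of the core idea (a VAE encoder compressing high-dimensional fused features into a compact latent code), here is a minimal sketch. The layer sizes, the 512-dimensional input, the 32-dimensional latent space, and the use of PyTorch are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn

class VAEEncoder(nn.Module):
    """Minimal VAE-style encoder (hypothetical; dimensions are illustrative only)."""
    def __init__(self, in_dim=512, latent_dim=32):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.fc_mu = nn.Linear(128, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(128, latent_dim)  # log-variance of q(z|x)

    def forward(self, x):
        h = self.backbone(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return z, mu, logvar

# Usage: compress a batch of hypothetical fused per-candidate LiDAR features to 32-D codes.
features = torch.randn(8, 512)
encoder = VAEEncoder(in_dim=512, latent_dim=32)
z, mu, logvar = encoder(features)
print(z.shape)  # torch.Size([8, 32])

In a standard VAE training setup, the reconstruction loss would be combined with a KL-divergence term on (mu, logvar) so that the low-dimensional space remains well-structured; the compact code z could then feed a downstream pedestrian detection head.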