SPAD DCNN: Localization with small imaging LIDAR and DCNN
Seigo Ito, S. Hiratsuka, M. Ohta, H. Matsubara, Masaru Ogawa
2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3312-3317, September 2017. DOI: 10.1109/IROS.2017.8206167
A small 3D LIDAR and multimodal localization are fundamentally important for autonomous robots. This paper presents and demonstrates a sensor and a method for LIDAR-image-based localization. Our small LIDAR, named SPAD LIDAR, uses a single-photon avalanche diode (SPAD). The SPAD LIDAR incorporates a laser receiver and an environmental-light receiver on a single chip, so the sensor simultaneously outputs range data and monocular image data. By virtue of this structure, no external calibration between the range data and the monocular image data is required. Based on this sensor, we introduce a localization method using a deep convolutional neural network (SPAD DCNN), which fuses the SPAD LIDAR's outputs: range data, monocular image data, and peak intensity data. Our method regresses the LIDAR's position in an environment. We also introduce an improved variant, designated Fast SPAD DCNN, which integrates the range data and peak intensity data to reduce the computational demands of SPAD DCNN. The integrated data reduces runtime without greatly increasing localization error compared to the conventional method. We evaluate the SPAD DCNN and Fast SPAD DCNN localization methods in indoor environments and compare their performance. Results show that SPAD DCNN and Fast SPAD DCNN improve localization in terms of accuracy and runtime.
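The key property the abstract highlights is that the single-chip sensor emits pixel-aligned range, monocular image, and peak intensity maps, so fusion reduces to channel stacking with no extrinsic calibration. The following is a minimal sketch of that input-preparation step, not the paper's implementation: the 64x64 resolution, the random placeholder data, and the intensity-weighted merge used for the Fast variant are all assumptions (the abstract says only that Fast SPAD DCNN "integrates" range and peak intensity, not how).

```python
import numpy as np

H, W = 64, 64  # hypothetical sensor resolution (not given in the abstract)

# Simulated, pixel-aligned outputs of the single-chip SPAD LIDAR
rng = np.random.default_rng(0)
range_map = rng.uniform(0.5, 10.0, (H, W))      # range data, metres
mono_image = rng.uniform(0.0, 1.0, (H, W))      # environmental-light image
peak_intensity = rng.uniform(0.0, 1.0, (H, W))  # laser-return peak intensity

# SPAD DCNN input: stack all three channels directly; the chip outputs
# them co-registered, so no external calibration step is needed
spad_input = np.stack([range_map, mono_image, peak_intensity], axis=0)

# Fast SPAD DCNN input: merge range and peak intensity into one channel
# to cut computation (intensity-weighted range is a placeholder scheme)
merged = range_map * peak_intensity
fast_input = np.stack([merged, mono_image], axis=0)

print(spad_input.shape)  # 3-channel tensor for SPAD DCNN
print(fast_input.shape)  # 2-channel tensor for Fast SPAD DCNN
```

Reducing the network's input from three channels to two shrinks the first convolution layer's work, which is consistent with the reported runtime gain at little cost in localization error.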