TCFNet: Transformer and CNN Fusion Model for LiDAR Point Cloud Semantic Segmentation
Lu Ren, Jianwei Niu, Zhenchao Ouyang, Zhibin Zhang, Siyi Zheng
Scalable Computing-Practice and Experience, vol. 63, no. 1, pp. 1366-1372, December 2022
DOI: 10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00197
Citations: 0
Abstract
Dynamic scene understanding based on LiDAR point clouds is one of the critical perception tasks for self-driving vehicles. Among these tasks, point cloud semantic segmentation is highly challenging. Some existing work ignores the loss of crucial information caused by sampling and projection. Other work pursues precision with modules of high computational complexity, which are difficult to deploy on vehicle platforms with limited computing power. This paper proposes Fuse-down/Fuse-up modules for efficient down-sampling/up-sampling feature extraction. The modules combine a vision transformer, which integrates the global information of the feature map, with a CNN, which extracts local feature information. Based on these two modules, we build a transformer-and-CNN fusion network called TCFNet for point cloud semantic segmentation. Experiments on SemanticKITTI show that a suitable combination of transformer and CNN is necessary for semantic segmentation accuracy; the mIoU of our model reaches 82.7% at 10 FPS. The code can be accessed at https://github.com/donkeyofking/TCFNet.git.
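The abstract describes the Fuse-down module only at a high level: a transformer branch captures global context of the feature map while a CNN branch captures local structure, and the two are fused before the resolution is reduced. A minimal NumPy sketch of that fusion idea follows; all layer shapes, the random projections, and the function names are hypothetical (the actual implementation is in the linked repository), so this illustrates only the global-plus-local fusion pattern, not TCFNet itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_attention(x):
    """Single-head self-attention: every token attends to all tokens (global context)."""
    d = x.shape[-1]
    wq, wk, wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ wq, x @ wk, x @ wv
    return softmax(q @ k.T / np.sqrt(d)) @ v

def local_conv(x, kernel=3):
    """Sliding-window average over neighbouring tokens, a stand-in for a
    CNN's local receptive field."""
    pad = kernel // 2
    xp = np.pad(x, ((pad, pad), (0, 0)), mode="edge")
    return np.stack([xp[i:i + kernel].mean(axis=0) for i in range(x.shape[0])])

def fuse_down(x, stride=2):
    """Hypothetical Fuse-down step: concatenate the global (transformer) and
    local (CNN) branches, project back to the channel width, then reduce
    spatial resolution by strided subsampling."""
    d = x.shape[-1]
    fused = np.concatenate([global_attention(x), local_conv(x)], axis=-1)
    w = rng.standard_normal((2 * d, d)) / np.sqrt(2 * d)
    return (fused @ w)[::stride]

feats = rng.standard_normal((16, 8))  # 16 tokens from a projected range image, 8 channels
out = fuse_down(feats)
print(out.shape)  # (8, 8): spatial resolution halved, channel width preserved
```

A Fuse-up module would invert the last step (upsampling instead of strided subsampling) while keeping the same two-branch fusion.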
Journal Introduction:
The area of scalable computing has matured and reached a point where new issues and trends require a professional forum. SCPE will provide this avenue by publishing original refereed papers that address the present as well as the future of parallel and distributed computing. The journal will focus on algorithm development, implementation and execution on real-world parallel architectures, and application of parallel and distributed computing to the solution of real-life problems.