Adaptive Entropy Multi-Modal Fusion for Nighttime Lane Segmentation
Xinyu Zhang; Xiangchen Yin; Xin Gao; Tianheng Qiu; Li Wang; Guizhen Yu; Yunpeng Wang; Guoying Zhang; Jun Li
IEEE Transactions on Intelligent Vehicles, vol. 9, no. 11, pp. 6990-7002, published 2024-04-22. DOI: 10.1109/TIV.2024.3392413. https://ieeexplore.ieee.org/document/10506546/
Abstract
Lane segmentation at night is a challenging problem in autonomous driving perception, and addressing it improves the robustness of the overall application. Existing methods have shown strong performance on benchmark datasets; however, they do not account for the poorly lit scenes encountered in practice, where the performance of lane segmentation algorithms degrades sharply at night. In this paper, we propose a novel multi-modal nighttime lane segmentation algorithm that uses camera and LiDAR as complementary sources of information. We illustrate the role of image entropy in characterizing the distribution of light at night and propose an adaptive entropy fusion method that captures the spatial relationship between entropy and the modalities, adapting to different lighting scenes. Because the features of narrow, elongated lanes are easily lost at night, a lane feature enhancement module is proposed to strengthen the network's ability to capture lane features. Extensive experiments and analysis on the SHIFT dataset at night demonstrate the effectiveness of our method against state-of-the-art semantic segmentation and lane segmentation approaches: the proposed method achieves state-of-the-art performance of 88.36%@14.06fps and 87.24%@26.88fps, showing its suitability for real-time applications.
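The abstract does not give implementation details, but the core idea of using image entropy as a cue for nighttime light distribution can be illustrated with a minimal NumPy sketch. The functions below (block_entropy, entropy_weighted_fusion) and the simple linear entropy-to-weight mapping are hypothetical illustrations under stated assumptions, not the paper's actual adaptive entropy fusion module, which learns the spatial relationship between entropy and the two modalities.

```python
import numpy as np

def block_entropy(gray, block=32, bins=256):
    """Shannon entropy (in bits) of each non-overlapping block of an
    8-bit grayscale image. Dark, low-contrast regions at night tend to
    have low entropy, while regions lit by headlights or street lamps
    have higher entropy."""
    h, w = gray.shape
    hb, wb = h // block, w // block
    ent = np.zeros((hb, wb), dtype=np.float32)
    for i in range(hb):
        for j in range(wb):
            patch = gray[i * block:(i + 1) * block, j * block:(j + 1) * block]
            hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
            p = hist / hist.sum()          # normalize counts to probabilities
            p = p[p > 0]                   # drop empty bins before the log
            ent[i, j] = -np.sum(p * np.log2(p))
    return ent

def entropy_weighted_fusion(cam_feat, lidar_feat, ent, max_ent=8.0):
    """Blend camera and LiDAR feature maps on the block grid with an
    entropy-derived weight: where image entropy is low (poor lighting),
    the LiDAR branch dominates. The linear mapping below is a placeholder
    for the learned relationship described in the paper."""
    w_cam = np.clip(ent / max_ent, 0.0, 1.0)[..., None]   # (Hb, Wb, 1)
    return w_cam * cam_feat + (1.0 - w_cam) * lidar_feat
```

As a usage sketch, `gray` would be the camera image converted to 8-bit grayscale, and `cam_feat` / `lidar_feat` would be per-block feature maps of shape (Hb, Wb, C) produced by the two encoder branches; the entropy map then decides, block by block, how much to trust each modality.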
Journal Introduction:
The IEEE Transactions on Intelligent Vehicles (T-IV) is a premier platform for publishing peer-reviewed articles that present innovative research concepts, application results, significant theoretical findings, and application case studies in the field of intelligent vehicles. With a particular emphasis on automated vehicles within roadway environments, T-IV aims to raise awareness of pressing research and application challenges.
Our focus is on providing critical information to the intelligent vehicle community, serving as a dissemination vehicle for IEEE ITS Society members and others interested in learning about state-of-the-art developments and progress in research and applications related to intelligent vehicles. Join us in advancing knowledge and innovation in this dynamic field.