CrossRay3D: Geometry and Distribution Guidance for Efficient Multimodal 3D Detection

IF 8.4 · CAS Tier 1 (Engineering & Technology) · JCR Q1 (ENGINEERING, CIVIL)
Huiming Yang;Wenzhuo Liu;Yicheng Qiao;Lei Yang;Xianzhu Zeng;Li Wang;Zhiwei Li;Zijian Zeng;Zhiying Jiang;Huaping Liu;Kunfeng Wang
DOI: 10.1109/TITS.2026.3651273
Journal: IEEE Transactions on Intelligent Transportation Systems, vol. 27, no. 5, pp. 6027–6039
Published: 2026-03-01 (Epub 2026-01-16) · Journal Article
Full text: https://ieeexplore.ieee.org/document/11355997/
Citations: 0

Abstract

CrossRay3D: Geometry and Distribution Guidance for Efficient Multimodal 3D Detection
The sparse cross-modality detector offers clear advantages over its Bird’s-Eye-View (BEV) counterpart, particularly in adaptability to downstream tasks and computational cost. However, existing sparse detectors overlook the quality of token representation, leaving them with sub-optimal foreground quality and limited performance. In this paper, we identify preserved geometric structure and class distribution as the keys to improving sparse-detector performance, and propose a Sparse Selector (SS). The core modules of SS are Ray-Aware Supervision (RAS), which preserves rich geometric information during training, and Class-Balanced Supervision, which adaptively reweights the salience of class semantics so that tokens associated with small objects are retained during token sampling. As a result, SS outperforms other sparse multi-modal detectors in token representation. Additionally, we design Ray Positional Encoding (Ray PE) to address the distribution differences between the LiDAR and image modalities. Finally, we integrate these modules into an end-to-end sparse multi-modality detector, dubbed CrossRay3D. Experiments show that, on the challenging nuScenes benchmark, CrossRay3D achieves state-of-the-art performance with 72.4% mAP and 74.7% NDS, while running 1.84× faster than other leading methods. Moreover, CrossRay3D remains robust even when LiDAR or camera data are partially or entirely missing. The code is available at https://github.com/xuehaipiaoxiang/CrossRay3D
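The class-balanced reweighting idea in the abstract can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's actual formulation: the function name `class_balanced_topk`, the inverse-frequency weighting scheme, and all tensor shapes are assumptions made for illustration only.

```python
import numpy as np

def class_balanced_topk(salience, class_probs, class_freq, k):
    """Hypothetical sketch of class-balanced token sampling.

    salience:    (N,) foreground salience score per token
    class_probs: (N, C) per-token class probabilities
    class_freq:  (C,) relative frequency of each class in the data
    k:           number of tokens to keep

    Tokens whose dominant class is rare (e.g. small objects) have their
    salience up-weighted by the inverse class frequency, so they are less
    likely to be discarded by the top-k selection.
    """
    salience = np.asarray(salience, dtype=float)
    class_probs = np.asarray(class_probs, dtype=float)
    class_freq = np.asarray(class_freq, dtype=float)

    # Inverse-frequency weight per class, normalised to mean 1 so the
    # overall salience scale is unchanged.
    inv_freq = 1.0 / np.maximum(class_freq, 1e-6)
    inv_freq /= inv_freq.mean()

    # Expected weight per token under its class distribution.
    token_weight = class_probs @ inv_freq
    reweighted = salience * token_weight

    # Keep the k highest-scoring tokens (indices returned in sorted order).
    keep = np.argsort(-reweighted)[:k]
    return np.sort(keep)
```

Under this toy scheme, a rare-class token with modest raw salience can outrank a common-class token with higher raw salience, which is the behaviour the abstract attributes to Class-Balanced Supervision for small objects.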
Source Journal
IEEE Transactions on Intelligent Transportation Systems (Engineering: Electrical & Electronic)
CiteScore: 14.80
Self-citation rate: 12.90%
Annual publications: 1872
Review time: 7.5 months
Aims & Scope: The theoretical, experimental and operational aspects of electrical and electronics engineering and information technologies as applied to Intelligent Transportation Systems (ITS). Intelligent Transportation Systems are defined as those systems utilizing synergistic technologies and systems engineering concepts to develop and improve transportation systems of all kinds. The scope of this interdisciplinary activity includes the promotion, consolidation and coordination of ITS technical activities among IEEE entities, and providing a focus for cooperative activities, both internally and externally.