Model-Based Knowledge-Driven Learning Approach for Enhanced High-Resolution Automotive Radar Imaging

Ruxin Zheng; Shunqiao Sun; Hongshan Liu; Honglei Chen; Jian Li

IEEE Transactions on Radar Systems, vol. 3, pp. 709-723. Published 2025-04-23. DOI: 10.1109/TRS.2025.3563492. Available at https://ieeexplore.ieee.org/document/10974998/
Abstract
Millimeter-wave (mmWave) radars are indispensable for the perception tasks of autonomous vehicles, thanks to their resilience in challenging weather and lighting conditions. Yet their deployment is often limited by insufficient spatial resolution for precise semantic scene interpretation. Classical super-resolution techniques adapted from optical imaging inadequately address the distinct characteristics of radar data. In response, our study redefines super-resolution radar imaging as a 1-D super-resolution spectral estimation problem by harnessing radar domain knowledge, and introduces innovative data normalization, signal-level augmentation, and a domain-informed signal-to-noise ratio (SNR)-guided loss function. Like an image drawn with points and lines, radar imaging can be viewed as generated from points (antenna elements) and lines (frequency spectra). Our tailored deep learning (DL) network for automotive radar imaging exhibits remarkable scalability and parameter efficiency, alongside enhanced radar imaging quality and resolution. We further present a novel real-world dataset that is pivotal for both advancing radar imaging and refining super-resolution spectral estimation techniques. Extensive testing confirms that our super-resolution angular spectral estimation network (SR-SPECNet) sets a new benchmark in producing high-resolution radar range-azimuth (RA) images, outperforming existing methods. The source code and radar dataset used for evaluation will be made publicly available at https://github.com/ruxinzh/SR-SPECNet
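To make the 1-D framing concrete, the sketch below simulates a single range bin of a uniform linear array (ULA) and computes its conventional FFT angular spectrum, the low-resolution baseline that super-resolution spectral estimators such as SR-SPECNet aim to improve upon. This is an illustrative assumption, not the authors' network or dataset: the element count, spacing, target angles, and helper names (`ula_snapshot`, `fft_angular_spectrum`) are hypothetical.

```python
import numpy as np

def ula_snapshot(angles_deg, amplitudes, n_elements=8, d=0.5, snr_db=20, rng=None):
    """Simulate one complex ULA snapshot (one range bin) with point targets.

    d is the element spacing in wavelengths; snr_db sets the noise level.
    """
    rng = np.random.default_rng(rng)
    n = np.arange(n_elements)
    x = np.zeros(n_elements, dtype=complex)
    for a, amp in zip(angles_deg, amplitudes):
        # Steering vector of a far-field source at azimuth a
        x += amp * np.exp(2j * np.pi * d * n * np.sin(np.deg2rad(a)))
    noise = rng.standard_normal(n_elements) + 1j * rng.standard_normal(n_elements)
    noise *= np.linalg.norm(x) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
    return x + noise

def fft_angular_spectrum(x, n_fft=256, d=0.5):
    """Conventional (FFT) beamforming: the low-resolution angular spectrum."""
    spec = np.abs(np.fft.fftshift(np.fft.fft(x, n_fft))) ** 2
    spatial_freq = np.fft.fftshift(np.fft.fftfreq(n_fft))  # cycles per element
    angles = np.rad2deg(np.arcsin(np.clip(spatial_freq / d, -1.0, 1.0)))
    return angles, spec

# Two targets at -10 deg and +12 deg; 8 elements give a broad mainlobe,
# which is exactly the resolution limit a learned estimator would sharpen.
x = ula_snapshot([-10.0, 12.0], [1.0, 0.8], rng=0)
angles, spec = fft_angular_spectrum(x)
peak_angle = angles[np.argmax(spec)]
```

Stacking one such estimated spectrum per range bin yields the range-azimuth (RA) image; in the paper's framing, a learned 1-D estimator replaces `fft_angular_spectrum` while the rest of the pipeline is unchanged.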