Efficient Critical Data Generation Framework for Vision Sensors of Autonomous Vehicle Perception System
Cheng Peng; Yichun Su; Zhen Wang; Jianquan Chen; Jiaping Wang; Xiangmo Zhao; Xiaopeng Li
IEEE Sensors Letters, vol. 9, no. 7, pp. 1-4, published 18 June 2025. DOI: 10.1109/LSENS.2025.3580749
Abstract
Visual sensors are essential for the perception systems of autonomous vehicles (AVs) and for ensuring driving safety. While data-driven perception methods perform well in common scenes, they often struggle with critical situations, leading to potential system failures and accidents. To overcome these challenges, we propose a novel approach that fine-tunes large language models with an integrated heuristic scene interpreter and employs visual data generation techniques to produce data that closely mimics real-world conditions. This method is implemented on a device designed to inject the generated visual data directly into real sensors, enabling accurate performance assessments of AV perception systems. The authenticity, rationality, and quality of the generated scenes are evaluated through extensive experiments. Experimental results demonstrate that our method significantly enhances critical data generation and underscore the substantial value of our approach for improving the safety and reliability of AVs.
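The abstract names three stages: scene interpretation with a fine-tuned language model, generation of matching visual data, and injection of that data into a real sensor for evaluation. The minimal sketch below only mirrors that structure; every name in it (SceneSpec, interpret_scene, generate_frames, inject_into_sensor) is a hypothetical placeholder and does not reflect the authors' actual implementation, models, or hardware interface.

```python
"""Hypothetical sketch of the critical-data generation pipeline outlined in the
abstract. All classes and functions are illustrative placeholders, not the
authors' implementation."""

from dataclasses import dataclass
from typing import List


@dataclass
class SceneSpec:
    """Structured description of a critical driving scene."""
    weather: str
    actors: List[str]
    hazard: str


def interpret_scene(prompt: str) -> SceneSpec:
    """Stand-in for the heuristic scene interpreter guiding a fine-tuned LLM.

    A real system would query the language model; here a fixed example is
    returned so the sketch stays runnable and self-contained.
    """
    return SceneSpec(weather="heavy fog",
                     actors=["pedestrian", "truck"],
                     hazard="pedestrian crossing outside the crosswalk")


def generate_frames(spec: SceneSpec, n_frames: int = 3) -> List[str]:
    """Stand-in for the visual data generator conditioned on the scene spec.

    Returns identifiers that represent synthesized camera frames.
    """
    return [f"frame_{i}:{spec.weather}:{spec.hazard}" for i in range(n_frames)]


def inject_into_sensor(frames: List[str]) -> None:
    """Stand-in for the injection device that feeds generated frames to a
    real vision sensor so the downstream perception system can be assessed."""
    for frame in frames:
        print(f"injecting {frame} into camera interface")


if __name__ == "__main__":
    spec = interpret_scene("night rain, occluded pedestrian near intersection")
    frames = generate_frames(spec)
    inject_into_sensor(frames)
```

The value of this structure is that scene semantics (the interpreter's output) are kept separate from pixel synthesis and from the hardware injection step, so each stage can be evaluated or replaced independently; the paper itself should be consulted for the concrete models and device used at each stage.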