Title: Unleashing the power of generative adversarial networks: A novel machine learning approach for vehicle detection and localisation in the dark
Authors: Md Saif Hassan Onim, Hussain Nyeem, Md. Wahiduzzaman Khan Arnob, Arunima Dey Pooja
Journal: Cognitive Computation and Systems, vol. 5, no. 3, pp. 169-180 (JCR Q4, Computer Science, Artificial Intelligence; IF 1.2)
Publication date: 2023-09-02 (Journal Article)
DOI: 10.1049/ccs2.12085
Article: https://onlinelibrary.wiley.com/doi/10.1049/ccs2.12085
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/ccs2.12085
Citations: 0
Abstract
Machine vision in low-light conditions is a critical requirement for object detection in road transportation, particularly for assisted and autonomous driving. Existing vision-based techniques are limited to daylight traffic scenarios because they rely on adequate lighting and high frame rates. This paper tackles the problem with a new machine learning model for Vehicle Detection and Localisation (VDL) in extremely low-light conditions. Specifically, the proposed model employs two customised generative adversarial networks, based on Pix2PixGAN and CycleGAN, to enhance dark images before they are fed into a YOLOv4-based VDL algorithm. The model's performance is thoroughly analysed and compared against prominent existing models. Our findings validate that the proposed model detects and localises vehicles accurately in extremely dark images, with an additional run-time of approximately 11 ms and an accuracy improvement of 10%–50% over the other models. Moreover, our model demonstrates a 4%–8% increase in Intersection over Union (IoU) at a mean frame rate of 9 fps, underscoring its potential for broader application in ubiquitous road-object detection. These results establish the proposed model as an early step towards overcoming the challenges of low-light vision in road-object detection and autonomous driving, paving the way for safer and more efficient transportation systems.
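The reported 4%–8% gain refers to Intersection over Union (IoU), the standard metric for localisation quality. As a minimal illustrative sketch (the function name and corner-coordinate box format below are assumptions for illustration, not taken from the paper), IoU for axis-aligned bounding boxes can be computed as:

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle: max of the top-left corners, min of the bottom-right corners.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    # Clamp to zero so disjoint boxes yield no intersection area.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A prediction shifted halfway off the ground truth overlaps 50 of 150 union units:
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # → 0.3333333333333333
```

An IoU of 1.0 means a perfect match, 0.0 means no overlap; detection benchmarks typically count a prediction as correct when its IoU with the ground truth exceeds a threshold such as 0.5.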