{"title":"Parking–Occupancy Detection Through Adaptive Multisensor Camera-CNN Fusion","authors":"Vincent Lassen;Maximilian Lübke;Norman Franchi","doi":"10.1109/LSENS.2025.3593908","DOIUrl":null,"url":null,"abstract":"A robust multicamera sensor system for parking–occupancy detection is introduced, combining convolutional neural networks with an adaptive fusion mechanism that leverages angular diversity. The proposed pipeline integrates viewpoint-specific bounding-box components and a distortion–reduction module that compensates for perspective-induced deformations. Under different azimuth angles and illumination conditions, including overcast, sunny, and nighttime scenarios, the fusion approach consistently outperformed single-camera systems. Notably, fusing cameras at 0<inline-formula><tex-math>$^\\circ$</tex-math></inline-formula> and 90<inline-formula><tex-math>$^\\circ$</tex-math></inline-formula> yielded an intersection-over-union (IoU) of 0.898 without correction, while the distortion–reduction module improved IoU from 0.734 to 0.856 in geometrically challenging cases. The method also maintained robust performance in low-light environments, where individual camera views degraded. Designed for scalability and minimal calibration effort, the architecture supports geometry-consistent localization across multiple sensor perspectives. These results demonstrate that combining angular fusion with correction-aware processing offers substantial gains in precision and robustness. 
The system is particularly suited for real-world deployment in smart parking applications under complex environmental conditions.","PeriodicalId":13014,"journal":{"name":"IEEE Sensors Letters","volume":"9 9","pages":"1-4"},"PeriodicalIF":2.2000,"publicationDate":"2025-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Sensors Letters","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/11103573/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
A robust multicamera sensor system for parking-occupancy detection is introduced, combining convolutional neural networks with an adaptive fusion mechanism that leverages angular diversity. The proposed pipeline integrates viewpoint-specific bounding-box components and a distortion-reduction module that compensates for perspective-induced deformations. Under different azimuth angles and illumination conditions, including overcast, sunny, and nighttime scenarios, the fusion approach consistently outperformed single-camera systems. Notably, fusing cameras at 0° and 90° yielded an intersection-over-union (IoU) of 0.898 without correction, while the distortion-reduction module improved IoU from 0.734 to 0.856 in geometrically challenging cases. The method also maintained robust performance in low-light environments, where individual camera views degraded. Designed for scalability and minimal calibration effort, the architecture supports geometry-consistent localization across multiple sensor perspectives. These results demonstrate that combining angular fusion with correction-aware processing offers substantial gains in precision and robustness. The system is particularly suited for real-world deployment in smart parking applications under complex environmental conditions.
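The IoU figures quoted above (0.898, 0.734, 0.856) follow the standard intersection-over-union metric for axis-aligned bounding boxes. The letter does not publish its evaluation code, so the following is only a minimal reference sketch of how such scores are conventionally computed; boxes are assumed to be `(x1, y1, x2, y2)` tuples.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2).

    Returns a value in [0, 1]; 1.0 means the boxes coincide exactly,
    0.0 means they do not overlap at all.
    """
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])

    # Clamp to zero when the boxes are disjoint.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter

    return inter / union if union > 0 else 0.0


# Example: two unit-overlap boxes -> intersection 1, union 7.
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # ~0.142857
```

An IoU of roughly 0.9, as reported for the 0°/90° fusion, therefore indicates near-exact agreement between predicted and ground-truth parking-slot boxes.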