{"title":"Real-Time Pedestrian Detection using YOLO","authors":"Sarthak Mishra, S. Jabin","doi":"10.1109/REEDCON57544.2023.10151150","DOIUrl":null,"url":null,"abstract":"Detecting pedestrians in a crowded scene in real time is a challenging task in monitoring and managing crowd. Many researchers around the world have addressed this task and managed to achieve satisfactory results. However, the problem of automating detection of pedestrians in the crowd is still an open issue depending on the density of crowd in a scene. To ensure safety and security, automating the crowd detection and tracking process in real time is necessary in designing a robust and secure system. Detecting and localizing objects has successfully aided in identifying the major problems with detecting pedestrians and has been a major step forward in managing crowd automatically. In this paper, we have used tiny YOLOv4. YOLO (You Only Look Once) has proved quite useful in detecting and localizing objects in an image with impressive response speed. YOLO network usually scales an entire image into fixed sized grids and then identifies and detects the region into these grids using bounding boxes. Using transfer learning on an already trained YOLO inception model on COCO dataset, detection of pedestrians in surveillance videos is handled. The paper discusses the implementation and detection performance of the proposed YOLOv4 tiny model on the UCSD pedestrian Detection dataset with promising results.","PeriodicalId":429116,"journal":{"name":"2023 International Conference on Recent Advances in Electrical, Electronics & Digital Healthcare Technologies (REEDCON)","volume":"171 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 International Conference on Recent Advances in Electrical, Electronics & Digital Healthcare Technologies (REEDCON)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/REEDCON57544.2023.10151150","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Detecting pedestrians in a crowded scene in real time is a challenging task in crowd monitoring and management. Many researchers around the world have addressed this task and achieved satisfactory results. However, automating the detection of pedestrians in a crowd remains an open problem, and its difficulty depends on the crowd density in the scene. To ensure safety and security, automating crowd detection and tracking in real time is necessary for designing a robust and secure system. Object detection and localization has helped identify the major challenges in pedestrian detection and has been a major step toward managing crowds automatically. In this paper, we use YOLOv4-tiny. YOLO (You Only Look Once) has proved quite useful for detecting and localizing objects in an image at impressive speed. The YOLO network scales an entire image to a fixed input size, divides it into a grid, and predicts bounding boxes for objects within the grid cells. Using transfer learning on a YOLO model pre-trained on the COCO dataset, we detect pedestrians in surveillance videos. The paper discusses the implementation and detection performance of the proposed YOLOv4-tiny model on the UCSD Pedestrian Detection dataset, with promising results.
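To illustrate the kind of pipeline the abstract describes, the following is a minimal sketch of running a COCO-pretrained YOLOv4-tiny model on surveillance video frames using OpenCV's DNN module and keeping only the "person" class. The file names, video path, input size, and thresholds are assumptions for illustration; the paper's transfer-learning fine-tuning on the UCSD dataset is not shown here.

```python
# Sketch: pedestrian detection on video frames with a COCO-pretrained YOLOv4-tiny
# model via OpenCV's DNN module. File names, input size, and thresholds are
# assumptions; the paper's transfer-learning step is not reproduced here.
import cv2

# Load the Darknet config and weights (hypothetical local file names).
net = cv2.dnn.readNetFromDarknet("yolov4-tiny.cfg", "yolov4-tiny.weights")
model = cv2.dnn_DetectionModel(net)
# YOLO resizes the whole frame to a fixed input size and predicts boxes per grid cell.
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

PERSON_CLASS_ID = 0  # "person" is class 0 in the COCO label ordering

cap = cv2.VideoCapture("surveillance.mp4")  # placeholder video path
while True:
    ok, frame = cap.read()
    if not ok:
        break
    class_ids, scores, boxes = model.detect(frame, confThreshold=0.4, nmsThreshold=0.4)
    for class_id, score, box in zip(class_ids, scores, boxes):
        if int(class_id) != PERSON_CLASS_ID:
            continue  # keep only pedestrian detections
        x, y, w, h = box
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, f"person {float(score):.2f}", (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imshow("pedestrians", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

In practice, fine-tuning the COCO-pretrained weights on a pedestrian dataset (as the paper does via transfer learning) would replace the generic weights loaded above; the inference loop itself stays the same.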