Jeba Nega Cheltha, Chirag Sharma, Deepak Prashar, Arfat Ahmad Khan, Seifedine Kadry
Image and Vision Computing, Volume 150, Article 105234. DOI: 10.1016/j.imavis.2024.105234. Published 21 August 2024. Available at: https://www.sciencedirect.com/science/article/pii/S0262885624003391
Enhanced human motion detection with hybrid RDA-WOA-based RNN and multiple hypothesis tracking for occlusion handling
Human motion detection in complex scenarios is challenging due to occlusions. This paper presents an integrated approach to accurate human motion detection that combines Adapted Canny Edge detection as a preprocessing step, a backbone-modified Mask R-CNN for precise segmentation, a Hybrid RDA-WOA-based RNN as the classifier, and a Multiple-hypothesis model for effective occlusion handling. Adapted Canny Edge detection is applied as an initial preprocessing step to highlight significant edges in the input image. The resulting edge map enhances object boundaries and highlights structural features, simplifying subsequent processing steps. The enhanced image is then passed through the backbone-modified Mask R-CNN for pixel-level segmentation of humans. The backbone-modified Mask R-CNN, together with IoU, Euclidean distance, and Z-score measures, accurately recognizes moving objects in complex scenes. After moving objects are recognized, the optimized Hybrid RDA-WOA-based RNN classifies humans. To handle self-occlusion, Multiple Hypothesis Tracking (MHT) is used. Real-world situations frequently involve occlusions in which humans are partially or completely hidden by objects; the proposed approach integrates a Multiple-hypothesis model into the detection pipeline to address this challenge. Moreover, the proposed human motion detection approach includes an optimized Hybrid RDA-WOA-based RNN trained on 2D representations of 3D skeletal motion. The proposed work was evaluated on the IXMAS, KTH, Weizmann, NTU RGB+D, and UCF101 datasets, achieving an accuracy of 98% on IXMAS, KTH, Weizmann, and UCF101, and 97.1% on NTU RGB+D. The simulation results demonstrate the superiority of the proposed methodology over existing approaches.
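To make the pipeline description above more concrete, the Python sketch below illustrates only the first two ideas: edge-emphasised preprocessing with a standard Canny detector and motion-candidate filtering based on IoU, Euclidean distance, and Z-score. It is a minimal illustration using OpenCV and NumPy, not the authors' implementation; the exact "Adapted" Canny variant, the backbone-modified Mask R-CNN, the RDA-WOA-optimized RNN classifier, and the MHT occlusion handling are not reproduced, and all thresholds and the edge-blending scheme are assumptions made for this example.

```python
import cv2
import numpy as np

def edge_emphasised_preprocess(frame_bgr, low=50, high=150):
    """Preprocessing sketch: standard Canny edges blended into the grayscale
    frame to highlight object boundaries. The paper's 'Adapted Canny Edge
    detection' is not specified here; thresholds and weights are assumptions."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low, high)
    return cv2.addWeighted(gray, 0.7, edges, 0.3, 0)

def iou(box_a, box_b):
    """Intersection-over-Union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter + 1e-9)

def moving_candidates(prev_boxes, curr_boxes, iou_thresh=0.3, z_thresh=2.0):
    """Flag current detections whose frame-to-frame displacement is an outlier.

    Each current box is matched to its best-overlapping previous box (IoU),
    the Euclidean distance between box centres is measured, and boxes whose
    displacement Z-score exceeds z_thresh (or that are unmatched) are kept as
    likely movers. Thresholds are illustrative, not taken from the paper."""
    centre = lambda b: np.array([(b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0])
    disp = []
    for cb in curr_boxes:
        best = max(prev_boxes, key=lambda pb: iou(cb, pb), default=None)
        if best is not None and iou(cb, best) >= iou_thresh:
            disp.append(np.linalg.norm(centre(cb) - centre(best)))
        else:
            disp.append(np.inf)  # unmatched detection: treat as new/moving
    d = np.array(disp)
    finite = d[np.isfinite(d)]
    if finite.size < 2:
        return list(range(len(curr_boxes)))
    z = (d - finite.mean()) / (finite.std() + 1e-9)
    return [i for i in range(len(curr_boxes))
            if not np.isfinite(d[i]) or z[i] > z_thresh]
```

In the paper's full pipeline, the surviving candidate regions would then be segmented at pixel level by the modified Mask R-CNN, classified by the optimized Hybrid RDA-WOA-based RNN, and tracked through occlusions with MHT; those stages are beyond this sketch.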
Journal introduction:
Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to strengthen a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.