{"title":"2 - strea [M] YOLOV8:驾驶视频的目标和运动检测","authors":"Ozlem Okur;Mehmet Kilicarslan","doi":"10.1109/TIV.2024.3448631","DOIUrl":null,"url":null,"abstract":"Object detection has numerous applications in intelligent vehicles, as it is crucial to quickly determine an object's location and movement for autonomous driving. Traditionally, most algorithms handle these tasks in sequential steps, detecting objects based on appearance features in video frames, and then analyzing their behavior through frame tracking. This study presents a novel deep learning-based object and motion detection method that uniquely combines spatial and temporal information into a single framework. The motion pattern of objects is uniform across different object classes and appears as traces in the spatial-temporal domain. These object movements can be interpreted from motion profile images even in complex driving environments. Unlike two-stage methods that rely on detection and tracking, our approach directly learns object motion from a vast dataset of driving videos, demonstrating its efficiency and practicality. It is specifically designed to address the challenges encountered in dynamic driving scenarios, proving its effectiveness and relevance in practical applications. The goal is to quickly identify objects and their motion in the driving context. Our method excels in real-time performance with interpretable motion detection in the spatial-temporal domain. 
It also demonstrates high mean average precision, <inline-formula><tex-math>$\\mathbf {78\\%}$</tex-math></inline-formula>, and low mean average error, <inline-formula><tex-math>$\\mathbf {3.09^\\circ }$</tex-math></inline-formula>, on a publicly available dataset, further validating its effectiveness and reliability.","PeriodicalId":36532,"journal":{"name":"IEEE Transactions on Intelligent Vehicles","volume":"10 5","pages":"3166-3177"},"PeriodicalIF":14.3000,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Two-Strea[M] YOLOV8: Object and Motion Detection in Driving Videos\",\"authors\":\"Ozlem Okur;Mehmet Kilicarslan\",\"doi\":\"10.1109/TIV.2024.3448631\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Object detection has numerous applications in intelligent vehicles, as it is crucial to quickly determine an object's location and movement for autonomous driving. Traditionally, most algorithms handle these tasks in sequential steps, detecting objects based on appearance features in video frames, and then analyzing their behavior through frame tracking. This study presents a novel deep learning-based object and motion detection method that uniquely combines spatial and temporal information into a single framework. The motion pattern of objects is uniform across different object classes and appears as traces in the spatial-temporal domain. These object movements can be interpreted from motion profile images even in complex driving environments. Unlike two-stage methods that rely on detection and tracking, our approach directly learns object motion from a vast dataset of driving videos, demonstrating its efficiency and practicality. It is specifically designed to address the challenges encountered in dynamic driving scenarios, proving its effectiveness and relevance in practical applications. 
The goal is to quickly identify objects and their motion in the driving context. Our method excels in real-time performance with interpretable motion detection in the spatial-temporal domain. It also demonstrates high mean average precision, <inline-formula><tex-math>$\\\\mathbf {78\\\\%}$</tex-math></inline-formula>, and low mean average error, <inline-formula><tex-math>$\\\\mathbf {3.09^\\\\circ }$</tex-math></inline-formula>, on a publicly available dataset, further validating its effectiveness and reliability.\",\"PeriodicalId\":36532,\"journal\":{\"name\":\"IEEE Transactions on Intelligent Vehicles\",\"volume\":\"10 5\",\"pages\":\"3166-3177\"},\"PeriodicalIF\":14.3000,\"publicationDate\":\"2024-09-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Intelligent Vehicles\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10663920/\",\"RegionNum\":1,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Intelligent Vehicles","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10663920/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Two-Strea[M] YOLOV8: Object and Motion Detection in Driving Videos
Object detection has numerous applications in intelligent vehicles, as it is crucial to quickly determine an object's location and movement for autonomous driving. Traditionally, most algorithms handle these tasks in sequential steps, detecting objects based on appearance features in video frames, and then analyzing their behavior through frame tracking. This study presents a novel deep learning-based object and motion detection method that uniquely combines spatial and temporal information into a single framework. The motion pattern of objects is uniform across different object classes and appears as traces in the spatial-temporal domain. These object movements can be interpreted from motion profile images even in complex driving environments. Unlike two-stage methods that rely on detection and tracking, our approach directly learns object motion from a vast dataset of driving videos, demonstrating its efficiency and practicality. It is specifically designed to address the challenges encountered in dynamic driving scenarios, proving its effectiveness and relevance in practical applications. The goal is to quickly identify objects and their motion in the driving context. Our method excels in real-time performance with interpretable motion detection in the spatial-temporal domain. It also demonstrates high mean average precision, $\mathbf {78\%}$, and low mean average error, $\mathbf {3.09^\circ }$, on a publicly available dataset, further validating its effectiveness and reliability.
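The abstract notes that object movements appear as slanted traces in motion profile images built in the spatial-temporal domain. The paper's exact construction is not given in the abstract; the snippet below is a minimal sketch of the general motion-profile idea only, with the function name `motion_profile` and the synthetic data being illustrative assumptions: one scan line is sampled from each video frame and the lines are stacked over time, so a horizontally moving object leaves a diagonal trace whose slope encodes its image-plane speed.

```python
import numpy as np

def motion_profile(frames, row):
    """Stack one scan line per frame into a time-space image.

    frames: array of shape (T, H, W) holding T grayscale frames.
    row:    the image row to sample in every frame.
    Returns an array of shape (T, W): each output row is the chosen
    scan line at one time step, so lateral object motion shows up
    as a slanted trace across the profile.
    """
    frames = np.asarray(frames)
    return frames[:, row, :]

# Synthetic example (not from the paper): a bright 3-pixel blob
# drifting right by 2 pixels per frame along row 5 of 10x64 frames.
T, H, W = 20, 10, 64
frames = np.zeros((T, H, W), dtype=np.float32)
for t in range(T):
    x = 3 + 2 * t
    frames[t, 5, x:x + 3] = 1.0

profile = motion_profile(frames, row=5)
# The trace slope recovers the blob's speed: its leftmost bright
# column advances by 2 pixels per profile row.
speeds = np.diff([int(np.argmax(profile[t])) for t in range(T)])
```

In this sketch the constant slope of the trace corresponds to constant image-plane velocity; in real driving video the traces curve with changing relative motion, which is the kind of pattern a learned model can classify directly.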
Journal introduction:
The IEEE Transactions on Intelligent Vehicles (T-IV) is a premier platform for publishing peer-reviewed articles that present innovative research concepts, application results, significant theoretical findings, and application case studies in the field of intelligent vehicles. With a particular emphasis on automated vehicles within roadway environments, T-IV aims to raise awareness of pressing research and application challenges.
Our focus is on providing critical information to the intelligent vehicle community. T-IV serves as a dissemination vehicle for IEEE ITS Society members and others interested in state-of-the-art developments and progress in research and applications related to intelligent vehicles. Join us in advancing knowledge and innovation in this dynamic field.