Empowering UAV scene perception by semantic spatio-temporal features
Danilo Cavaliere, Alessia Saggese, S. Senatore, M. Vento, V. Loia
{"title":"利用语义时空特征增强无人机场景感知能力","authors":"Danilo Cavaliere, Alessia Saggese, S. Senatore, M. Vento, V. Loia","doi":"10.1109/EE1.2018.8385272","DOIUrl":null,"url":null,"abstract":"The use of unmanned aerial vehicles (UAVs) is becoming a key asset in different application domains: from the military to surveillance tasks; to filming and journalism to shipping and delivery; to disaster monitoring to rescue operation and healthcare. One of the most desirable UAV capabilities is a human-like scenario understanding, i.e., the object recognition and interactions with other objects and with the environment, through the scene evolution, in order to get a high-view scenario description. The paper presents a semantic-enhanced approach for UAV-based surveillance systems. The video analysis is extended and enriched with semantic high level data to provide a global view of the video scenes. Semantic Web technologies provide the expressive power to describe semantically scenes appearing in the videos. The synergy between the video tracking methods and the semantic web technologies provides a new high-level human-like interpretation of the scenario. The approach focuses on the event understanding at semantic level: it is coded as spatio-temporal relation which joins fixed or mobile objects, with respect to a given temporal sequence of video frames. The system is composed of two macro components: one devoted to the tracking activities, i.e., the object identification and classification, the other enriches tracking data semantically, where the ontology-based scenario model is the bridge between the components. A reasoning component applied to the semantic knowledge, extracted from the scenario, infers new statements that describe the detected events occurring in the video.","PeriodicalId":173047,"journal":{"name":"2018 IEEE International Conference on Environmental Engineering (EE)","volume":"145 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":"{\"title\":\"Empowering UAV scene perception by semantic spatio-temporal features\",\"authors\":\"Danilo Cavaliere, Alessia Saggese, S. Senatore, M. Vento, V. Loia\",\"doi\":\"10.1109/EE1.2018.8385272\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The use of unmanned aerial vehicles (UAVs) is becoming a key asset in different application domains: from the military to surveillance tasks; to filming and journalism to shipping and delivery; to disaster monitoring to rescue operation and healthcare. One of the most desirable UAV capabilities is a human-like scenario understanding, i.e., the object recognition and interactions with other objects and with the environment, through the scene evolution, in order to get a high-view scenario description. The paper presents a semantic-enhanced approach for UAV-based surveillance systems. The video analysis is extended and enriched with semantic high level data to provide a global view of the video scenes. Semantic Web technologies provide the expressive power to describe semantically scenes appearing in the videos. The synergy between the video tracking methods and the semantic web technologies provides a new high-level human-like interpretation of the scenario. The approach focuses on the event understanding at semantic level: it is coded as spatio-temporal relation which joins fixed or mobile objects, with respect to a given temporal sequence of video frames. 
The system is composed of two macro components: one devoted to the tracking activities, i.e., the object identification and classification, the other enriches tracking data semantically, where the ontology-based scenario model is the bridge between the components. A reasoning component applied to the semantic knowledge, extracted from the scenario, infers new statements that describe the detected events occurring in the video.\",\"PeriodicalId\":173047,\"journal\":{\"name\":\"2018 IEEE International Conference on Environmental Engineering (EE)\",\"volume\":\"145 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-03-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 IEEE International Conference on Environmental Engineering (EE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/EE1.2018.8385272\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE International Conference on Environmental Engineering (EE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/EE1.2018.8385272","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
2018 IEEE International Conference on Environmental Engineering (EE), 12 March 2018. DOI: 10.1109/EE1.2018.8385272
Abstract: The use of unmanned aerial vehicles (UAVs) is becoming a key asset in many application domains: from military and surveillance tasks, to filming and journalism, to shipping and delivery, to disaster monitoring, rescue operations, and healthcare. One of the most desirable UAV capabilities is a human-like understanding of the scenario, i.e., recognizing objects and their interactions with other objects and with the environment as the scene evolves, in order to obtain a high-level scenario description. The paper presents a semantic-enhanced approach for UAV-based surveillance systems. The video analysis is extended and enriched with high-level semantic data to provide a global view of the video scenes. Semantic Web technologies provide the expressive power to semantically describe the scenes appearing in the videos. The synergy between video tracking methods and Semantic Web technologies yields a new, high-level, human-like interpretation of the scenario. The approach focuses on event understanding at the semantic level: an event is encoded as a spatio-temporal relation joining fixed or mobile objects over a given temporal sequence of video frames. The system is composed of two macro components: one devoted to tracking activities, i.e., object identification and classification; the other enriches the tracking data semantically, with an ontology-based scenario model acting as the bridge between the two. A reasoning component, applied to the semantic knowledge extracted from the scenario, infers new statements that describe the detected events occurring in the video.
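
To make the described pipeline concrete, below is a minimal sketch, assuming Python with rdflib, of how tracking output (object identities, class labels, per-frame positions) might be asserted into an ontology-based scenario model, and how a simple reasoning step could infer a spatio-temporal relation between objects. The namespace http://example.org/scenario#, the class and property names (Vehicle, Person, about, atFrame, near), and the distance threshold are hypothetical illustrations, not the paper's actual ontology or rules.

```python
# Minimal sketch: bridging tracking data to an ontology-based scenario model
# with rdflib, then inferring a "near" relation via a SPARQL query.
# All names below are illustrative assumptions, not the paper's ontology.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

SCN = Namespace("http://example.org/scenario#")  # hypothetical namespace

g = Graph()
g.bind("scn", SCN)

# Example output of the tracking component: (object id, class, frame, position).
detections = [
    ("obj1", "Vehicle", 42, (10.0, 20.0)),
    ("obj2", "Person", 42, (12.0, 21.0)),
]

# Semantic enrichment: assert each detection as ontology statements,
# one observation node per object per frame.
for obj_id, label, frame, (x, y) in detections:
    node = SCN[obj_id]
    g.add((node, RDF.type, SCN[label]))
    obs = SCN[f"{obj_id}_f{frame}"]
    g.add((obs, SCN.about, node))
    g.add((obs, SCN.atFrame, Literal(frame, datatype=XSD.integer)))
    g.add((obs, SCN.x, Literal(x, datatype=XSD.double)))
    g.add((obs, SCN.y, Literal(y, datatype=XSD.double)))

# Toy reasoning step: two objects observed in the same frame within
# 5 units of each other (squared distance < 25) are asserted as "near".
q = """
SELECT ?a ?b ?f WHERE {
    ?oa scn:about ?a ; scn:atFrame ?f ; scn:x ?xa ; scn:y ?ya .
    ?ob scn:about ?b ; scn:atFrame ?f ; scn:x ?xb ; scn:y ?yb .
    FILTER (?a != ?b)
    FILTER ((?xa - ?xb) * (?xa - ?xb) + (?ya - ?yb) * (?ya - ?yb) < 25)
}
"""
# Materialize results before mutating the graph, then add inferred statements.
for a, b, f in list(g.query(q, initNs={"scn": SCN})):
    g.add((a, SCN.near, b))
    print(f"inferred: {a} near {b} at frame {f}")
```

Since the toy "near" relation is symmetric, the query produces both (a, b) and (b, a); a production rule base would typically deduplicate or declare the property symmetric in the ontology. The same pattern extends to temporal sequences: a relation such as "approaching" could be inferred by comparing distances across consecutive frame observations.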