F. Moldoveanu, A. Ciobanu, A. Morar, A. Moldoveanu, V. Asavei
2019 22nd International Conference on Control Systems and Computer Science (CSCS), May 2019
DOI: 10.1109/CSCS.2019.00052
Citations: 0
Real-Time Segmentation of Depth Map Frames on Mobile Devices
This paper presents a real-time method for segmenting depth map video frames on devices with limited computational capabilities. The depth maps are segmented based on edges extracted with an edge detector kernel, dividing the scene into regions separated by those edges. To provide spatio-temporal consistency of the data, an association step matches the regions in the current frame with those in previous frames. The proposed method is applicable to scenarios where the objects in the 3D scene lie at different distances from the acquisition device. The method was tested on a Raspberry Pi 2 board and on a series of Apple mobile devices (iPhone 6s, iPhone 5s, and iPad Mini), using depth data acquired with the Microsoft Kinect v2 sensor at a resolution of 512x424 pixels. Since the Raspberry Pi and the Apple devices do not fully support the Kinect v2 sensor, the tests used a recorded dataset. The algorithm's performance on the Apple devices was also evaluated with a compatible depth sensor developed by Occipital (the Structure Sensor).
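The pipeline the abstract outlines — detect depth edges with a kernel, then treat the edge-free areas between them as regions — can be sketched as follows. This is an illustrative approximation, not the authors' implementation: the 30 mm jump threshold, the 4-neighbour edge test, and the flood-fill labelling are assumptions standing in for the paper's unspecified edge detector kernel.

```python
import numpy as np
from collections import deque

def depth_edges(depth, thresh=30.0):
    """Mark pixels whose depth jumps by more than `thresh` (assumed mm)
    to a 4-neighbour -- a stand-in for the paper's edge detector kernel."""
    d = depth.astype(np.float64)
    dx = np.abs(np.diff(d, axis=1)) > thresh   # horizontal jumps, shape (h, w-1)
    dy = np.abs(np.diff(d, axis=0)) > thresh   # vertical jumps, shape (h-1, w)
    edges = np.zeros(d.shape, dtype=bool)
    edges[:, 1:] |= dx     # mark both sides of each jump
    edges[:, :-1] |= dx
    edges[1:, :] |= dy
    edges[:-1, :] |= dy
    return edges

def label_regions(edges):
    """4-connected flood fill over non-edge pixels.
    Edge pixels keep label 0; each region gets 1, 2, ..."""
    h, w = edges.shape
    labels = np.zeros((h, w), dtype=np.int32)
    current = 0
    for sy in range(h):
        for sx in range(w):
            if edges[sy, sx] or labels[sy, sx]:
                continue
            current += 1
            labels[sy, sx] = current
            queue = deque([(sy, sx)])
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and not edges[ny, nx] and not labels[ny, nx]):
                        labels[ny, nx] = current
                        queue.append((ny, nx))
    return labels, current

# Toy frame: a flat background at 800 mm with a closer object at 500 mm
# (a real Kinect v2 frame would be 512x424; this is downscaled for clarity).
frame = np.full((8, 8), 800.0)
frame[2:6, 2:6] = 500.0
edges = depth_edges(frame)
labels, n_regions = label_regions(edges)   # background and object -> 2 regions
```

The association step described in the abstract would then run on top of these labels, matching each region in the current frame against the previous frame's regions (for example by spatial overlap) to keep labels temporally consistent.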