Real-Time Segmentation of Depth Map Frames on Mobile Devices
F. Moldoveanu, A. Ciobanu, A. Morar, A. Moldoveanu, V. Asavei
2019 22nd International Conference on Control Systems and Computer Science (CSCS), May 2019
DOI: 10.1109/CSCS.2019.00052
Abstract
This paper presents a real-time method for segmentation of depth map video frames on devices with limited computational capabilities. The depth maps are segmented based on edges extracted with an edge detector kernel, and the entire scene is divided into regions separated by these edges. Furthermore, to provide spatio-temporal consistency of the data, an association step is introduced to match the regions in the current frame with those in previous frames. The proposed method may be used in scenarios where the objects in the 3D scene are placed at different distances from the acquisition device. The method was tested on a Raspberry Pi 2 board and on a series of Apple mobile devices (iPhone 6s, iPhone 5s and iPad Mini), using depth data acquired with the Microsoft Kinect v2 sensor at a resolution of 512x424 pixels. Since the Raspberry Pi and the Apple devices do not fully support the Kinect v2 sensor, the tests were performed on a recorded dataset. We also evaluated the performance of the algorithm on the Apple devices using a compatible depth sensor developed by Occipital (Structure Sensor).
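To make the pipeline described above more concrete, the sketch below shows a minimal, illustrative version of the two stages the abstract mentions: edge-based segmentation of a single depth frame and a simple frame-to-frame region association. This is not the authors' implementation; the function names (segment_depth_frame, associate_regions), the Sobel kernel choice, and the threshold and overlap parameters are assumptions made for illustration only.

```python
# Illustrative sketch (not the paper's implementation): edge-based segmentation of a
# depth frame, assuming a 512x424 uint16 depth map in millimetres such as the
# Kinect v2 produces, followed by a simple overlap-based region association.
import numpy as np
import cv2


def segment_depth_frame(depth_mm: np.ndarray, edge_thresh: float = 40.0) -> np.ndarray:
    """Return a label map where each connected non-edge region gets an integer id.

    depth_mm    -- HxW depth map in millimetres (0 = invalid pixel)
    edge_thresh -- gradient magnitude (mm) above which a pixel counts as an edge;
                   the value is an arbitrary assumption for this sketch.
    """
    depth = depth_mm.astype(np.float32)

    # Gradient magnitude from 3x3 Sobel kernels in x and y.
    gx = cv2.Sobel(depth, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(depth, cv2.CV_32F, 0, 1, ksize=3)
    grad = cv2.magnitude(gx, gy)

    # Edge mask: strong depth discontinuities and invalid (zero-depth) pixels.
    edges = (grad > edge_thresh) | (depth_mm == 0)

    # Connected components over the non-edge pixels form the regions.
    non_edge = (~edges).astype(np.uint8)
    _, labels = cv2.connectedComponents(non_edge, connectivity=4)
    return labels  # 0 marks edge/invalid pixels, 1..N-1 are region ids


def associate_regions(prev_labels: np.ndarray, curr_labels: np.ndarray,
                      min_overlap: float = 0.5) -> dict:
    """Match current regions to previous ones by pixel overlap (greedy).

    Purely illustrative; the paper's association criterion is not specified here.
    Returns a mapping {current region id -> matched previous region id}.
    """
    mapping = {}
    for rid in np.unique(curr_labels):
        if rid == 0:
            continue  # skip the edge/invalid label
        mask = curr_labels == rid
        prev_ids, counts = np.unique(prev_labels[mask], return_counts=True)
        valid = prev_ids != 0
        prev_ids, counts = prev_ids[valid], counts[valid]
        if counts.size and counts.max() / mask.sum() >= min_overlap:
            mapping[int(rid)] = int(prev_ids[counts.argmax()])
    return mapping
```

A per-frame loop would call segment_depth_frame on each incoming depth map and then associate_regions against the previous frame's label map; the low computational cost of the Sobel filtering and connected-component labelling is what makes this style of approach plausible on devices like the Raspberry Pi 2 and older iPhones.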