Fast 3D point-cloud segmentation for interactive surfaces
E. M. Mthunzi, Christopher Getschmann, Florian Echtler
DOI: 10.1145/3447932.3491141
Companion Proceedings of the 2021 Conference on Interactive Surfaces and Spaces, published 2021-11-14
Citations: 0
Abstract
Easily accessible depth sensors have enabled using point-cloud data to augment tabletop surfaces in everyday environments. However, point-cloud operations are computationally expensive and challenging to perform in real time, particularly when targeting embedded systems without a dedicated GPU. In this paper, we propose mitigating the high computational costs by segmenting candidate interaction regions in near real-time. We contribute an open-source solution for variable depth cameras using CPU-based architectures. For validation, we employ Microsoft’s Azure Kinect and report the achieved performance. Our initial findings show that our approach takes under to segment candidate interaction regions on a tabletop surface and reduces the data volume by up to 70%. We conclude by contrasting the performance of our solution against the model-fitting approach implemented by the SurfaceStreams toolkit. Within our test scenario, our approach outperforms the RANSAC-based strategy, segmenting a tabletop’s interaction region up to 94% faster. Our results show promise for point-cloud-based approaches, even when targeting embedded solutions with limited resources.
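For context, the RANSAC-based baseline contrasted in the abstract fits a dominant plane to the point cloud and treats the plane's inliers as the tabletop surface. The following is a minimal, generic sketch of that idea in pure NumPy; the function name, parameters, and synthetic scene are illustrative assumptions, not the SurfaceStreams toolkit's or the paper's actual implementation:

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.01, rng=None):
    """Fit a dominant plane n.x + d = 0 to `points` with a basic RANSAC loop.

    Returns ((normal, d), inlier_mask) for the best-supported plane found.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        # Sample three distinct points and derive the plane through them.
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:          # degenerate (near-collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p0)
        # Keep the plane supported by the most points within `threshold`.
        dist = np.abs(points @ normal + d)
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

# Synthetic scene: a flat "tabletop" at z = 0 plus scattered clutter.
rng = np.random.default_rng(42)
table = np.column_stack([rng.uniform(-1, 1, (500, 2)), np.zeros(500)])
clutter = rng.uniform(-1, 1, (100, 3))
cloud = np.vstack([table, clutter])

model, inliers = ransac_plane(cloud, rng=rng)
```

In a segmentation pipeline, the inlier set identifies the surface itself, and points lying above the recovered plane become the candidate interaction region; discarding everything else is what yields the kind of data-volume reduction the abstract reports. The per-iteration distance check over all points is also why such model fitting can be costly on CPU-only hardware.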