Enhanced vector quantization for data reduction and filtering

S. Ferrari, I. Frosio, V. Piuri, N. A. Borghese
{"title":"Enhanced vector quantization for data reduction and filtering","authors":"S. Ferrari, I. Frosio, V. Piuri, N. A. Borghese","doi":"10.1109/TDPVT.2004.1335275","DOIUrl":null,"url":null,"abstract":"Modern automatic digitizers can sample huge amounts of 3D data points on the object surface in a short time. Point based graphics is becoming a popular framework to reduce the cardinality of these data sets and to filter measurement noise, without having to store in memory and process mesh connectivity. Main contribution of this paper is the introduction of soft clustering techniques in the field of point clouds processing. In this approach data points are not assigned to a single cluster, but they contribute in the determination of the position of several cluster centres. As a result a better representation of the data is achieved. In soft clustering techniques, a data set is represented with a reduced number of points called reference vectors (RV), which minimize an adequate error measure. As the position of the RVs is determined by \"learning\", which can be viewed as an iterative optimization procedure, they are inherently slow. We show here how partitioning the data domain into disjointed regions called hyperboxes (HB), the computation can be localized and the computational time reduced to linear in the number of data points (O(N)), saving more than 75% on real applications with respect to classical soft-VQ solutions, making therefore VQ suitable to the task. The procedure is suitable for a parallel HW implementation, which would lead to a complexity sublinear in N. An automatic procedure for setting the voxel side and the other parameters can be derived from the data-set analysis. Results obtained in the reconstruction of faces of both humans and puppets as well as on models from clouds of points made available on the Web are reported and discussed in comparison with other available methods.","PeriodicalId":191172,"journal":{"name":"Proceedings. 2nd International Symposium on 3D Data Processing, Visualization and Transmission, 2004. 3DPVT 2004.","volume":"71 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2004-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. 2nd International Symposium on 3D Data Processing, Visualization and Transmission, 2004. 3DPVT 2004.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TDPVT.2004.1335275","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

Modern automatic digitizers can sample huge amounts of 3D data points on the object surface in a short time. Point-based graphics is becoming a popular framework to reduce the cardinality of these data sets and to filter measurement noise, without having to store in memory and process mesh connectivity. The main contribution of this paper is the introduction of soft clustering techniques to the field of point cloud processing. In this approach, data points are not assigned to a single cluster; instead, each point contributes to the determination of the position of several cluster centres. As a result, a better representation of the data is achieved. In soft clustering techniques, a data set is represented by a reduced number of points called reference vectors (RVs), which minimize an adequate error measure. As the position of the RVs is determined by "learning", which can be viewed as an iterative optimization procedure, these techniques are inherently slow. We show here how, by partitioning the data domain into disjoint regions called hyperboxes (HBs), the computation can be localized and the computational time reduced to linear in the number of data points (O(N)), saving more than 75% in real applications with respect to classical soft-VQ solutions and therefore making VQ suitable for the task. The procedure is also suitable for a parallel HW implementation, which would lead to a complexity sublinear in N. An automatic procedure for setting the voxel side and the other parameters can be derived from the analysis of the data set. Results obtained in the reconstruction of the faces of both humans and puppets, as well as on models built from point clouds made available on the Web, are reported and discussed in comparison with other available methods.
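The abstract describes soft vector quantization whose computation is localized by partitioning the data domain into hyperboxes. As a rough illustration of that idea only (not the authors' exact algorithm, which is not detailed here), the sketch below assumes a Neural-Gas-style soft update in which each point moves all reference vectors of its own hyperbox with a weight that decays with the distance rank; the function `soft_vq_hyperbox` and all of its parameters are hypothetical names introduced for this example.

```python
import numpy as np

def soft_vq_hyperbox(points, box_side, rvs_per_box=4, epochs=10,
                     lr=0.1, softness=0.5):
    """Illustrative soft vector quantization with hyperbox partitioning.

    Each 3D point updates only the reference vectors (RVs) inside its own
    hyperbox, weighted by a soft-assignment factor, so the cost per point
    stays constant and the total cost is linear in the number of points.
    (Hypothetical sketch; not the paper's exact procedure.)
    """
    # Assign each point to a hyperbox via integer grid coordinates.
    keys = np.floor(points / box_side).astype(int)
    boxes = {}
    for p, k in zip(points, map(tuple, keys)):
        boxes.setdefault(k, []).append(p)

    all_rvs = []
    for key, pts in boxes.items():
        pts = np.asarray(pts)
        # Initialise the local RVs from randomly chosen points in the box.
        idx = np.random.choice(len(pts), size=min(rvs_per_box, len(pts)),
                               replace=False)
        rvs = pts[idx].copy()
        for _ in range(epochs):
            for p in pts:
                # Soft assignment: every local RV moves toward the point,
                # with a weight that decreases with its distance rank.
                order = np.argsort(np.linalg.norm(rvs - p, axis=1))
                for rank, j in enumerate(order):
                    w = np.exp(-rank / softness)
                    rvs[j] += lr * w * (p - rvs[j])
        all_rvs.append(rvs)
    return np.vstack(all_rvs)

# Example: reduce a noisy synthetic point cloud to a set of RVs.
if __name__ == "__main__":
    cloud = np.random.rand(10000, 3)
    reduced = soft_vq_hyperbox(cloud, box_side=0.2)
    print(len(cloud), "->", len(reduced), "reference vectors")
```

Because each point only interacts with the handful of RVs stored in its own hyperbox, the per-point work does not grow with the total number of RVs, which is what makes the overall O(N) complexity claimed in the abstract plausible; the independence of the boxes is also what would allow a parallel hardware implementation.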