Fast Reconstruction of Monocular Human Video Based on KAN

IF 4.3 | CAS Region 2 (Comprehensive Journals) | JCR Q1, ENGINEERING, ELECTRICAL & ELECTRONIC
Xiaolin Ma;Yifei Zha;Zehua Dong;Hailan Kuang;Xinhua Liu
{"title":"基于KAN的单目人体视频快速重建","authors":"Xiaolin Ma;Yifei Zha;Zehua Dong;Hailan Kuang;Xinhua Liu","doi":"10.1109/JSEN.2025.3573354","DOIUrl":null,"url":null,"abstract":"Creating 3-D digital people from monocular video provides many possibilities for a wide range of users and rich applications. In this article, we propose a fast, high-quality, and effective method for creating 3-D digital humans from monocular videos, achieving fast training (2.5 min) and real-time rendering. Specifically, we use 3-D Gaussian splatting (3DGS), based on the introduction of skinned multiperson linear model (SMPL) human structure prior, and an optimized Kolmogorov-Arnold network (KAN) neural network to build effective posture and linear blend skinning (LBS) weight estimation module to quickly and accurately learn the fine details of the 3-D human body. In addition, to achieve fast optimization in the densification and prune stages, we propose a two-stage optimization method. First, the local 3-D area that needs to be densified is extracted based on LightGlue, and then KL divergence combined with human body prior is further used to guide Gaussian splitting/cloning and merging operations. We conducted extensive experiments on the ZJU_MoCap dataset, and the peak signal-to-noise ratio (PSNR) and learned perceptual image patch similarity (LPIPS) metrics indicate that we effectively improved rendering quality while ensuring rendering speed.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 13","pages":"24509-24516"},"PeriodicalIF":4.3000,"publicationDate":"2025-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Fast Reconstruction of Monocular Human Video Based on KAN\",\"authors\":\"Xiaolin Ma;Yifei Zha;Zehua Dong;Hailan Kuang;Xinhua Liu\",\"doi\":\"10.1109/JSEN.2025.3573354\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Creating 3-D digital people from monocular video provides many possibilities for a wide range of users and rich applications. In this article, we propose a fast, high-quality, and effective method for creating 3-D digital humans from monocular videos, achieving fast training (2.5 min) and real-time rendering. Specifically, we use 3-D Gaussian splatting (3DGS), based on the introduction of skinned multiperson linear model (SMPL) human structure prior, and an optimized Kolmogorov-Arnold network (KAN) neural network to build effective posture and linear blend skinning (LBS) weight estimation module to quickly and accurately learn the fine details of the 3-D human body. In addition, to achieve fast optimization in the densification and prune stages, we propose a two-stage optimization method. First, the local 3-D area that needs to be densified is extracted based on LightGlue, and then KL divergence combined with human body prior is further used to guide Gaussian splitting/cloning and merging operations. 
We conducted extensive experiments on the ZJU_MoCap dataset, and the peak signal-to-noise ratio (PSNR) and learned perceptual image patch similarity (LPIPS) metrics indicate that we effectively improved rendering quality while ensuring rendering speed.\",\"PeriodicalId\":447,\"journal\":{\"name\":\"IEEE Sensors Journal\",\"volume\":\"25 13\",\"pages\":\"24509-24516\"},\"PeriodicalIF\":4.3000,\"publicationDate\":\"2025-06-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Sensors Journal\",\"FirstCategoryId\":\"103\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/11021315/\",\"RegionNum\":2,\"RegionCategory\":\"综合性期刊\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Sensors Journal","FirstCategoryId":"103","ListUrlMain":"https://ieeexplore.ieee.org/document/11021315/","RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Creating 3-D digital people from monocular video opens many possibilities for a wide range of users and rich applications. In this article, we propose a fast, high-quality, and effective method for creating 3-D digital humans from monocular videos, achieving fast training (2.5 min) and real-time rendering. Specifically, we use 3-D Gaussian splatting (3DGS), introduce a skinned multiperson linear model (SMPL) human structure prior, and build an effective posture and linear blend skinning (LBS) weight estimation module on an optimized Kolmogorov-Arnold network (KAN) to quickly and accurately learn the fine details of the 3-D human body. In addition, to achieve fast optimization in the densification and pruning stages, we propose a two-stage optimization method: first, the local 3-D region that needs to be densified is extracted with LightGlue; then KL divergence combined with the human body prior guides Gaussian splitting/cloning and merging operations. We conducted extensive experiments on the ZJU_MoCap dataset, and the peak signal-to-noise ratio (PSNR) and learned perceptual image patch similarity (LPIPS) metrics indicate that the method effectively improves rendering quality while maintaining rendering speed.
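
For context, the weights this module estimates enter the standard linear blend skinning equation, which poses a canonical point by a weighted sum of per-joint rigid transforms (a textbook formulation stated here for reference, not quoted from the paper):

```latex
% Standard LBS: a canonical point x_c is mapped to posed space by blending
% the per-joint rigid transforms G_j(\theta) with point-dependent weights w_j.
x_p = \sum_{j=1}^{J} w_j(x_c)\, G_j(\theta)\, x_c,
\qquad \sum_{j=1}^{J} w_j(x_c) = 1, \quad w_j(x_c) \ge 0
```

where J = 24 for the SMPL skeleton and \theta denotes the pose parameters.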
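To illustrate how a KAN could serve as such an LBS weight estimator, here is a minimal, hypothetical PyTorch sketch. It is not the paper's implementation: the edge functions use Gaussian radial basis functions rather than the B-splines of the original KAN formulation, and the names (`KANLayer`, `LBSWeightHead`), layer sizes, and input choice (canonical Gaussian positions) are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KANLayer(nn.Module):
    """Minimal KAN-style layer: each input-output edge applies a learnable
    univariate function, parameterized as a mixture of Gaussian radial basis
    functions on a fixed grid (a simplification of B-spline edge functions)."""
    def __init__(self, in_dim, out_dim, num_basis=8, grid_range=(-2.0, 2.0)):
        super().__init__()
        centers = torch.linspace(grid_range[0], grid_range[1], num_basis)
        self.register_buffer("centers", centers)              # (num_basis,)
        self.inv_width = num_basis / (grid_range[1] - grid_range[0])
        # one coefficient vector per edge: (out_dim, in_dim, num_basis)
        self.coef = nn.Parameter(torch.randn(out_dim, in_dim, num_basis) * 0.1)
        self.base = nn.Linear(in_dim, out_dim)                # residual SiLU branch

    def forward(self, x):                                     # x: (B, in_dim)
        # RBF features of each scalar input: (B, in_dim, num_basis)
        phi = torch.exp(-((x.unsqueeze(-1) - self.centers) * self.inv_width) ** 2)
        # sum the learned univariate edge functions over incoming edges
        spline = torch.einsum("bik,oik->bo", phi, self.coef)
        return self.base(F.silu(x)) + spline

class LBSWeightHead(nn.Module):
    """Hypothetical LBS weight estimator: maps a Gaussian's canonical 3-D
    position to blend weights over the 24 SMPL joints."""
    def __init__(self, num_joints=24, hidden=64):
        super().__init__()
        self.net = nn.Sequential(KANLayer(3, hidden), KANLayer(hidden, num_joints))

    def forward(self, xyz):                                   # xyz: (N, 3)
        return F.softmax(self.net(xyz), dim=-1)               # weights sum to 1

# usage: predict skinning weights for 1000 Gaussian centers
weights = LBSWeightHead()(torch.randn(1000, 3))
print(weights.shape)  # torch.Size([1000, 24])
```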
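The KL-divergence criterion for the split/clone and merge decisions presumably relies on the closed form for multivariate normals; for two 3-D Gaussians N_0(mu_0, Sigma_0) and N_1(mu_1, Sigma_1) it reads:

```latex
D_{\mathrm{KL}}\!\left(\mathcal{N}_0 \,\|\, \mathcal{N}_1\right)
= \frac{1}{2}\Big[
\operatorname{tr}\!\big(\Sigma_1^{-1}\Sigma_0\big)
+ (\mu_1-\mu_0)^{\top}\Sigma_1^{-1}(\mu_1-\mu_0)
- 3
+ \ln\frac{\det\Sigma_1}{\det\Sigma_0}
\Big]
```

How the resulting thresholds are combined with the SMPL human body prior is specific to the paper and not reproduced here.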
Source journal: IEEE Sensors Journal (Engineering, Electrical & Electronic)
CiteScore: 7.70
Self-citation rate: 14.00%
Articles per year: 2058
Review time: 5.2 months
Journal description: The fields of interest of the IEEE Sensors Journal are the theory, design, fabrication, manufacturing, and applications of devices for sensing and transducing physical, chemical, and biological phenomena, with emphasis on the electronics and physics aspects of sensors and integrated sensor-actuators. IEEE Sensors Journal deals with the following:
- Sensor Phenomenology, Modelling, and Evaluation
- Sensor Materials, Processing, and Fabrication
- Chemical and Gas Sensors
- Microfluidics and Biosensors
- Optical Sensors
- Physical Sensors: Temperature, Mechanical, Magnetic, and others
- Acoustic and Ultrasonic Sensors
- Sensor Packaging
- Sensor Networks
- Sensor Applications
- Sensor Systems: Signals, Processing, and Interfaces
- Actuators and Sensor Power Systems
- Sensor Signal Processing for high precision and stability (amplification, filtering, linearization, modulation/demodulation) and under harsh conditions (EMC, radiation, humidity, temperature); energy consumption/harvesting
- Sensor Data Processing (soft computing with sensor data, e.g., pattern recognition, machine learning, evolutionary computation; sensor data fusion; processing of wave, e.g., electromagnetic and acoustic, and non-wave, e.g., chemical, gravity, particle, thermal, radiative and non-radiative, sensor data; detection, estimation, and classification based on sensor data)
- Sensors in Industrial Practice