Neural Representation of Shape-Dependent Laplacian Eigenfunctions

Yue Chang, Otman Benchekroun, Maurizio M. Chiaramonte, Peter Yichen Chen, Eitan Grinspun
arXiv:2408.10099 | arXiv - CS - Graphics | Journal Article | Published 2024-08-19
Citations: 0

Abstract

The eigenfunctions of the Laplace operator are essential in mathematical physics, engineering, and geometry processing. Typically, these are computed by discretizing the domain and performing eigendecomposition, tying the results to a specific mesh. However, this method is unsuitable for continuously-parameterized shapes. We propose a novel representation for eigenfunctions in continuously-parameterized shape spaces, where eigenfunctions are spatial fields with continuous dependence on shape parameters, defined by minimal Dirichlet energy, unit norm, and mutual orthogonality. We implement this with multilayer perceptrons trained as neural fields, mapping shape parameters and domain positions to eigenfunction values. A unique challenge is enforcing mutual orthogonality with respect to causality, where the causal ordering varies across the shape space. Our training method therefore requires three interwoven concepts: (1) learning $n$ eigenfunctions concurrently by minimizing Dirichlet energy with unit norm constraints; (2) filtering gradients during backpropagation to enforce causal orthogonality, preventing earlier eigenfunctions from being influenced by later ones; (3) dynamically sorting the causal ordering based on eigenvalues to track eigenvalue curve crossovers. We demonstrate our method on problems such as shape family analysis, predicting eigenfunctions for incomplete shapes, interactive shape manipulation, and computing higher-dimensional eigenfunctions, on all of which traditional methods fall short.