Zipei Chen, Yumeng Li, Zhong Ren, Yao-Xiang Ding, Kun Zhou
{"title":"外观作为可靠的证据:调和外观和生成先验的单目运动估计","authors":"Zipei Chen , Yumeng Li , Zhong Ren, Yao-Xiang Ding, Kun Zhou","doi":"10.1016/j.cag.2025.104404","DOIUrl":null,"url":null,"abstract":"<div><div>Monocular motion estimation in real scenes is challenging with the presence of noisy and possibly occluded detections. The recent method proposes to introduce a diffusion-based generative motion prior, which treats input detections as noisy partial evidence and generates motion through denoising. This advances robustness and motion quality, yet regardless of whether the denoised motion is close to visual observation, which often causes misalignment. In this work, we propose to reconcile model appearance and motion prior, which enables appearance to play the crucial role of providing reliable noise-free visual evidence for accurate visual alignment. The appearance is modeled by the radiance of both scene and human for joint differentiable rendering. To achieve this with monocular RGB input without mask and depth, we propose a semantic-perturbed mode estimation method to faithfully estimate static scene radiance from dynamic input with complex occlusion relationships, and a polyline depth calibration method to leverage knowledge from depth estimation model to recover the missing depth information. Meanwhile, to leverage knowledge from motion prior and reconcile it with the appearance guidance during optimization, we also propose an occlusion-aware gradient merging strategy. Experimental results demonstrate that our method achieves better-aligned tracking results while maintaining competitive motion quality. Our code is released at <span><span>https://github.com/Zipei-Chen/Appearance-as-Reliable-Evidence-implementation</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"132 ","pages":"Article 104404"},"PeriodicalIF":2.8000,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Appearance as reliable evidence: Reconciling appearance and generative priors for monocular motion estimation\",\"authors\":\"Zipei Chen , Yumeng Li , Zhong Ren, Yao-Xiang Ding, Kun Zhou\",\"doi\":\"10.1016/j.cag.2025.104404\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Monocular motion estimation in real scenes is challenging with the presence of noisy and possibly occluded detections. The recent method proposes to introduce a diffusion-based generative motion prior, which treats input detections as noisy partial evidence and generates motion through denoising. This advances robustness and motion quality, yet regardless of whether the denoised motion is close to visual observation, which often causes misalignment. In this work, we propose to reconcile model appearance and motion prior, which enables appearance to play the crucial role of providing reliable noise-free visual evidence for accurate visual alignment. The appearance is modeled by the radiance of both scene and human for joint differentiable rendering. To achieve this with monocular RGB input without mask and depth, we propose a semantic-perturbed mode estimation method to faithfully estimate static scene radiance from dynamic input with complex occlusion relationships, and a polyline depth calibration method to leverage knowledge from depth estimation model to recover the missing depth information. 
Meanwhile, to leverage knowledge from motion prior and reconcile it with the appearance guidance during optimization, we also propose an occlusion-aware gradient merging strategy. Experimental results demonstrate that our method achieves better-aligned tracking results while maintaining competitive motion quality. Our code is released at <span><span>https://github.com/Zipei-Chen/Appearance-as-Reliable-Evidence-implementation</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":50628,\"journal\":{\"name\":\"Computers & Graphics-Uk\",\"volume\":\"132 \",\"pages\":\"Article 104404\"},\"PeriodicalIF\":2.8000,\"publicationDate\":\"2025-09-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers & Graphics-Uk\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0097849325002456\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, SOFTWARE ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers & Graphics-Uk","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0097849325002456","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Appearance as reliable evidence: Reconciling appearance and generative priors for monocular motion estimation
Monocular motion estimation in real scenes is challenging in the presence of noisy and possibly occluded detections. A recent method introduces a diffusion-based generative motion prior, which treats input detections as noisy partial evidence and generates motion through denoising. This improves robustness and motion quality, yet it disregards whether the denoised motion stays close to the visual observation, which often causes misalignment. In this work, we propose to reconcile appearance modeling and the generative motion prior, enabling appearance to play the crucial role of providing reliable, noise-free visual evidence for accurate visual alignment. Appearance is modeled by the radiance of both the scene and the human for joint differentiable rendering. To achieve this from monocular RGB input without masks or depth, we propose a semantic-perturbed mode estimation method that faithfully estimates static scene radiance from dynamic input with complex occlusion relationships, and a polyline depth calibration method that leverages knowledge from a depth estimation model to recover the missing depth information. Meanwhile, to leverage knowledge from the motion prior and reconcile it with appearance guidance during optimization, we also propose an occlusion-aware gradient merging strategy. Experimental results demonstrate that our method achieves better-aligned tracking results while maintaining competitive motion quality. Our code is released at https://github.com/Zipei-Chen/Appearance-as-Reliable-Evidence-implementation.
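The abstract does not spell out how the occlusion-aware gradient merging works; the sketch below is only an illustration of the general idea (the function name, the per-joint visibility heuristic, and the threshold are assumptions, not the authors' implementation). It blends per-joint gradients from an appearance (differentiable-rendering) loss with gradients from the diffusion motion prior, trusting appearance only where the body is judged visible:

```python
import torch

def merge_gradients(grad_appearance: torch.Tensor,
                    grad_prior: torch.Tensor,
                    visibility: torch.Tensor,
                    occlusion_threshold: float = 0.5) -> torch.Tensor:
    """Illustrative occlusion-aware gradient merge (hypothetical, not the paper's code).

    grad_appearance: gradient of the appearance (rendering) loss w.r.t. pose, shape (J, D)
    grad_prior:      gradient of the motion-prior objective w.r.t. pose, shape (J, D)
    visibility:      estimated per-joint visibility in [0, 1], shape (J,)
    """
    # Weight appearance gradients by visibility, but ignore them entirely
    # for joints that are likely occluded.
    w = visibility.clamp(0.0, 1.0).unsqueeze(-1)                   # (J, 1)
    w = torch.where(w > occlusion_threshold, w, torch.zeros_like(w))
    # Occluded joints fall back fully to the generative motion prior.
    return w * grad_appearance + (1.0 - w) * grad_prior


# Toy usage with random gradients for 24 joints and 3 pose dimensions per joint.
if __name__ == "__main__":
    J, D = 24, 3
    merged = merge_gradients(torch.randn(J, D), torch.randn(J, D), torch.rand(J))
    print(merged.shape)  # torch.Size([24, 3])
```

Under such a scheme, occluded joints rely entirely on the generative prior while visible joints are pulled toward the image evidence, which matches the abstract's goal of letting appearance serve as reliable, noise-free evidence wherever it is available.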
Journal introduction:
Computers & Graphics is dedicated to disseminate information on research and applications of computer graphics (CG) techniques. The journal encourages articles on:
1. Research and applications of interactive computer graphics. We are particularly interested in novel interaction techniques and applications of CG to problem domains.
2. State-of-the-art papers on late-breaking, cutting-edge research on CG.
3. Information on innovative uses of graphics principles and technologies.
4. Tutorial papers on both teaching CG principles and innovative uses of CG in education.