Computers & Graphics-Uk: Latest Articles

Fusing multi-stage clicks with deep feedback aggregation for interactive image segmentation
IF 2.8, CAS Q4, Computer Science
Computers & Graphics-Uk Pub Date : 2025-09-24 DOI: 10.1016/j.cag.2025.104445
Jianwu Long, Yuanqin Liu, Shaoyi Wang, Shuang Chen, Qi Luo
{"title":"Fusing multi-stage clicks with deep feedback aggregation for interactive image segmentation","authors":"Jianwu Long,&nbsp;Yuanqin Liu,&nbsp;Shaoyi Wang,&nbsp;Shuang Chen,&nbsp;Qi Luo","doi":"10.1016/j.cag.2025.104445","DOIUrl":"10.1016/j.cag.2025.104445","url":null,"abstract":"<div><div>The objective of interactive image segmentation is to generate a segmentation mask for the target object using minimal user interaction. During the interaction process, segmentation results from previous iterations are typically used as feedback to guide subsequent user input. However, existing approaches often concatenate user interactions, feedback, and low-level image features as direct inputs to the network, overlooking the high-level semantic information contained in the feedback and the issue of information dilution from click signals. To address these limitations, we propose a novel interactive image segmentation model called Multi-stage Click Fusion with deep Feedback Aggregation(MCFA). MCFA introduces a new information fusion strategy. Specifically, for feedback information, it refines previous-round feedback using deep features and integrates the optimized feedback into the feature representation. For user clicks, MCFA performs multi-stage fusion to enhance click propagation while constraining its direction through the refined feedback. 
Experimental results demonstrate that MCFA consistently outperforms existing methods across five benchmark datasets: GrabCut, Berkeley, SBD, DAVIS and CVC-ClinicDB.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"133 ","pages":"Article 104445"},"PeriodicalIF":2.8,"publicationDate":"2025-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145160121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
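Click-based methods such as MCFA typically rasterize user clicks into spatial guidance maps before fusing them with image features. A minimal sketch of one common encoding (binary disk maps), assuming a `radius` hyperparameter not taken from the paper; the multi-stage fusion and feedback refinement themselves are not reproduced here:

```python
import numpy as np

def encode_clicks(shape, clicks, radius=5):
    """Rasterize clicks into a binary disk map (illustrative sketch only).

    shape: (H, W); clicks: iterable of (row, col) pixel coordinates;
    radius: disk radius in pixels (an assumed hyperparameter).
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    disk = np.zeros(shape, dtype=np.float32)
    for cy, cx in clicks:
        # mark all pixels within `radius` of the click center
        disk[(ys - cy) ** 2 + (xs - cx) ** 2 <= radius ** 2] = 1.0
    return disk
```

In practice, positive and negative clicks are encoded as separate channels and concatenated with the image (and, in feedback-driven models, the previous mask) before entering the network.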
HR-2DGS: Hybrid regularization for sparse-view 3D reconstruction with 2D Gaussian splatting
Computers & Graphics-Uk Pub Date : 2025-09-23 DOI: 10.1016/j.cag.2025.104444
Yong Tang, Jiawen Yan, Yu Li, Yu Liang, Feng Wang, Jing Zhao
{"title":"HR-2DGS: Hybrid regularization for sparse-view 3D reconstruction with 2D Gaussian splatting","authors":"Yong Tang,&nbsp;Jiawen Yan,&nbsp;Yu Li,&nbsp;Yu Liang,&nbsp;Feng Wang,&nbsp;Jing Zhao","doi":"10.1016/j.cag.2025.104444","DOIUrl":"10.1016/j.cag.2025.104444","url":null,"abstract":"<div><div>Sparse-view 3D reconstruction has garnered widespread attention due to its demand for high-quality reconstruction under low-sampling data conditions. Existing NeRF-based methods rely on dense views and substantial computational resources, while 3DGS is limited by multi-view inconsistency and insufficient geometric detail recovery, making it challenging to achieve ideal results in sparse-view scenarios. This paper introduces HR-2DGS, a novel hybrid regularization framework based on 2D Gaussian Splatting (2DGS), which significantly enhances multi-view consistency and geometric recovery by dynamically fusing monocular depth estimates with rendered depth maps, incorporating hybrid normal regularization techniques. To further refine local details, we introduce a per-pixel depth normalization that leverages each pixel’s neighborhood statistics to emphasize fine-scale geometric variations. 
Experimental results on the LLFF and DTU datasets demonstrate that HR-2DGS outperforms existing methods in terms of PSNR, SSIM, and LPIPS, while requiring only 2.5GB of memory and a few minutes of training time for efficient training and real-time rendering.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"133 ","pages":"Article 104444"},"PeriodicalIF":2.8,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145160123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
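The per-pixel depth normalization described above can be sketched as standardizing each depth value against its local neighborhood, which suppresses absolute depth and highlights fine-scale variation. A minimal NumPy version, assuming a k x k window; the paper's exact formulation may differ:

```python
import numpy as np

def local_depth_normalize(depth, k=3, eps=1e-6):
    """Standardize each pixel's depth against its k x k neighborhood
    (sketch only; window size and edge handling are assumptions)."""
    h, w = depth.shape
    pad = k // 2
    padded = np.pad(depth, pad, mode='edge')
    # gather the k*k shifted views covering each pixel's neighborhood
    views = np.stack([padded[i:i + h, j:j + w]
                      for i in range(k) for j in range(k)])
    mean = views.mean(axis=0)
    std = views.std(axis=0)
    # zero-mean, unit-variance per pixel; eps avoids division by zero
    return (depth - mean) / (std + eps)
```

On a constant depth map the output is zero everywhere, so only local geometric variation survives the normalization.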
DeepSES: Learning solvent-excluded surfaces via neural signed distance fields
Computers & Graphics-Uk Pub Date : 2025-09-23 DOI: 10.1016/j.cag.2025.104392
Niklas Merk, Anna Sterzik, Kai Lawonn
{"title":"DeepSES: Learning solvent-excluded surfaces via neural signed distance fields","authors":"Niklas Merk,&nbsp;Anna Sterzik,&nbsp;Kai Lawonn","doi":"10.1016/j.cag.2025.104392","DOIUrl":"10.1016/j.cag.2025.104392","url":null,"abstract":"<div><div>The solvent-excluded surface (SES) is essential for revealing molecular shape and solvent accessibility in applications such as molecular modeling, drug discovery, and protein folding. Its signed distance field (SDF) delivers a continuous, differentiable surface representation that enables efficient rendering, analysis, and interaction in volumetric visualization frameworks. However, analytic methods that compute the SDF of the SES cannot run at interactive rates on large biomolecular complexes, and grid-based methods tend to result in significant approximation errors, depending on molecular size and grid resolution. We address these limitations with DeepSES, a neural inference pipeline that predicts the SES SDF directly from the computationally simpler van der Waals (vdW) SDF on a fixed high-resolution grid. By employing an adaptive volume-filtering scheme that directs processing only to visible regions near the molecular surface, DeepSES yields interactive frame rates irrespective of molecule size. By offering multiple network configurations, DeepSES enables practitioners to balance inference time against prediction accuracy. In benchmarks on molecules ranging from one thousand to nearly four million atoms, our fastest configuration achieves real-time frame rates with a sub-angstrom mean error, while our highest-accuracy variant sustains interactive performance and outperforms state-of-the-art methods in terms of surface quality. 
By replacing costly algorithmic solvers with selective neural prediction, DeepSES provides a scalable, high-resolution solution for interactive biomolecular visualization.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"133 ","pages":"Article 104392"},"PeriodicalIF":2.8,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145160124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
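The "computationally simpler" input field mentioned above, the van der Waals SDF, is just the signed distance to a union of atom spheres: sdf(p) = min_i (|p - c_i| - r_i). A small sketch of that input computation (the neural mapping from vdW SDF to SES SDF is the paper's contribution and is not reproduced):

```python
import numpy as np

def vdw_sdf(points, centers, radii):
    """Signed distance to a union of van der Waals spheres.

    points: (N, 3) query positions; centers: (M, 3) atom centers;
    radii: (M,) atomic radii. Returns (N,) signed distances
    (negative inside any sphere). Brute force for clarity; real
    pipelines would use a spatial grid or KD-tree.
    """
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
    return (d - radii[None, :]).min(axis=1)
```

Evaluating this on a fixed grid yields the volume that DeepSES-style models would take as network input.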
Narrowing-Cascade splines for control nets that shed mesh lines
Computers & Graphics-Uk Pub Date : 2025-09-22 DOI: 10.1016/j.cag.2025.104441
Serhat Cam , Erkan Gunpinar , Kȩstutis Karčiauskas , Jörg Peters
{"title":"Narrowing-Cascade splines for control nets that shed mesh lines","authors":"Serhat Cam ,&nbsp;Erkan Gunpinar ,&nbsp;Kȩstutis Karčiauskas ,&nbsp;Jörg Peters","doi":"10.1016/j.cag.2025.104441","DOIUrl":"10.1016/j.cag.2025.104441","url":null,"abstract":"<div><div>Quad-dominant meshes are popular with animation designers and can efficiently be generated from point clouds. To join primary regions, quad-dominant meshes include non-4-valent vertices and non-quad regions. To transition between regions of rich detail and simple shape, quad-dominant meshes commonly use a cascade of <span><math><mrow><mi>n</mi><mo>−</mo><mn>1</mn></mrow></math></span> triangles that reduce the number of parallel quad strips from <span><math><mrow><mi>n</mi><mo>+</mo><mn>1</mn></mrow></math></span> to 2. For these cascades, the Narrowing-Cascade spline, short NC<span><math><msup><mrow></mrow><mrow><mi>n</mi></mrow></msup></math></span>, provides a new shape-optimized <span><math><msup><mrow><mi>G</mi></mrow><mrow><mn>1</mn></mrow></msup></math></span> spline surface. NC<span><math><msup><mrow></mrow><mrow><mi>n</mi></mrow></msup></math></span> can treat cascade meshes as B-spline-like control nets. For <span><math><mrow><mi>n</mi><mo>&gt;</mo><mn>3</mn></mrow></math></span>, as opposed to <span><math><mrow><mi>n</mi><mo>=</mo><mn>2</mn><mo>,</mo><mn>3</mn></mrow></math></span>, cascades have interior points that both guide and complicate the construction of the output tensor-product NC<span><math><msup><mrow></mrow><mrow><mspace></mspace></mrow></msup></math></span>spline. 
The NC<span><math><msup><mrow></mrow><mrow><mi>n</mi></mrow></msup></math></span> spline follows the input mesh, including interior points, and delivers a high-quality curved surface of low degree.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"133 ","pages":"Article 104441"},"PeriodicalIF":2.8,"publicationDate":"2025-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145269530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Feature-driven compact representation model for analysis and visualization of large-scale multivariate SAMR data
Computers & Graphics-Uk Pub Date : 2025-09-20 DOI: 10.1016/j.cag.2025.104331
Yang Yang , Yu Pei , Yi Cao
{"title":"Feature-driven compact representation model for analysis and visualization of large-scale multivariate SAMR data","authors":"Yang Yang ,&nbsp;Yu Pei ,&nbsp;Yi Cao","doi":"10.1016/j.cag.2025.104331","DOIUrl":"10.1016/j.cag.2025.104331","url":null,"abstract":"<div><div>The storage overhead and I/O bottleneck of supercomputers creates a challenge in efficiently analyzing and visualizing large-scale multivariate SAMR data. It is thus necessary to greatly reduce the data size on the premise of maintaining data accuracy. In this paper, we propose a feature-driven compact representation model to handle structurally complex, high-dimensional, and nonlinear structured adaptive mesh refinement (SAMR) data for efficient storage, analysis, and visualization. We combine information-guided domain partition, distance-based dimensionality reduction, and error-bounded data representation to form a coherent three-component framework, achieving high compression ratios while ensuring low accuracy loss. Our approach addresses the key bottleneck in the visualization of large-scale multivariate SAMR data generated by massively parallel scientific simulations, namely the mutual restraint relationship between compression efficiency and data fidelity. We validate the effectiveness of our method using four datasets, the largest of which contains 4 billion grid points. 
Experimental results demonstrate that, compared with the state-of-the-art methods, our approach reduces data storage costs by approximately an order of magnitude while improving data reconstruction accuracy by nearly two orders of magnitude.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"133 ","pages":"Article 104331"},"PeriodicalIF":2.8,"publicationDate":"2025-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145110167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
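The error-bounded representation component can be illustrated by its simplest instance: uniform scalar quantization with bin width 2 * abs_err, which guarantees a round-trip absolute error of at most abs_err. This is a sketch of the general idea only; the paper's full model additionally partitions the domain and reduces dimensionality:

```python
import numpy as np

def quantize_error_bounded(data, abs_err):
    """Map floats to integer bin indices with bin width 2*abs_err.

    Rounding to the nearest bin center keeps the reconstruction
    error at or below abs_err for every value.
    """
    return np.round(data / (2.0 * abs_err)).astype(np.int64)

def dequantize(q, abs_err):
    """Reconstruct values from bin indices (bin centers)."""
    return q * (2.0 * abs_err)
```

The integer indices compress far better than raw floats (e.g. via entropy coding), which is where the storage savings come from.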
PromptNavi: Text-to-image generation through interactive prompt visual exploration
Computers & Graphics-Uk Pub Date : 2025-09-19 DOI: 10.1016/j.cag.2025.104417
Bofei Huang , Haoran Xie
{"title":"PromptNavi: Text-to-image generation through interactive prompt visual exploration","authors":"Bofei Huang ,&nbsp;Haoran Xie","doi":"10.1016/j.cag.2025.104417","DOIUrl":"10.1016/j.cag.2025.104417","url":null,"abstract":"<div><div>Modern text-to-image generative models can create high-quality and impressive images, but require extensive trial-and-error to interpret user intents. To solve this issue, we propose PromptNavi, a visual exploration interface for node-based prompt composition leveraging large language models to enhance the efficiency of text-to-image generation. In contrast to conventional prompting interfaces, PromptNavi allows users to manipulate and combine visual attributes of target images directly to refine outputs iteratively. Our user study confirmed that the results generated using PromptNavi achieved significant improvements in user usability, reduced cognitive load, and superior image quality rated by independent evaluators. It is verified that users achieved better results with less effort across all measured dimensions, including creativity, atmosphere, coherence, and overall impression. We believe PromptNavi may bridge the gap between user intent and generative AI outputs, advancing human-centered generative AI by making generative models accessible to novices with an enhanced user experience. 
Source codes are available at: <span><span>https://github.com/BofeiHuang/PromptNavi</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"132 ","pages":"Article 104417"},"PeriodicalIF":2.8,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145158248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
HR-IDF: Hessian-Regularized Implicit Displacement Fields for high precision industrial assembly representation
Computers & Graphics-Uk Pub Date : 2025-09-18 DOI: 10.1016/j.cag.2025.104442
Linxu Guo , Yutaka Ohtake , Tatsuya Yatagawa , Tetsuya Shimmyo , Shoichiro Hosomi , Kazutoshi Miyamoto
{"title":"HR-IDF: Hessian-Regularized Implicit Displacement Fields for high precision industrial assembly representation","authors":"Linxu Guo ,&nbsp;Yutaka Ohtake ,&nbsp;Tatsuya Yatagawa ,&nbsp;Tetsuya Shimmyo ,&nbsp;Shoichiro Hosomi ,&nbsp;Kazutoshi Miyamoto","doi":"10.1016/j.cag.2025.104442","DOIUrl":"10.1016/j.cag.2025.104442","url":null,"abstract":"<div><div>Representing high-precision industrial assemblies characterized by complex structural features remains challenging. In this paper, we propose Hessian-Regularized Implicit Displacement Fields (HR-IDF), a framework that integrates a two-scale neural implicit representation with Hessian-based regularization. In a coarse-to-fine manner, our method generates a smooth base surface from mesh-sampled points and then refines it with a high-frequency displacement field to capture fine geometric details. Moreover, we introduce a relaxed off-surface loss that helps preserve a more consistent gradient in the generated SDF field, while suppressing ghost geometry and improving representation stability and fidelity. Extensive experiments on complex industrial assemblies and 3D models demonstrate that HR-IDF achieves a reliable solution for high-precision industrial applications.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"133 ","pages":"Article 104442"},"PeriodicalIF":2.8,"publicationDate":"2025-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145160122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
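The two-scale idea above, a smooth base field refined by a small high-frequency displacement, can be written schematically as f(p) = base(p) + s * tanh(disp(p)). This is a schematic sketch only: HR-IDF learns both fields with neural networks and adds Hessian regularization, none of which is modeled here, and `scale` is an assumed bound:

```python
import numpy as np

def sphere_sdf(p, radius=1.0):
    """Smooth base surface: analytic sphere SDF standing in for the
    learned base network."""
    return np.linalg.norm(p, axis=-1) - radius

def two_scale_sdf(p, base, disp, scale=0.02):
    """Coarse base SDF plus a bounded high-frequency displacement.

    tanh saturation caps the refinement at +/- scale, so detail can
    never override the base geometry (an assumed design choice).
    """
    return base(p) + scale * np.tanh(disp(p))
```

Bounding the displacement is what keeps the refinement a perturbation of the base surface rather than a competing geometry.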
Pancreatic duct centerline extraction for image unfolding in photon-counting CT
Computers & Graphics-Uk Pub Date : 2025-09-18 DOI: 10.1016/j.cag.2025.104426
Jie Yi Tan , Leonhard Rist , Abraham Ayala Hernandez , Michael Sühling , Erik Gudman Steuble Brandt , Andreas Maier , Oliver Taubmann
{"title":"Pancreatic duct centerline extraction for image unfolding in photon-counting CT","authors":"Jie Yi Tan ,&nbsp;Leonhard Rist ,&nbsp;Abraham Ayala Hernandez ,&nbsp;Michael Sühling ,&nbsp;Erik Gudman Steuble Brandt ,&nbsp;Andreas Maier ,&nbsp;Oliver Taubmann","doi":"10.1016/j.cag.2025.104426","DOIUrl":"10.1016/j.cag.2025.104426","url":null,"abstract":"<div><div>Pancreatic diseases are often only diagnosed at a late stage, and pancreatic cancer is the most feared due to a very high mortality. Abnormalities of the main pancreatic duct, such as blockages and dilatation, are often (early) signs of such pancreatic diseases, but are difficult to detect in standard Computed Tomography image series. Photon-Counting Computed Tomography with its higher resolution improves the detectability of this duct, allowing diagnostic assessment. A comprehensive visualization in a single view requires a centerline-based unfolding of the duct and pancreas. However, manual centerline annotation is tedious. To automate this process, we introduce a fully automated pipeline for pancreatic duct unfolding by robustly extracting the centerline using Dijkstra’s algorithm on a cost map derived from a segmentation probability map. The core contribution of this work lies in the processing of the data-driven cost map leading to a consistent centerline for generating CPR visualizations of the pancreas. To improve individual steps within the pipeline, we investigate further enhancements such as segmentation filtering and the topology-preserving skeleton recall loss. In the evaluation, we assess performance of our method on both ultra-high-resolution and regular PCCT images. 
We find that the centerline can be consistently extracted from both scan types, where the centerlines from the ultra-high resolution images exhibit a slightly lower median error of 0.58 mm compared to the 0.73 mm using the regular resolution.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"133 ","pages":"Article 104426"},"PeriodicalIF":2.8,"publicationDate":"2025-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145160102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
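The core step, Dijkstra's algorithm on a cost map derived from a segmentation probability map, can be sketched in 2D: with cost = 1 - probability, the minimum-cost path naturally follows the ridge of high duct probability. A minimal grid version, assuming 4-connectivity and given start/goal points (the paper works on 3D CT volumes and processes the cost map further):

```python
import heapq
import numpy as np

def dijkstra_path(cost, start, goal):
    """Minimum-cost 4-connected path on a 2D grid via Dijkstra.

    cost: (H, W) non-negative per-cell costs; start/goal: (row, col).
    Returns the path as a list of (row, col) from start to goal.
    """
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    while pq:
        d, (y, x) = heapq.heappop(pq)
        if (y, x) == goal:
            break
        if d > dist[y, x]:
            continue  # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + cost[ny, nx]
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    prev[(ny, nx)] = (y, x)
                    heapq.heappush(pq, (nd, (ny, nx)))
    # walk predecessors back from goal to start
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

With a probability map `prob`, calling `dijkstra_path(1.0 - prob, start, goal)` steers the path through high-probability cells, which is the essence of probability-guided centerline extraction.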
SynopFrame: Multiscale time-dependent visual abstraction framework for analyzing DNA nanotechnology simulations
Computers & Graphics-Uk Pub Date : 2025-09-17 DOI: 10.1016/j.cag.2025.104376
Deng Luo , Alexandre Kouyoumdjian , Ondřej Strnad , Haichao Miao , Ivan Barišić , Tobias Isenberg , Ivan Viola
{"title":"SynopFrame: Multiscale time-dependent visual abstraction framework for analyzing DNA nanotechnology simulations","authors":"Deng Luo ,&nbsp;Alexandre Kouyoumdjian ,&nbsp;Ondřej Strnad ,&nbsp;Haichao Miao ,&nbsp;Ivan Barišić ,&nbsp;Tobias Isenberg ,&nbsp;Ivan Viola","doi":"10.1016/j.cag.2025.104376","DOIUrl":"10.1016/j.cag.2025.104376","url":null,"abstract":"<div><div>We present an open-source framework, SynopFrame, that allows DNA nanotechnology (DNA-nano) experts to analyze and understand molecular dynamics simulation trajectories of their designs. We use a multiscale multi-dimensional abstraction space, connect the representations to a projected conformational space plot of the structure’s temporal sequence, and thus enable experts to analyze the dynamics of their structural designs and, specifically, failure cases of the assembly. In addition, our time-dependent abstraction representation allows the biologists, for the first time in a smooth and structurally clear way, to identify and observe temporal transitions of a DNA-nano design from one configuration to another, and to highlight important periods of the simulation for further analysis. We realize SynopFrame as a dashboard of the different synchronized 3D spatial and 2D schematic visual representations, with a color overlay to show essential properties such as the status of hydrogen bonds. The linking of the spatial, schematic, and abstract views ensures that users can effectively analyze the high-frequency motion. We also categorize the status of the hydrogen bonds into a new format to allow us to color-encode it and overlay it on the representations. 
To demonstrate the utility of SynopFrame, we describe example usage scenarios and report user feedback.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"132 ","pages":"Article 104376"},"PeriodicalIF":2.8,"publicationDate":"2025-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145265913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
DualPhys-GS: Dual physically-guided 3D Gaussian splatting for underwater scene reconstruction
Computers & Graphics-Uk Pub Date : 2025-09-16 DOI: 10.1016/j.cag.2025.104405
Jiachen Li, Guangzhi Han, Jin Wan, Yuan Gao, Delong Han
{"title":"DualPhys-GS: Dual physically-guided 3D Gaussian splatting for underwater scene reconstruction","authors":"Jiachen Li,&nbsp;Guangzhi Han,&nbsp;Jin Wan,&nbsp;Yuan Gao,&nbsp;Delong Han","doi":"10.1016/j.cag.2025.104405","DOIUrl":"10.1016/j.cag.2025.104405","url":null,"abstract":"<div><div>In 3D reconstruction of underwater scenes, traditional methods based on atmospheric optical models cannot effectively deal with the selective attenuation of light wavelengths and the effect of suspended particle scattering, which are unique to the water medium, and lead to color distortion, geometric artifacts, and collapsing phenomena at long distances. We propose the DualPhys-GS framework to achieve high-quality underwater reconstruction through a dual-path optimization mechanism. Our approach further develops a dual feature-guided attenuation-scattering modeling mechanism, the RGB-guided attenuation optimization model combines RGB features and depth information and can handle edge and structural details. In contrast, the multi-scale depth-aware scattering model captures scattering effects at different scales using a feature pyramid network and an attention mechanism. Meanwhile, we design several special loss functions. The attenuation scattering consistency loss ensures physical consistency. The water body type adaptive loss dynamically adjusts the weighting coefficients. The edge-aware scattering loss is used to maintain the sharpness of structural edges. The multi-scale feature loss helps to capture global and local structural information. In addition, we design a scene adaptive mechanism that can automatically identify the water-body-type characteristics (e.g., clear coral reef waters or turbid coastal waters) and dynamically adjust the scattering and attenuation parameters and optimization strategies. 
Experimental results show that our method outperforms existing methods in several metrics, especially in suspended matter-dense regions and long-distance scenes, and the reconstruction quality is significantly improved.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"133 ","pages":"Article 104405"},"PeriodicalIF":2.8,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145160104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
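The physics being inverted here is the standard attenuation-plus-backscatter underwater image-formation model: the direct signal decays per channel with distance while veiling light builds up toward the water color. A forward-model sketch with illustrative parameter names (DualPhys-GS learns its attenuation and scattering parameters; none of its networks are reproduced):

```python
import numpy as np

def underwater_image(J, depth, beta_d, beta_b, B_inf):
    """Per-channel underwater image formation (sketch).

    J: (H, W, 3) scene radiance; depth: (H, W) distance in meters;
    beta_d: (3,) direct attenuation; beta_b: (3,) backscatter
    coefficient; B_inf: (3,) veiling-light color at infinity.
    All parameter names are illustrative assumptions.
    """
    d = depth[..., None]
    # direct signal decays exponentially, fastest in the red channel
    direct = J * np.exp(-beta_d[None, None, :] * d)
    # backscatter saturates toward the water color with distance
    backscatter = B_inf[None, None, :] * (1.0 - np.exp(-beta_b[None, None, :] * d))
    return direct + backscatter
```

At zero depth the model returns the scene radiance unchanged; at large depth it converges to the veiling-light color, which is exactly the long-distance color cast a reconstruction method must undo.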