Computers & Graphics (UK): Latest Articles

Denoising-While-Completing Network (DWCNet): Robust point cloud completion under corruption
IF 2.8 | CAS Zone 4 | Computer Science
Computers & Graphics (UK) · Pub Date: 2025-09-02 · DOI: 10.1016/j.cag.2025.104401
Keneni W. Tesema, Lyndon Hill, Mark W. Jones, Gary K.L. Tam
Abstract: Point cloud completion is crucial for 3D computer vision tasks in autonomous driving, augmented reality, and robotics. However, obtaining clean and complete point clouds from real-world environments is challenging due to noise and occlusions. Consequently, most existing completion networks, trained on synthetic data, struggle with real-world degradations. In this work, we tackle the problem of completing and denoising highly corrupted partial point clouds affected by multiple simultaneous degradations. To benchmark robustness, we introduce the Corrupted Point Cloud Completion Dataset (CPCCD), which highlights the limitations of current methods under diverse corruptions. Building on these insights, we propose DWCNet (Denoising-While-Completing Network), a completion framework enhanced with a Noise Management Module (NMM) that leverages contrastive learning and self-attention to suppress noise and model structural relationships. DWCNet achieves state-of-the-art performance on both clean and corrupted, synthetic and real-world datasets. The dataset and code will be publicly available at https://github.com/keneniwt/DWCNET-Robust-Point-Cloud-Completion-against-Corruptions.
Computers & Graphics, Vol. 132, Article 104401.
Citations: 0
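For context: completion quality on benchmarks of this kind is conventionally scored with the Chamfer distance between the completed cloud and the ground truth. The abstract does not name the metric, so the following is an illustrative sketch of the standard formulation rather than the authors' evaluation code:

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3)."""
    # Pairwise squared distances via broadcasting: shape (N, M)
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    # Mean nearest-neighbor distance in each direction, summed
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

# A completion identical to the ground truth scores exactly 0
pts = np.random.rand(128, 3)
assert chamfer_distance(pts, pts) == 0.0
```

Lower is better; the two directional terms penalize both missing geometry and spurious (noisy) points, which is why denoising and completion interact in this setting.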
Editorial Note for Issue 131 of Computers & Graphics
IF 2.8 | CAS Zone 4 | Computer Science
Computers & Graphics (UK) · Pub Date: 2025-09-02 · DOI: 10.1016/j.cag.2025.104420
Computers & Graphics, Vol. 131, Article 104420.
Citations: 0
AortaAnalyzer: Interactive, integrated CTA aorta segmentation and quantitative analysis platform
IF 2.8 | CAS Zone 4 | Computer Science
Computers & Graphics (UK) · Pub Date: 2025-09-02 · DOI: 10.1016/j.cag.2025.104415
Fabienne von Deylen, Pepe Eulzer, Kai Lawonn
Abstract: The diagnosis of aortic diseases could be significantly enhanced with modern advances in model-based vessel visualization, objective parameter quantification, and information gained through numerical blood-flow simulation. Most state-of-the-art methods, however, require heavy processing and are often split across various frameworks that require setting up complex workflows, making many clinical applications unrealistic and hindering research on large datasets. We present the AortaAnalyzer, a unified, end-to-end pipeline for processing computed-tomography angiography (CTA) of the aorta, integrating a state-of-the-art 3D segmentation network (Dice 0.95 ± 0.01, HD95 5.25 ± 5.73 mm), interactive correction tools, automated surface extraction, robust centerline computation, inlet/outlet capping for numerical hemodynamics, and clinical metric quantification. All modules share a single GUI, use standard formats (NRRD, STL, OBJ, CSV), and propagate changes automatically, eliminating complex multi-tool workflows. We developed the framework in an iterative process based on evaluations with seven independent experts: two numerical hemodynamics researchers, two vessel visualization researchers, two cardiac surgeons, and one radiologist. The framework received high usefulness ratings, and feature requests drove the addition of surface capping and extended metric measurements. To assess efficiency, we compared processing time against 3D Slicer and SimVascular. The AortaAnalyzer demonstrated increased robustness and required substantially less manual interaction and overall processing time. AortaAnalyzer supports both clinical assessment and research purposes by providing rapid visualization of vessel morphology; reproducible diameter, volume, and landmark analysis; and accelerated pre-processing for blood-flow simulation. It is open access and serves as an extendable platform.
Computers & Graphics, Vol. 132, Article 104415.
Citations: 0
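The Dice and HD95 figures quoted for the segmentation network are standard medical-image metrics. As an illustration of what they measure (not the paper's implementation), a minimal NumPy sketch:

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice overlap between two boolean segmentation masks (1.0 = perfect)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hd95(a_pts: np.ndarray, b_pts: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between surface point sets.

    Taking the 95th percentile instead of the maximum makes the metric
    robust to a few outlier surface points.
    """
    d2 = np.sum((a_pts[:, None, :] - b_pts[None, :, :]) ** 2, axis=-1)
    d_ab = np.sqrt(d2.min(axis=1))  # each point of a to its nearest point of b
    d_ba = np.sqrt(d2.min(axis=0))  # each point of b to its nearest point of a
    return float(max(np.percentile(d_ab, 95), np.percentile(d_ba, 95)))
```

Dice rewards volumetric overlap, while HD95 (reported in mm above) catches boundary errors that overlap scores can hide; reporting both is common practice for vessel segmentation.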
UNet-assisted parameterization for B-spline surface approximation
IF 2.8 | CAS Zone 4 | Computer Science
Computers & Graphics (UK) · Pub Date: 2025-09-01 · DOI: 10.1016/j.cag.2025.104416
Wenqiang Tang, Zhouwang Yang
Abstract: B-spline surface parameterization is a challenging task due to its complex, nonlinear, and non-convex nature. Traditional optimization-based methods are often sensitive to initialization, susceptible to local minima, and computationally expensive, especially in large-scale scenarios. To address these limitations, we propose SPUNet (Surface Parameterization UNet), a deep learning-based framework that reformulates B-spline surface parameterization as a high-dimensional neural-network optimization problem. SPUNet employs a U-shaped architecture to learn a mapping from initial parameterizations to optimized ones, significantly improving robustness to initialization and alleviating the issue of local minima. Our method is applicable to both structured and unstructured point clouds and integrates smoothness regularization and an adaptive top-K loss to enhance reconstruction accuracy. Extensive experiments demonstrate the effectiveness, robustness, and scalability of the proposed approach in B-spline surface approximation.
Computers & Graphics, Vol. 132, Article 104416.
Citations: 0
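For context on what the parameterization feeds into: a tensor-product B-spline surface maps parameters (u, v) to a 3D point via basis functions from the Cox-de Boor recursion applied to a control net. A minimal evaluation sketch (standard textbook construction, not the authors' code):

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    """i-th degree-p B-spline basis function at u (Cox-de Boor recursion)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = ((u - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, u, knots))
    if knots[i + p + 1] > knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, u, knots))
    return left + right

def surface_point(ctrl, knots_u, knots_v, p, u, v):
    """Evaluate S(u, v) of a tensor-product B-spline from an (n, m, 3) control net."""
    n, m, _ = ctrl.shape
    Nu = np.array([bspline_basis(i, p, u, knots_u) for i in range(n)])
    Nv = np.array([bspline_basis(j, p, v, knots_v) for j in range(m)])
    # S(u, v) = sum_i sum_j Nu[i] * Nv[j] * P[i, j]
    return np.einsum("i,j,ijk->k", Nu, Nv, ctrl)
```

Parameterization is the inverse problem: assigning a (u, v) to each data point so that a least-squares fit of the control net approximates the cloud well, and it is this assignment that SPUNet learns to predict.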
Democratizing interactivity: An overview of interfaces for multimedia machine learning
IF 2.8 | CAS Zone 4 | Computer Science
Computers & Graphics (UK) · Pub Date: 2025-09-01 · DOI: 10.1016/j.cag.2025.104412
Alberto Arkader Kopiler, Guilherme Schardong, Luiz Schirmer, Daniel Perazzo, Tiago Novello, Luiz Velho
Abstract: This paper provides an overview of interactive human-computer interfaces designed for multimedia processing pipelines that integrate machine learning, image processing, and computer graphics. It serves as a practical guide to existing techniques and tools for developing interactive applications in this domain. We outline key prerequisites, present relevant tools, and describe experiments that highlight the integration of these technologies. The study addresses usability challenges in interactive multimedia analysis and synthesis, taking advantage of recent advances in generative AI and multimodal data processing. Using real-time 2D and 3D interaction, we explore the design of dynamic interfaces that enable users to manipulate and visualize data within machine learning workflows, such as facial landmark detection and image morphing. Through case studies, we show accessible web-based frameworks that support the development of interactive, mobile-friendly applications that facilitate broader user engagement across platforms.
Computers & Graphics, Vol. 132, Article 104412.
Citations: 0
Scanmove: Motion prediction and transfer for unregistered body meshes
IF 2.8 | CAS Zone 4 | Computer Science
Computers & Graphics (UK) · Pub Date: 2025-09-01 · DOI: 10.1016/j.cag.2025.104409
Thomas Besnier, Sylvain Arguillère, Mohamed Daoudi
Abstract: Unregistered surface meshes, especially raw 3D scans, present significant challenges for automatic computation of plausible deformations due to the lack of established point-wise correspondences and the presence of noise in the data. In this paper, we propose a new, rig-free, data-driven framework for motion prediction and transfer on such body meshes. Our method couples a robust motion embedding network with a learned per-vertex feature field to generate a spatio-temporal deformation field, which drives the mesh deformation. Extensive evaluations, including quantitative benchmarks and qualitative visuals on tasks such as walking and running, demonstrate the effectiveness and versatility of our approach on challenging unregistered meshes.
Computers & Graphics, Vol. 132, Article 104409.
Citations: 0
STRive: An association rule-based system for the exploration of spatiotemporal categorical data
IF 2.8 | CAS Zone 4 | Computer Science
Computers & Graphics (UK) · Pub Date: 2025-09-01 · DOI: 10.1016/j.cag.2025.104410
Mauro Diaz, Luis Sante, Joel Perca, João Victor da Silva, Nivan Ferreira, Jorge Poco
Abstract: Effectively analyzing spatiotemporal data plays a central role in understanding real-world phenomena and informing decision-making. Capturing the interaction between spatial and temporal dimensions also helps explain the underlying structure of the data. However, most datasets do not reveal attribute relationships, requiring additional algorithms to extract meaningful patterns. Existing visualization tools often focus either on attribute relationships or spatiotemporal analysis, but rarely support both simultaneously. In this paper, we present STRive (SpatioTemporal Rule Interactive Visual Explorer), a visual analytics system that enables users to uncover and explore spatial and temporal patterns in data. At the core of STRive lies Association Rule Mining (ARM), which we apply to spatiotemporal datasets to generate interpretable and actionable insights. We combine ARM with multiple interactive mechanisms to analyze the extracted relationships. Association rules serve as interpretable guidance mechanisms for visual analytics by highlighting the meaningful aspects of the data that users should investigate. Our methodology includes three key steps: rule generation, rule clustering, and interactive visualization. STRive offers two modes of analysis. The first operates at the rule cluster level and includes four coordinated views, each showing a different facet of a cluster, including its temporal and spatial behavior. The second mode mirrors the first but focuses on individual rules within a selected cluster. We evaluate the effectiveness of STRive through two case studies involving real-world datasets: fatal vehicle accidents and urban crime. Results demonstrate the system's ability to support the discovery and analysis of interpretable patterns in complex spatiotemporal contexts.
Computers & Graphics, Vol. 132, Article 104410.
Citations: 0
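Association Rule Mining, the core of STRive, scores co-occurring categorical attributes with support, confidence, and lift. A toy sketch of single-antecedent rule extraction over transactions (illustrative only; STRive's pipeline additionally clusters the mined rules and links them to space and time):

```python
from collections import Counter
from itertools import combinations

def mine_rules(transactions, min_support=0.3, min_confidence=0.6):
    """Extract rules A -> B with their support, confidence, and lift."""
    n = len(transactions)
    item_counts = Counter()
    pair_counts = Counter()
    for t in transactions:
        items = sorted(set(t))          # deduplicate, canonical order
        item_counts.update(items)
        pair_counts.update(combinations(items, 2))
    rules = []
    for (a, b), c in pair_counts.items():
        support = c / n                 # P(A and B)
        if support < min_support:
            continue
        for ante, cons in ((a, b), (b, a)):
            confidence = c / item_counts[ante]            # P(B | A)
            lift = confidence / (item_counts[cons] / n)   # vs. independence
            if confidence >= min_confidence:
                rules.append((ante, cons, support, confidence, lift))
    return rules
```

On spatiotemporal categorical data, each "transaction" might bundle attributes of one event (e.g., time bucket, district, incident type), so a rule like ("night" -> "theft", lift > 1) is the kind of interpretable pattern the system surfaces for visual exploration.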
Deep learning for brain electron microscopy segmentation: Advances, challenges, and future directions in connectomics and ultrastructure analysis
IF 2.8 | CAS Zone 4 | Computer Science
Computers & Graphics (UK) · Pub Date: 2025-09-01 · DOI: 10.1016/j.cag.2025.104391
Uzair Shah, Mahmood Alzubaidi, Marco Agus, Corrado Calí, Pierre J. Magistretti, Mowafa Househ
Abstract: This systematic review and meta-analysis comprehensively analyzes deep learning approaches for brain electron microscopy (EM) segmentation, addressing the critical challenge of extracting neuroanatomical information at nanometer resolution. Following PRISMA guidelines, we identified 60 studies through structured database searches, with quantitative meta-analysis of 27 studies (46 experiments) across 10 datasets providing the first unified benchmark comparison in this domain. Our analysis reveals a field transitioning from traditional CNN approaches toward foundation models and hybrid architectures. The meta-analysis demonstrates that foundation models outperform traditional CNNs by 13%-35% across key metrics, with the 3D Transformer + U-Net achieving the highest composite score (0.954) across five datasets. Meta-analysis confirms significant advantages for foundation models in instance-based metrics (Cohen's d = -6.44), while only 26% of experiments validate across multiple datasets. Four key evolutionary trends emerge: (1) transition from 2D to 3D architectures optimized for ultrastructural complexity; (2) development of topology-preserving loss functions and evaluation metrics (clDice, ERL) that prioritize neural connectivity over pixel-wise accuracy; (3) emergence of self-supervised and foundation-model adaptation techniques reducing annotation dependency; and (4) evolution toward specialized architectures capturing long-range dependencies critical for neural structures. Performance analysis reveals that mitochondria segmentation achieves the highest accuracy (Jaccard scores 87.2%-90.5%), while computational requirements vary from single-GPU implementations to distributed systems with 48 GPUs for teravoxel-scale volumes. Despite progress, reproducibility challenges persist, with only 54% of studies providing public code repositories. These advances drive innovation in 3D computer vision, establish new benchmarks for volumetric instance segmentation, and address fundamental challenges in processing massive biological datasets. Our unified benchmarks and comprehensive analysis provide a foundation for systematic progress tracking and evidence-based method selection, positioning brain EM segmentation to enable large-scale connectomics studies and detailed neuroanatomical mapping across scales.
Computers & Graphics, Vol. 132, Article 104391.
Citations: 0
TransportMap: Visual transport analysis for spatiotemporal data without trajectory information
IF 2.8 | CAS Zone 4 | Computer Science
Computers & Graphics (UK) · Pub Date: 2025-08-31 · DOI: 10.1016/j.cag.2025.104387
Jiazhi Xia, Xin Zhao, Kang Xie, Yangbo Hou, Xiaolong (Luke) Zhang, Xiaoyan Kui, Ying Zhao, Chenhui Li, Hongxing Qin
Abstract: It is essential to understand movements in exploring spatiotemporal data. However, many datasets have no explicit trajectory or origin-destination information, making movement analysis an ill-posed problem. Existing methods struggle to effectively simulate the complete movement process, producing results that are infeasible in real-world scenarios and neglecting potential environmental factors. To address these challenges, we propose TransportMap, a novel approach that extracts movements from spatiotemporal data without trajectory information. TransportMap employs a two-step optimal transport algorithm, which is integrated into a visual analysis system that enables interactive adjustment of environmental factors, improving adaptability to complex settings. The resulting movement interpolations are visualized using density maps and vector fields. Quantitative experiments demonstrate that TransportMap outperforms existing methods. Additionally, three real-world case studies validate the effectiveness of our approach in exploring spatiotemporal data with or without user steering.
Computers & Graphics, Vol. 132, Article 104387.
Citations: 0
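The abstract does not detail its "two-step optimal transport algorithm," but entropy-regularized optimal transport solved by Sinkhorn iterations is the common building block for matching two density snapshots (e.g., population counts at consecutive time steps) without trajectories. A hedged sketch of that generic primitive, not the paper's method:

```python
import numpy as np

def sinkhorn(mu, nu, cost, eps=0.1, iters=1000):
    """Entropy-regularized optimal transport plan between histograms mu and nu.

    Returns P with row sums ~mu and column sums ~nu; P[i, j] is the mass
    moved from source bin i to target bin j, favoring low-cost moves.
    """
    K = np.exp(-cost / eps)        # Gibbs kernel from the cost matrix
    u = np.ones_like(mu)
    for _ in range(iters):         # alternate scaling to match both marginals
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]
```

Interpolating such plans over time yields plausible flow fields between snapshots, which is the kind of output a system like this can render as density maps and vector fields.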
Geometry-aware triplane diffusion for single shape generation with feature alignment
IF 2.8 | CAS Zone 4 | Computer Science
Computers & Graphics (UK) · Pub Date: 2025-08-30 · DOI: 10.1016/j.cag.2025.104384
HongLiang Weng, Qinghai Zheng, Yuanlong Yu, Yixin Zhuang
Abstract: We tackle the problem of single-shape 3D generation, aiming to synthesize diverse and plausible shapes conditioned on a single input exemplar. This task is challenging due to the absence of dataset-level variation, requiring models to internalize structural patterns and generate novel shapes from limited local geometric cues. To address this, we propose a unified framework combining geometry-aware representation learning with a multiscale diffusion process. Our approach centers on a triplane autoencoder enhanced with a spatial pattern predictor and attention-based feature fusion, enabling fine-grained perception of local structures. To preserve structural coherence during generation, we introduce a soft feature distribution alignment loss that aligns features between input and generated shapes, balancing fidelity and diversity. Finally, we adopt a hierarchical diffusion strategy that progressively refines triplane features from coarse to fine, stabilizing training and improving quality. Extensive experiments demonstrate that our method produces high-fidelity, structurally consistent, and diverse shapes, establishing a strong baseline for single-shape generation.
Computers & Graphics, Vol. 132, Article 104384.
Citations: 0
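A triplane represents a 3D feature field with three axis-aligned 2D feature planes: a query point is projected onto each plane, features are bilinearly sampled, and the three results are aggregated (summation here; some variants concatenate instead). An illustrative sketch of the query step, not the paper's implementation:

```python
import numpy as np

def bilerp(plane, u, v):
    """Bilinearly sample a (R, R, C) feature plane at (u, v) in [-1, 1]^2."""
    R = plane.shape[0]
    x = (u + 1.0) * 0.5 * (R - 1)       # map [-1, 1] to pixel coordinates
    y = (v + 1.0) * 0.5 * (R - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, R - 1), min(y0 + 1, R - 1)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * plane[x0, y0] + fx * (1 - fy) * plane[x1, y0]
            + (1 - fx) * fy * plane[x0, y1] + fx * fy * plane[x1, y1])

def query_triplane(p_xy, p_xz, p_yz, pt):
    """Feature of a 3D point: project onto each plane, sample, and sum."""
    x, y, z = pt
    return bilerp(p_xy, x, y) + bilerp(p_xz, x, z) + bilerp(p_yz, y, z)
```

A small decoder MLP then maps the aggregated feature to occupancy or SDF values; diffusing in triplane space, as the paper does, keeps generation 2D-convolution-friendly while still describing a 3D shape.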