ACM Transactions on Graphics: Latest Articles

HoloChrome: Polychromatic Illumination for Speckle Reduction in Holographic Near-Eye Displays
IF 6.2 · CAS Q1 · Computer Science
ACM Transactions on Graphics · Pub Date: 2025-04-28 · DOI: 10.1145/3732935
Florian Andreas Schiffers, Grace Kuo, Nathan Matsuda, Douglas Lanman, Oliver Cossairt
Abstract: Holographic displays hold the promise of providing authentic depth cues, resulting in enhanced immersive visual experiences for near-eye applications. However, current holographic displays are hindered by speckle noise, which limits accurate reproduction of color and texture in displayed images. We present HoloChrome, a polychromatic holographic display framework designed to mitigate these limitations. HoloChrome utilizes an ultrafast, wavelength-adjustable laser and a dual spatial light modulator (SLM) architecture, enabling the multiplexing of a large set of discrete wavelengths across the visible spectrum. By leveraging spatial separation in our dual-SLM setup, we independently manipulate speckle patterns across multiple wavelengths. This approach reduces speckle noise through incoherent averaging achieved by wavelength multiplexing, specifically by using a single SLM pattern to modulate multiple wavelengths simultaneously on one or more SLM devices. Our method is complementary to existing speckle reduction techniques, offering a new pathway to address this challenge. Furthermore, polychromatic illumination broadens the achievable color gamut compared to traditional three-primary holographic displays. Our simulations and tabletop experiments validate that HoloChrome significantly reduces speckle noise and expands the color gamut. These advancements enhance the performance of holographic near-eye displays, moving us closer to practical, immersive next-generation visual experiences.
Citations: 0
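The speckle-contrast benefit of incoherent averaging described in the abstract can be illustrated with a toy simulation (a sketch for intuition only, not the authors' code): averaging N statistically independent, fully developed speckle patterns in intensity reduces the speckle contrast from 1 to roughly 1/sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle_intensity(shape, rng):
    # Fully developed speckle: intensity of a circular complex Gaussian field.
    field = rng.normal(size=shape) + 1j * rng.normal(size=shape)
    return np.abs(field) ** 2

def speckle_contrast(intensity):
    # Speckle contrast C = sigma_I / <I>; C = 1 for a single coherent pattern.
    return intensity.std() / intensity.mean()

shape = (512, 512)
single = speckle_intensity(shape, rng)

# Incoherent averaging over N independent wavelengths: the patterns add in
# intensity, and the contrast falls like 1/sqrt(N) (0.25 for N = 16).
n_wavelengths = 16
averaged = np.mean(
    [speckle_intensity(shape, rng) for _ in range(n_wavelengths)], axis=0
)

c_single = speckle_contrast(single)      # close to 1
c_averaged = speckle_contrast(averaged)  # close to 1/4
```

The dual-SLM architecture exists precisely to make the per-wavelength speckle patterns statistically independent; once they are, the plain intensity averaging above is what suppresses the speckle.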
StructRe: Rewriting for Structured Shape Modeling
IF 6.2 · CAS Q1 · Computer Science
ACM Transactions on Graphics · Pub Date: 2025-04-28 · DOI: 10.1145/3732934
Jiepeng Wang, Hao Pan, Yang Liu, Xin Tong, Taku Komura, Wenping Wang
Abstract: Man-made 3D shapes are naturally organized in parts and hierarchies; such structures provide important constraints for shape reconstruction and generation. Modeling shape structures is difficult because there can be multiple hierarchies for a given shape, causing ambiguity, and because across different categories shape structures are correlated with semantics, limiting generalization. We present StructRe, a structure rewriting system, as a novel approach to structured shape modeling. Given a 3D object represented by points and components, StructRe can rewrite it upward into more concise structures, or downward into more detailed structures; by iterating the rewriting process, hierarchies are obtained. Such a localized rewriting process enables probabilistic modeling of ambiguous structures and robust generalization across object categories. We train StructRe on PartNet data, show its generalization to cross-category and multiple object hierarchies, and test its extension to ShapeNet. We also demonstrate the benefits of probabilistic and generalizable structure modeling for shape reconstruction, generation, and editing tasks.
Citations: 0
Policy-Space Diffusion for Physics-Based Character Animation
IF 6.2 · CAS Q1 · Computer Science
ACM Transactions on Graphics · Pub Date: 2025-04-25 · DOI: 10.1145/3732285
Michele Rocca, Sune Darkner, Kenny Erleben, Sheldon Andrews
Abstract: Adapting motion to new contexts in digital entertainment often demands fast, agile prototyping. State-of-the-art techniques use reinforcement learning policies to simulate the underlying motion in a physics engine. Unfortunately, policies typically fail on unseen tasks, and it is too time-consuming to fine-tune a policy for every new morphological, environmental, or motion change. We propose a novel point of view on using policy networks as a representation of motion for physics-based character animation. Our policies are compact, tailored to individual motion tasks, and preserve similarity with nearby tasks. This allows us to view the space of all motions as a manifold of policies where sampling substitutes for training. We obtain a memory-efficient encoding of motion that leverages the characteristics of control policies, such as being generative and robust to small environmental changes. With this perspective, we can sample novel motions by directly manipulating weights and biases through a diffusion model. Our newly generated policies can adapt to previously unseen characters, potentially saving time in rapid prototyping scenarios. Our contributions include a Common Neighbor Policy regularization that constrains policy similarity during motion imitation training, making the policies suitable for generative modeling; a diffusion model adaptation for diverse morphologies; and an open policy dataset. The results show that we can learn non-linear transformations in the policy space from labeled examples and conditionally generate new ones. In a matter of seconds, we sample a batch of policies for different conditions that show motion fidelity metrics comparable to their individually trained counterparts.
Citations: 0
A Neural Particle Level Set Method for Dynamic Interface Tracking
IF 6.2 · CAS Q1 · Computer Science
ACM Transactions on Graphics · Pub Date: 2025-04-21 · DOI: 10.1145/3730399
Duowen Chen, Junwei Zhou, Bo Zhu
Abstract: We propose a neural particle level set (Neural PLS) method to accommodate tracking and evolving dynamic neural representations. At the heart of our approach is a set of oriented particles serving the dual roles of interface trackers and sampling seeders. These dynamic particles are used to evolve the interface and construct neural representations on a multi-resolution grid-hash structure, hybridizing coarse sparse distance fields with multi-scale feature encoding. Thanks to its parallel implementation and neural-network-friendly architecture, our neural particle level set method combines the computational merits of both traditional particle level sets and modern implicit neural representations, in terms of feature representation and dynamic tracking. We demonstrate the efficacy of our approach by showcasing its performance surpassing traditional level-set methods in both benchmark tests and physical simulations.
Citations: 0
MoFlow: Motion-Guided Flows for Recurrent Rendered Frame Prediction
IF 6.2 · CAS Q1 · Computer Science
ACM Transactions on Graphics · Pub Date: 2025-04-18 · DOI: 10.1145/3730400
Zhizhen Wu, Zhilong Yuan, Chenyu Zuo, Yazhen Yuan, Yifan Peng, Guiyang Pu, Rui Wang, Yuchi Huo
Abstract: Rendering realistic images in real time on high-frame-rate display devices poses considerable challenges, even with advanced graphics cards. This stimulates demand for frame prediction technologies that boost frame rates. The key to these algorithms is exploiting spatiotemporal coherence by warping rendered pixels with motion representations. However, existing motion estimation methods can suffer from low precision, high overhead, and incomplete support for visual effects. In this article, we present a rendered frame prediction framework with a novel motion representation, dubbed motion-guided flow (MoFlow), aiming to overcome the intrinsic limitations of optical flow and motion vectors and to precisely capture the dynamics of intricate geometries, lighting, and translucent objects. Notably, we construct MoFlows using a recurrent feature streaming network, which specializes in learning latent motion features from multiple frames. Extensive experiments demonstrate that, compared to state-of-the-art methods, our method achieves superior visual quality and temporal stability with lower latency. The recurrent mechanism allows our method to predict single or multiple consecutive frames, increasing the frame rate by over 2×. The proposed approach represents a flexible pipeline that meets the demands of various graphics applications, devices, and scenarios.
Citations: 0
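Frame prediction frameworks of this kind warp previously rendered pixels with a per-pixel motion representation. The following minimal sketch shows the basic backward-warping step that any such representation drives (nearest-neighbor lookup with a dense flow field; the function and setup are illustrative, not taken from the paper):

```python
import numpy as np

def backward_warp(frame, flow):
    # Backward warping: each output pixel (y, x) samples the previous frame at
    # (y - flow_y, x - flow_x), here with nearest-neighbor lookup and edge clamp.
    h, w = frame.shape[:2]
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, w - 1)
    return frame[src_y, src_x]

# A 4x4 "frame" whose content moves one pixel to the right.
frame = np.arange(16.0).reshape(4, 4)
flow = np.zeros((4, 4, 2))
flow[..., 0] = 1.0  # uniform motion of +1 pixel in x
warped = backward_warp(frame, flow)  # each row shifts right by one pixel
```

MoFlow's contribution sits in how the flow itself is produced (a learned, recurrent motion representation rather than renderer motion vectors or optical flow); the warping step it feeds remains of this form.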
Patch-Grid: An Efficient and Feature-Preserving Neural Implicit Surface Representation
IF 6.2 · CAS Q1 · Computer Science
ACM Transactions on Graphics · Pub Date: 2025-04-08 · DOI: 10.1145/3727142
Guying Lin, Lei Yang, Congyi Zhang, Hao Pan, Yuhan Ping, Guodong Wei, Taku Komura, John Keyser, Wenping Wang
Abstract: Neural implicit representations are increasingly used to depict 3D shapes owing to their inherent smoothness and compactness, in contrast with traditional discrete representations. Yet multilayer perceptron (MLP) based neural representations, because of their smooth nature, round off sharp corners and edges, making them unsuitable for representing objects with sharp features such as CAD models. Moreover, neural implicit representations need long training times to fit 3D shapes. While previous works address these issues separately, we present a unified neural implicit representation, called Patch-Grid, that efficiently fits complex shapes, preserves the sharp features delineating different patches, and can also represent surfaces with open boundaries and thin geometric features. Patch-Grid learns a signed distance field (SDF) approximating an encompassing surface patch of the shape with a learnable patch feature volume. To form the sharp edges and corners of a CAD model, Patch-Grid merges the learned SDFs via the constructive solid geometry (CSG) approach. Core to the merging process is a novel merge grid design that organizes the different patch feature volumes in a common octree structure. This design ensures robust merging of multiple learned SDFs by confining the CSG operations to localized regions, and it drastically reduces the complexity of the CSG operations in each merging cell, allowing the proposed method to be trained in seconds to fit a complex shape at high fidelity. Experimental results demonstrate that the proposed Patch-Grid representation accurately reconstructs shapes with complex sharp features, open boundaries, and thin geometric elements, achieving state-of-the-art reconstruction quality with high computational efficiency within seconds.
Citations: 0
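The CSG merging of learned SDFs mentioned in the abstract builds on standard min/max combinations of signed distance fields. A minimal sketch, with analytic circle SDFs standing in for the learned patch fields:

```python
import numpy as np

def sdf_circle(p, center, r):
    # Signed distance to a circle: negative inside, positive outside.
    return np.linalg.norm(p - center, axis=-1) - r

# CSG operations on signed distance fields:
def csg_union(a, b):        return np.minimum(a, b)
def csg_intersection(a, b): return np.maximum(a, b)
def csg_difference(a, b):   return np.maximum(a, -b)

# Evaluate two overlapping circles at three query points on the x-axis.
p = np.array([[0.0, 0.0], [1.5, 0.0], [3.0, 0.0]])
a = sdf_circle(p, np.array([0.0, 0.0]), 1.0)  # circle centered at the origin
b = sdf_circle(p, np.array([2.0, 0.0]), 1.0)  # circle centered at (2, 0)

union = csg_union(a, b)  # negative wherever either circle covers the point
```

Because min and max are non-smooth exactly where the two fields agree, the merged surface keeps a sharp crease along that locus; this is the feature-preservation property that smooth single-MLP SDFs lack and that Patch-Grid's localized merging exploits.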
B4M: Breaking Low-Rank Adapter for Making Content-Style Customization
IF 6.2 · CAS Q1 · Computer Science
ACM Transactions on Graphics · Pub Date: 2025-04-05 · DOI: 10.1145/3728461
Yu Xu, Fan Tang, Juan Cao, Yuxin Zhang, Oliver Deussen, Weiming Dong, Jintao Li, Tong-Yee Lee
Abstract: Personalized generation paradigms empower designers to customize visual intellectual properties with the help of textual descriptions by adapting pre-trained text-to-image models on a few images. Recent studies focus on simultaneously customizing content and detailed visual style, but often struggle to keep the two disentangled. In this study, we reconsider the customization of content and style concepts from the perspective of parameter-space construction. Unlike existing methods that use a shared parameter space for content and style learning, we propose a novel framework that separates the parameter space to facilitate individual learning of content and style, introducing "partly learnable projection" (PLP) matrices that split the original adapters into divided sub-parameter spaces. We propose a "break-for-make" customization learning pipeline based on PLP: we first break the original adapters into "up projection" and "down projection" parts for the content and style concepts under an orthogonal prior, and then make the entire parameter space by reconstructing the content and style PLP matrices, using a Riemannian preconditioner to adaptively balance content and style learning. Experiments on various styles, including textures, materials, and artistic styles, show that our method outperforms state-of-the-art single- and multiple-concept learning pipelines in terms of content-style-prompt alignment. Code is available at: https://github.com/ICTMCG/Break-for-make.
Citations: 0
Neurally Integrated Finite Elements for Differentiable Elasticity on Evolving Domains
IF 6.2 · CAS Q1 · Computer Science
ACM Transactions on Graphics · Pub Date: 2025-04-02 · DOI: 10.1145/3727874
Gilles Daviet, Tianchang Shen, Nicholas Sharp, David I.W. Levin
Abstract: We present an elastic simulator for domains defined as evolving implicit functions that is efficient, robust, and differentiable with respect to both shape and material. This simulator is motivated by applications in 3D reconstruction: it is increasingly effective to recover geometry from observed images as implicit functions, but physical applications require accurately simulating, and optimizing for, the behavior of such shapes under deformation, which has remained challenging. Our key technical innovation is to train a small neural network to fit quadrature points for robust numerical integration on implicit grid cells. When coupled with a mixed finite element formulation, this yields a smooth, fully differentiable simulation model connecting the evolution of the underlying implicit surface to its elastic response. We demonstrate the efficacy of our approach on forward simulation of implicits, direct simulation of 3D shapes during editing, and novel physics-based shape and topology optimizations in conjunction with differentiable rendering.
Citations: 0
Diffusing Winding Gradients (DWG): A Parallel and Scalable Method for 3D Reconstruction from Unoriented Point Clouds
IF 6.2 · CAS Q1 · Computer Science
ACM Transactions on Graphics · Pub Date: 2025-04-01 · DOI: 10.1145/3727873
Weizhou Liu, Jiaze Li, Xuhui Chen, Fei Hou, Shiqing Xin, Xingce Wang, Zhongke Wu, Chen Qian, Ying He
Abstract: This paper presents Diffusing Winding Gradients (DWG) for reconstructing watertight surfaces from unoriented point clouds. Our method exploits the alignment between the gradients of the screened generalized winding number (sGWN) field, a robust variant of the standard GWN field, and globally consistent normals to orient points. Starting with an unoriented point cloud, DWG initially assigns a random normal to each point. It computes the corresponding sGWN field and extracts a level set whose iso-value is the average GWN value across all input points. The gradients of this level set are then used to update the point normals. This cycle of recomputing the sGWN field and updating point normals repeats until the sGWN level sets stabilize and their gradients cease to change. Unlike conventional methods, DWG does not rely on solving linear systems or optimizing objective functions, which simplifies its implementation and enhances its suitability for efficient parallel execution. Experimental results demonstrate that DWG significantly outperforms existing methods in runtime performance. For large-scale models with 10 to 20 million points, our CUDA implementation on an NVIDIA RTX 4090 GPU achieves speeds 30-120 times faster than iPSR, the leading sequential method, tested on a high-end PC with an Intel i9 CPU. Furthermore, by employing a screened variant of the GWN, DWG demonstrates enhanced robustness against noise and outliers, and proves effective for models with thin structures and real-world inputs with overlapping and misaligned scans. For source code and additional results, visit our project webpage: https://dwgtech.github.io/.
Citations: 0
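The field that DWG repeatedly recomputes is built on the generalized winding number of an oriented point cloud. The sketch below evaluates the standard unscreened version with area weights, plus a unit-sphere sanity check (illustrative code, not the paper's screened variant or its CUDA implementation):

```python
import numpy as np

def winding_number(query, points, normals, areas):
    # Generalized winding number of an oriented point cloud at a query point:
    # w(q) = sum_i a_i * <n_i, p_i - q> / (4 * pi * |p_i - q|^3).
    d = points - query
    r = np.linalg.norm(d, axis=1)
    return np.sum(areas * np.einsum("ij,ij->i", normals, d) / (4 * np.pi * r**3))

# Sanity check on a unit sphere: outward normals equal the positions, and each
# sample carries an equal share of the total surface area, 4*pi / n.
rng = np.random.default_rng(1)
pts = rng.normal(size=(2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
normals = pts.copy()
areas = np.full(len(pts), 4 * np.pi / len(pts))

inside = winding_number(np.zeros(3), pts, normals, areas)                 # near 1
outside = winding_number(np.array([0.0, 0.0, 3.0]), pts, normals, areas)  # near 0
```

DWG's loop seeds random normals instead of the true ones, evaluates this kind of field on a grid, and pulls each normal toward the gradient of the mean-value level set until the field stops changing; correctly oriented normals make the field jump cleanly from 0 outside to 1 inside, as in the sanity check.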
Fast Determination and Computation of Self-intersections for NURBS Surfaces
IF 6.2 · CAS Q1 · Computer Science
ACM Transactions on Graphics · Pub Date: 2025-03-31 · DOI: 10.1145/3727620
Kai Li, Xiaohong Jia, Falai Chen
Abstract: Self-intersections of NURBS surfaces are unavoidable during the CAD modeling process, especially in operations such as offsetting or sweeping. Their presence can cause problems in later simulation and manufacturing processes, so fast detection of self-intersections in NURBS surfaces is in high demand in industrial applications. Self-intersections are essentially singular points on the surface. Although the mathematics community has a long history of studying singular points, fast and robust determination and computation of self-intersections has remained a challenging problem in practice. In this paper, we construct an algebraic signature whose non-negativity is proven to be sufficient for excluding the existence of self-intersections from a global perspective. An efficient algorithm for determining the existence of self-intersections is obtained by applying this signature recursively. Once a self-intersection is detected, its locus can also be computed, if needed, via further recursive cross-use of this signature and the surface-surface intersection function. Various experiments and comparisons with existing methods, as well as with geometry kernels including OCCT and ACIS, validate the robustness and efficiency of our algorithm. We also adapt our algorithm to self-intersection elimination, self-intersection trimming, and applications in mesh generation, Boolean operations, and shelling.
Citations: 0