Graphical Models: Latest Publications

DP-Adapter: Dual-pathway adapter for boosting fidelity and text consistency in customizable human image generation
IF 2.2 | CAS Tier 4 | Computer Science
Graphical Models, Vol. 141, Article 101292 | Pub Date: 2025-08-15 | DOI: 10.1016/j.gmod.2025.101292
Ye Wang, Ruiqi Liu, Xuping Xie, Lanjun Wang, Zili Yi, Rui Ma

Abstract: With the growing popularity of personalized human content creation and sharing, there is a rising demand for advanced techniques in customized human image generation. However, current methods struggle to simultaneously maintain the fidelity of human identity and ensure the consistency of textual prompts, often resulting in suboptimal outcomes. This shortcoming is primarily due to the lack of effective constraints during the simultaneous integration of visual and textual prompts, leading to unhealthy mutual interference that compromises the full expression of both types of input. Building on prior research suggesting that visual and textual conditions influence different regions of an image in distinct ways, we introduce a novel Dual-Pathway Adapter (DP-Adapter) to enhance both high-fidelity identity preservation and textual consistency in personalized human image generation. Our approach begins by decoupling the target human image into visually sensitive and text-sensitive regions. For visually sensitive regions, DP-Adapter employs an Identity-Enhancing Adapter (IEA) to preserve detailed identity features. For text-sensitive regions, we introduce a Textual-Consistency Adapter (TCA) to minimize visual interference and ensure the consistency of textual semantics. To seamlessly integrate these pathways, we develop a Fine-Grained Feature-Level Blending (FFB) module that efficiently combines hierarchical semantic features from both pathways, resulting in more natural and coherent synthesis outcomes. Additionally, DP-Adapter supports various innovative applications, including controllable headshot-to-full-body portrait generation, age editing, old-photo to reality, and expression editing. Extensive experiments demonstrate that DP-Adapter outperforms state-of-the-art methods in both visual fidelity and text consistency, highlighting its effectiveness and versatility in the field of human image generation.

Citations: 0
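The abstract gives no implementation details, but the core of feature-level blending (a learned, per-pixel soft mask deciding how much each pathway contributes at each location) can be sketched in PyTorch. This is a minimal illustrative sketch, not the authors' FFB module: the 1x1-conv mask head, channel sizes, and module name are all assumptions.

```python
# Illustrative sketch of dual-pathway feature blending (assumptions, not the
# paper's code): a mask head predicts, per pixel, how much the identity
# pathway vs. the text pathway should contribute.
import torch
import torch.nn as nn

class FeatureLevelBlend(nn.Module):
    """Blend identity-pathway and text-pathway features with a soft mask."""
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convs predict a per-pixel blending weight from both pathways
        self.mask_head = nn.Sequential(
            nn.Conv2d(2 * channels, channels // 2, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, f_identity: torch.Tensor, f_text: torch.Tensor) -> torch.Tensor:
        m = self.mask_head(torch.cat([f_identity, f_text], dim=1))  # (B,1,H,W)
        # visually sensitive pixels (m -> 1) favor the identity pathway,
        # text-sensitive pixels (m -> 0) favor the textual pathway
        return m * f_identity + (1.0 - m) * f_text

blend = FeatureLevelBlend(channels=64)
f_id, f_tx = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
print(blend(f_id, f_tx).shape)  # torch.Size([1, 64, 32, 32])
```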
Real-time neural soft shadow synthesis from hard shadows
IF 2.2 | CAS Tier 4 | Computer Science
Graphical Models, Vol. 141, Article 101294 | Pub Date: 2025-08-14 | DOI: 10.1016/j.gmod.2025.101294
Ran Chen, Xiang Xu, KaiYao Ge, Yanning Xu, Xiangxu Meng, Lu Wang

Abstract: Soft shadows play a crucial role in enhancing visual realism in real-time rendering. Although traditional shadow mapping techniques offer high efficiency, they often suffer from artifacts and limited quality. In contrast, ray tracing can produce high-fidelity soft shadows but incurs substantial computational cost. In this paper, we propose a general-purpose, real-time soft shadow generation method based on neural networks. To encode shadow geometry, we use hard shadows produced by shadow mapping as input to our network, which effectively captures the spatial layout of shadow positions and contours. A lightweight U-Net architecture then refines this input to synthesize high-quality soft shadows in real time. The generated shadows closely approximate ray-traced references in visual fidelity. Compared to existing learning-based methods, our approach produces higher-quality soft shadows and offers improved generalization across diverse scenes. Furthermore, it requires no scene-specific precomputation, making it directly applicable to practical real-time rendering scenarios.

Citations: 0
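A minimal sketch of the described pipeline, assuming PyTorch: a small U-Net takes the hard shadow map from shadow mapping and predicts a soft visibility map. The depths, channel counts, and single skip connection below are illustrative, not the paper's network.

```python
# Tiny U-Net sketch: hard shadow mask in, soft visibility in [0,1] out.
# Architecture details are assumptions for illustration only.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class TinyShadowUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = conv_block(1, 16), conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec = conv_block(16 + 32, 16)   # skip connection concatenation
        self.out = nn.Conv2d(16, 1, 1)

    def forward(self, hard_shadow):
        e1 = self.enc1(hard_shadow)          # full resolution
        e2 = self.enc2(self.pool(e1))        # half resolution
        d = self.dec(torch.cat([e1, self.up(e2)], dim=1))
        return torch.sigmoid(self.out(d))    # soft visibility map

net = TinyShadowUNet()
hard = (torch.rand(1, 1, 128, 128) > 0.5).float()  # stand-in hard shadow mask
print(net(hard).shape)  # torch.Size([1, 1, 128, 128])
```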
Feature line extraction based on winding number
IF 2.2 | CAS Tier 4 | Computer Science
Graphical Models, Vol. 141, Article 101296 | Pub Date: 2025-08-13 | DOI: 10.1016/j.gmod.2025.101296
Shuxian Cai, Juan Cao, Bailin Deng, Zhonggui Chen

Abstract: Sharp feature lines provide critical structural information in 3D models and are essential for geometric processing. However, the performance of existing algorithms for extracting feature lines from point clouds remains sensitive to the quality of the input data. This paper introduces an algorithm specifically designed to extract feature lines from 3D point clouds. The algorithm calculates the winding number for each point and uses variations in this number within edge regions to identify feature points. These feature points are then mapped onto a cuboid structure to obtain key feature points and capture neighboring relationships. Finally, feature lines are fitted based on the connectivity of key feature points. Extensive experiments demonstrate that this algorithm not only accurately detects feature points on potential sharp edges, but also outperforms existing methods in extracting subtle feature lines and handling complex point clouds.

Citations: 0
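The abstract does not spell out its winding-number formulation, but the standard generalized winding number for oriented point clouds (Barill et al. 2018) illustrates the quantity involved; points where this value varies sharply across a neighborhood are candidates for sharp edges. A NumPy sketch, assuming per-point area weights:

```python
import numpy as np

def winding_number(query, points, normals, areas):
    """Generalized winding number of an oriented point cloud at `query`.

    Standard point-cloud approximation (may differ from the paper's exact
    formulation):  w(q) = sum_i a_i * (p_i - q) . n_i / (4*pi*||p_i - q||^3)
    Near a watertight surface, w jumps from ~0 (outside) to ~1 (inside).
    """
    d = points - query                                # (N, 3) offsets
    r3 = np.linalg.norm(d, axis=1) ** 3 + 1e-12       # avoid divide-by-zero
    return np.sum(areas * np.einsum("ij,ij->i", d, normals) / r3) / (4.0 * np.pi)

# toy example: points sampled on a unit sphere with outward normals
rng = np.random.default_rng(0)
n = rng.normal(size=(2000, 3)); n /= np.linalg.norm(n, axis=1, keepdims=True)
areas = np.full(2000, 4.0 * np.pi / 2000)             # equal-area weights
print(winding_number(np.zeros(3), n, n, areas))       # ~1.0 (inside)
print(winding_number(np.array([0, 0, 2.0]), n, n, areas))  # ~0.0 (outside)
```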
GPU-accelerated rendering of vector strokes with piecewise quadratic approximation
IF 2.2 | CAS Tier 4 | Computer Science
Graphical Models, Vol. 141, Article 101295 | Pub Date: 2025-08-13 | DOI: 10.1016/j.gmod.2025.101295
Xuhai Chen, Guangze Zhang, Wanyi Wang, Juan Cao, Zhonggui Chen

Abstract: Vector graphics are widely used in areas such as logo design and digital painting, with both stroked and filled paths as primitives. GPU-based rendering for filled paths already has well-established solutions. Due to the complexity of stroked paths, existing methods often render them by approximating strokes with filled shapes. However, the performance of existing methods still leaves room for improvement. This paper presents a GPU-accelerated rendering algorithm, together with a curvature-guided parallel adaptive subdivision method, to accurately and efficiently render stroke areas. Additionally, we propose an efficient Newton-iteration-based method for arc-length parameterization of quadratic curves, along with an error estimation technique. This enables a parallel rendering approach for dashed stroke styles and arc-length-guided texture filling. Experimental results show that our method achieves average speedups of 3.4x for rendering quadratic stroked paths and 2.5x for rendering quadratic dashed strokes, compared to the best existing approaches.

Citations: 0
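The Newton iteration for arc-length parameterization can be illustrated on a quadratic Bezier curve: to find the parameter t where the accumulated length equals a target s, solve f(t) = s(t) - s_target with f'(t) = |B'(t)|. The sketch below evaluates the arc-length integral by fixed Gauss-Legendre quadrature; the paper's exact evaluation scheme and error estimate may differ.

```python
import numpy as np

def quad_bezier_deriv(p0, p1, p2, t):
    """Derivative B'(t) of a quadratic Bezier curve."""
    return 2 * (1 - t) * (p1 - p0) + 2 * t * (p2 - p1)

def arc_length(p0, p1, p2, t, n=16):
    """Arc length from 0 to t via fixed Gauss-Legendre quadrature."""
    x, w = np.polynomial.legendre.leggauss(n)
    u = 0.5 * t * (x + 1.0)                   # map nodes from [-1,1] to [0,t]
    speeds = np.linalg.norm(quad_bezier_deriv(p0, p1, p2, u[:, None]), axis=1)
    return 0.5 * t * np.dot(w, speeds)

def t_at_length(p0, p1, p2, s_target, iters=8):
    """Newton iteration for t with s(t) = s_target; uses f'(t) = |B'(t)|."""
    total = arc_length(p0, p1, p2, 1.0)
    t = s_target / total                      # uniform-speed initial guess
    for _ in range(iters):
        f = arc_length(p0, p1, p2, t) - s_target
        fp = np.linalg.norm(quad_bezier_deriv(p0, p1, p2, t))
        t = np.clip(t - f / fp, 0.0, 1.0)
    return t

p0, p1, p2 = np.array([0., 0.]), np.array([1., 2.]), np.array([2., 0.])
total = arc_length(p0, p1, p2, 1.0)
t_mid = t_at_length(p0, p1, p2, 0.5 * total)  # halfway along the stroke
print(t_mid, arc_length(p0, p1, p2, t_mid) / total)  # ratio ~0.5
```

The same solve, run per dash boundary in parallel, is what makes arc-length-guided dashing and texturing amenable to the GPU.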
Domain-Incremental Learning Paradigm for scene understanding via Pseudo-Replay Generation
IF 2.2 | CAS Tier 4 | Computer Science
Graphical Models, Vol. 141, Article 101290 | Pub Date: 2025-08-11 | DOI: 10.1016/j.gmod.2025.101290
Zhifeng Xie, Rui Qiu, Qile He, Mengtian Li, Xin Tan

Abstract: Scene understanding is a computer vision task that involves grasping the pixel-level distribution of objects. Unlike most research, which focuses on single-scene models, we consider a more versatile proposal: domain-incremental learning for scene understanding. This allows us to adapt well-studied single-scene models into multi-scene models, reducing data requirements and ensuring model flexibility. However, domain-incremental learning that leverages correlations between scene domains has yet to be explored. To address this challenge, we propose a Domain-Incremental Learning Paradigm (D-ILP) for scene understanding, along with a new strategy of Pseudo-Replay Generation (PRG) that requires no manual labeling. Specifically, D-ILP leverages pre-trained single-scene models and incremental images for supervised training to acquire new knowledge from other scenes. Built on a pre-trained generative model, PRG can controllably generate pseudo-replays resembling source images from incremental images and text prompts. These pseudo-replays are used to minimize catastrophic forgetting in the original scene. We perform experiments with three publicly accessible models: Mask2Former, Segformer, and DeepLabv3+. By successfully transforming these single-scene models into multi-scene models, we achieve high-quality parsing results for original and new scenes simultaneously. Meanwhile, our analysis of D-ILP confirms the validity and rationality of the method.

Citations: 0
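A schematic sketch of the pseudo-replay training loop described in the abstract, with assumptions throughout: new-domain batches are supervised as usual, while generated pseudo-replays of the original domain, pseudo-labeled by the frozen pre-trained model, anchor old knowledge. The names `generate_pseudo_replay`, `teacher`, and the loss weighting are placeholders, not the paper's API.

```python
# Sketch of one incremental training step (illustrative, not D-ILP's code).
import torch
import torch.nn.functional as F

def incremental_step(seg_model, teacher, new_imgs, new_labels,
                     generate_pseudo_replay, opt, replay_weight=1.0):
    # 1) supervised loss on the incremental (new-scene) data
    new_loss = F.cross_entropy(seg_model(new_imgs), new_labels)

    # 2) pseudo-replays resemble source-domain images; the frozen teacher
    #    (the pre-trained single-scene model) provides their pseudo-labels,
    #    so no manual annotation of the old domain is needed
    replay_imgs = generate_pseudo_replay(new_imgs)   # e.g. a generative model
    with torch.no_grad():
        pseudo_labels = teacher(replay_imgs).argmax(dim=1)
    replay_loss = F.cross_entropy(seg_model(replay_imgs), pseudo_labels)

    # balancing both terms trades new-scene accuracy against forgetting
    loss = new_loss + replay_weight * replay_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```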
Position-free multiple-scattering computations for micrograin BSDF model
IF 2.2 | CAS Tier 4 | Computer Science
Graphical Models, Vol. 141, Article 101288 | Pub Date: 2025-08-05 | DOI: 10.1016/j.gmod.2025.101288
Fangfang Zhou, Haiyu Shen, Mingzhen Li, Ying Zhao, Chongke Bi

Abstract: Porous materials (e.g., weathered stone, industrial coatings) exhibit complex optical effects due to their micrograin and pore structures, posing challenges for photorealistic rendering. Explicit geometry models struggle to characterize their micrograin distributions at microscopic scales, while the single-scattering microfacet model fails to accurately capture multiple-scattering effects and causes energy non-conservation artifacts, manifesting as unrealistic luminance decay. We propose an enhanced micrograin BSDF model that accurately accounts for multiple scattering. First, we introduce a visible normal distribution function (VNDF) sampling method via rejection sampling. Building on VNDF sampling, we derive a position-free microsurface formulation incorporating both inter-micrograin and micrograin-to-base interactions. Furthermore, we propose a practical random walk method to simulate microsurface scattering, which accurately solves the derived formulation. Our micrograin BSDF model effectively eliminates the energy loss artifacts inherent in the previous model while significantly reducing noise, providing a physically accurate yet artistically controllable solution for rendering porous materials.

Citations: 0
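The VNDF-by-rejection idea is standard enough to sketch: draw a microfacet normal from the plain NDF, then accept it with probability proportional to max(0, wo . m), which yields samples with density proportional to the visible-normal distribution. The sketch below substitutes a GGX NDF for the paper's micrograin distribution; that substitution, and the sampling routine itself, are assumptions for illustration.

```python
import numpy as np

def sample_ggx_ndf(alpha, rng):
    """Sample a microfacet normal from the plain GGX NDF (inverse CDF)."""
    u1, u2 = rng.random(), rng.random()
    theta = np.arctan(alpha * np.sqrt(u1 / (1.0 - u1)))
    phi = 2.0 * np.pi * u2
    st = np.sin(theta)
    return np.array([st * np.cos(phi), st * np.sin(phi), np.cos(theta)])

def sample_vndf_rejection(wo, alpha, rng, max_tries=256):
    """Visible-normal sampling by rejection: draw m ~ D(m), accept with
    probability proportional to max(0, wo . m).  Accepted samples have
    density proportional to max(0, wo . m) * D(m), i.e. the VNDF up to
    normalization.  (GGX stand-in; the paper uses a micrograin NDF.)"""
    for _ in range(max_tries):
        m = sample_ggx_ndf(alpha, rng)
        if rng.random() < max(0.0, np.dot(wo, m)):   # wo . m <= 1, so valid
            return m
    return np.array([0.0, 0.0, 1.0])                 # fallback: macro normal

rng = np.random.default_rng(1)
wo = np.array([0.6, 0.0, 0.8])                       # grazing-ish view direction
m = sample_vndf_rejection(wo, alpha=0.3, rng=rng)
print(m, np.dot(wo, m) > 0)
```

Chaining such visible-normal samples from bounce to bounce, without tracking positions on the microsurface, is the essence of a position-free random walk.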
Carvable packing of revolved 3D objects for subtractive manufacturing
IF 2.2 | CAS Tier 4 | Computer Science
Graphical Models, Vol. 141, Article 101282 | Pub Date: 2025-08-05 | DOI: 10.1016/j.gmod.2025.101282
Chengdong Wei, Shuai Feng, Hao Xu, Qidong Zhang, Songyang Zhang, Zongzhen Li, Changhe Tu, Haisen Zhao

Abstract: Revolved 3D objects are widely used in industrial, manufacturing, and artistic fields, with subtractive manufacturing being a common production method. A key preprocessing step is to maximize raw material utilization by generating as many rough-machined inputs as possible from a single stock piece, which poses a packing problem constrained by tool accessibility. The main challenge is integrating tool accessibility into packing. This paper introduces the carvable packing problem for revolved objects, a critical but under-researched area in subtractive manufacturing. We propose a new carvable coarsening hull and a packing strategy that uses beam search and a bottom-up placement method to position these hulls in the stock material. Our method was tested on diverse sets of revolved objects with different geometries, and physical tests were conducted on a 5-axis machining platform, proving its ability to enhance material use and manufacturability.

Citations: 0
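A generic beam-search skeleton clarifies the packing strategy at an assumption level (this is not the paper's implementation): each state is a partial packing, candidate bottom-up placements of the next hull are scored, and only the best `beam_width` partial packings survive each round. `candidate_placements` and `score` are placeholders for the geometric feasibility and material-utilization tests.

```python
# Beam-search skeleton for sequential placement problems (illustrative).
from typing import Callable, List, Tuple

def beam_search_packing(objects: list, beam_width: int,
                        candidate_placements: Callable, score: Callable):
    beam: List[Tuple[float, list]] = [(0.0, [])]     # (score, placements so far)
    for obj in objects:                              # place one hull per round
        expanded = []
        for _, placed in beam:
            for placement in candidate_placements(obj, placed):
                new_placed = placed + [placement]
                expanded.append((score(new_placed), new_placed))
        if not expanded:                             # no feasible placement left
            break
        expanded.sort(key=lambda e: e[0], reverse=True)  # higher score = better
        beam = expanded[:beam_width]                 # keep best partial packings
    return beam[0][1] if beam else []
```

Keeping several partial packings alive, rather than committing greedily, is what lets a poor early placement be recovered from later in the sequence.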
TerraCraft: City-scale generative procedural modeling with natural languages
IF 2.2 | CAS Tier 4 | Computer Science
Graphical Models, Vol. 141, Article 101285 | Pub Date: 2025-08-05 | DOI: 10.1016/j.gmod.2025.101285
Zichen Xi, Zhihao Yao, Jiahui Huang, Zi-Qi Lu, Hongyu Yan, Tai-Jiang Mu, Zhigang Wang, Qun-Ce Xu

Abstract: Automated generation of large-scale 3D scenes presents a significant challenge due to the resource-intensive training and datasets required. This is in sharp contrast to the 2D counterparts, which have become readily available due to their superior speed and quality. However, prior work in 3D procedural modeling has demonstrated promise in generating high-quality assets using a combination of algorithms and user-defined rules. To leverage the best of both 2D generative models and procedural modeling tools, we present TerraCraft, a novel framework for generating geometrically high-quality 3D city-scale scenes. By utilizing Large Language Models (LLMs), TerraCraft can generate city-scale 3D scenes from natural text descriptions. With its intuitive operation and powerful capabilities, TerraCraft enables users to easily create geometrically high-quality scenes ready for various applications, such as virtual reality and game design. We validate TerraCraft's effectiveness through extensive experiments and user studies, showing its superior performance compared to existing baselines.

Citations: 0
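A speculative sketch of the LLM-to-procedural pipeline implied by the abstract: a language model turns a text description into a structured layout, which rule-based generation then instantiates. Everything here is an assumption; `ask_llm` is a stub rather than a real API call, and the layout schema is invented for illustration, not TerraCraft's format.

```python
import json
import random

def ask_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a structured layout spec."""
    return json.dumps({"districts": [
        {"kind": "residential", "blocks": 4, "max_floors": 6},
        {"kind": "commercial", "blocks": 2, "max_floors": 20},
    ]})

def procedural_city(description: str, seed: int = 0):
    """Text -> LLM layout plan -> rule-based building instantiation."""
    random.seed(seed)
    layout = json.loads(ask_llm(f"Plan a city layout for: {description}"))
    buildings = []
    for d, district in enumerate(layout["districts"]):
        for b in range(district["blocks"]):
            buildings.append({
                "district": district["kind"],
                "position": (d * 100 + b * 20, 0),   # toy grid placement rule
                "floors": random.randint(1, district["max_floors"]),
            })
    return buildings

for bld in procedural_city("a small riverside town")[:3]:
    print(bld)
```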
DDD++: Exploiting Density map consistency for Deep Depth estimation in indoor environments
IF 2.5 | CAS Tier 4 | Computer Science
Graphical Models, Vol. 140, Article 101281 | Pub Date: 2025-07-22 | DOI: 10.1016/j.gmod.2025.101281
Giovanni Pintore, Marco Agus, Alberto Signoroni, Enrico Gobbetti

Abstract: We introduce a novel deep neural network designed for fast and structurally consistent monocular 360° depth estimation in indoor settings. Our model generates a spherical depth map from a single gravity-aligned or gravity-rectified equirectangular image, ensuring the predicted depth aligns with the typical depth distribution and structural features of cluttered indoor spaces, which are generally enclosed by walls, floors, and ceilings. By leveraging the distinctive vertical and horizontal patterns found in man-made indoor environments, we propose a streamlined network architecture that incorporates gravity-aligned feature flattening and specialized vision transformers. Through flattening, these transformers fully exploit the omnidirectional nature of the input without requiring patch segmentation or positional encoding. To further enhance structural consistency, we introduce a novel loss function that assesses density map consistency by projecting points from the predicted depth map onto a horizontal plane and a cylindrical proxy. This lightweight architecture requires fewer tunable parameters and computational resources than competing methods. Our comparative evaluation shows that our approach improves depth estimation accuracy while ensuring greater structural consistency compared to existing methods. For these reasons, it promises to be suitable for incorporation into real-time solutions, as well as serving as a building block in more complex structural analysis and segmentation methods.

Citations: 0
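The density-map consistency idea can be sketched geometrically: back-project the equirectangular depth map to 3D points, histogram them on the horizontal plane, and compare the resulting density maps. The NumPy sketch below is non-differentiable (a trainable loss would need soft binning, e.g. in PyTorch), and the camera model, binning extent, and L1 comparison are assumptions; the paper additionally uses a cylindrical proxy.

```python
import numpy as np

def equirect_to_points(depth):
    """Back-project an equirectangular depth map to 3D points (y-up, gravity-aligned)."""
    h, w = depth.shape
    v, u = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    theta = (u / w) * 2.0 * np.pi - np.pi            # longitude
    phi = (v / h) * np.pi - np.pi / 2.0              # latitude
    dirs = np.stack([np.cos(phi) * np.sin(theta),
                     np.sin(phi),
                     np.cos(phi) * np.cos(theta)], axis=-1)
    return (dirs * depth[..., None]).reshape(-1, 3)

def floor_density(points, bins=64, extent=5.0):
    """Histogram of points projected onto the horizontal (x, z) plane."""
    hist, _, _ = np.histogram2d(points[:, 0], points[:, 2],
                                bins=bins, range=[[-extent, extent]] * 2)
    return hist / max(points.shape[0], 1)

def density_consistency_loss(pred_depth, gt_depth):
    """L1 distance between horizontal-plane density maps (sketch of the idea)."""
    d_pred = floor_density(equirect_to_points(pred_depth))
    d_gt = floor_density(equirect_to_points(gt_depth))
    return np.abs(d_pred - d_gt).sum()

gt = np.full((64, 128), 3.0)                          # toy constant-depth panorama
pred = gt + np.random.default_rng(2).normal(0, 0.1, gt.shape)
print(density_consistency_loss(pred, gt))
```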
Sparse support path generation for multi-axis curved layer fused filament fabrication
IF 2.5 | CAS Tier 4 | Computer Science
Graphical Models, Vol. 140, Article 101280 | Pub Date: 2025-07-11 | DOI: 10.1016/j.gmod.2025.101280
Tak Yu Lau, Dong He, Yamin Li, Yihe Wang, Danjie Bi, Lulu Huang, Pengcheng Hu, Kai Tang

Abstract: In recent years, multi-axis fused filament fabrication has emerged as a solution to address the limitations of the conventional 2.5D printing process. By using a curved layering strategy and varying the print direction, final parts can be printed with reduced support structures, enhanced surface quality, and improved mechanical properties. However, support structures are still sometimes needed in the multi-axis scheme when the support-free requirement conflicts with other constraints. Most current support generation algorithms target conventional 2.5D printing and are not applicable to multi-axis printing. To address this issue, we propose a sparse and curved support filling pattern for multi-axis printing, aiming to enhance material efficiency by fully utilizing the bridge technique. First, overhang regions are detected by identifying the overhang points given a multi-axis nozzle path. Then, an optimization framework for the support guide curve is proposed to minimize its total length while ensuring that overhang filaments can be stably supported. Lastly, the support layer slices and support segments that satisfy the self-supported criterion are generated for the final support printing paths. Simulation and experiments have been performed to validate the proposed methodology.

Citations: 0
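The first step, overhang-point detection along a multi-axis nozzle path, can be sketched as a geometric test: a path point counts as overhung if no previously deposited material lies inside a support cone below it along the local build direction. The cone criterion, half-angle, and search radius below are assumptions, not the paper's exact test.

```python
import numpy as np

def overhang_points(path_pts, build_dirs, deposited,
                    cone_half_angle_deg=45.0, search_radius=1.0):
    """Flag nozzle-path points with no prior material inside a support cone
    below them (along the local build direction).  Geometric sketch only."""
    cos_limit = np.cos(np.radians(cone_half_angle_deg))
    flags = []
    for p, b in zip(path_pts, build_dirs):
        rel = deposited - p                       # vectors to earlier material
        dist = np.linalg.norm(rel, axis=1)
        near = (dist > 1e-9) & (dist < search_radius)
        # cosine between "down" (-b) and each offset; inside cone => supported
        cosang = rel[near] @ (-b) / dist[near]
        flags.append(not np.any(cosang > cos_limit))
    return np.array(flags)                        # True = overhang point

# toy check: one point directly above material, one floating over empty space
deposited = np.array([[0.0, 0.0, 0.0]])
pts = np.array([[0.0, 0.0, 0.2], [5.0, 0.0, 0.2]])
dirs = np.tile(np.array([0.0, 0.0, 1.0]), (2, 1))  # local build direction +z
print(overhang_points(pts, dirs, deposited))       # [False  True]
```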