Computers & Graphics-UK: Latest Publications

VITON-DRR: Details retention virtual try-on via non-rigid registration
IF 2.5 · Q4 · Computer Science
Computers & Graphics-UK · Pub Date: 2025-07-14 · DOI: 10.1016/j.cag.2025.104288 · Vol. 131, Article 104288
Ben Li, Minqi Li, Jie Ren, Kaibing Zhang
Abstract: Image-based virtual try-on aims to fit a target garment to a specific person image and has attracted extensive research attention because of its huge application potential in the e-commerce and fashion industries. To generate high-quality try-on results, accurately warping the clothing item to fit the human body plays a significant role, as slight misalignment may lead to unrealistic artifacts in the fitting image. Most existing methods warp the clothing by feature matching and thin-plate spline (TPS). However, these often fail to preserve clothing details due to self-occlusion, severe misalignment between poses, and similar issues. To address these challenges, this paper proposes a detail-retention virtual try-on method via accurate non-rigid registration (VITON-DRR) for diverse human poses. Specifically, we reconstruct human semantic segmentation using a dual-pyramid-structured feature extractor. Then, a novel Deformation Module is designed to extract cloth key points and warp them through an accurate non-rigid registration algorithm. Finally, an Image Synthesis Module is designed to synthesize the deformed garment image and generate the human pose information adaptively. Compared with traditional methods, the proposed VITON-DRR makes the deformation of fitting images more accurate and retains more garment details. The experimental results demonstrate that the proposed method performs better than state-of-the-art methods. Our code is publicly available at https://github.com/minqili/VITON-DRR-main.
Citations: 0
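The paper contrasts its non-rigid registration against the common TPS baseline. For readers unfamiliar with TPS warping, here is a minimal sketch using OpenCV's thin-plate spline shape transformer; it requires the opencv-contrib-python package, and the correspondence points and image path are placeholders rather than the paper's data:

```python
import cv2
import numpy as np

# Hypothetical garment/body correspondences (in practice these come from
# feature matching); OpenCV expects shape (1, N, 2) float32 arrays.
src_pts = np.float32([[30, 40], [200, 45], [110, 180],
                      [40, 300], [210, 290]]).reshape(1, -1, 2)
dst_pts = np.float32([[35, 50], [190, 60], [115, 175],
                      [55, 280], [200, 270]]).reshape(1, -1, 2)

tps = cv2.createThinPlateSplineShapeTransformer()
matches = [cv2.DMatch(i, i, 0) for i in range(src_pts.shape[1])]
# OpenCV estimates the backward map, so the target points go first when
# the goal is to warp the source image onto the target layout.
tps.estimateTransformation(dst_pts, src_pts, matches)

garment = cv2.imread("cloth.jpg")   # placeholder input image
warped = tps.warpImage(garment)     # TPS-warped garment
```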
Hyperkinetic movement disorder analysis using multidimensional projections
IF 2.5 · Q4 · Computer Science
Computers & Graphics-UK · Pub Date: 2025-07-10 · DOI: 10.1016/j.cag.2025.104276 · Vol. 131, Article 104276
Andressa Silva da Silva, Eduardo F. Ribeiro, Jelle R. Dalenberg, Alexandru C. Telea, Marina A.J. Tijssen, João Luiz Dihl Comba
Abstract: Hyperkinetic movement disorders are a group of conditions characterized by involuntary movements such as tremors, sudden/uncontrollable jerks, abnormal postures, and random movements, which may have major impacts on individuals' quality of life. The diagnosis of these disorders often depends on subjective clinical assessments, and there is a need for automatic methods that can support this diagnosis. Established clinical neurophysiological approaches use motion sensors to collect motion data from patients performing postural, action, or resting tasks to analyze and classify the types of disorders that affect patients. However, making sense of the high-dimensional space formed by patients, tasks, sensors, and disorders is challenging and time-consuming. In this paper, we propose a workflow to explore this space, select appropriate subsets of its data, transform it, and analyze it using multidimensional projections. We show how our workflow can lead to insights into the design of pipelines that automatically separate individuals with disorders from healthy individuals.
Citations: 0
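The projection step the workflow builds on can be illustrated independently of the paper's clinical data. A minimal sketch using scikit-learn's t-SNE on a placeholder feature matrix follows; the feature dimensions and labels are invented for illustration, and the paper's choice of projection technique may differ:

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: one row per (patient, task, sensor) sample,
# columns are motion features such as tremor amplitude or jerk statistics.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 32))
labels = rng.integers(0, 2, size=200)   # 0 = healthy, 1 = disorder (placeholder)

X = StandardScaler().fit_transform(features)
embedding = TSNE(n_components=2, perplexity=30,
                 random_state=0).fit_transform(X)
# Plotting `embedding` colored by `labels` shows whether the selected
# feature subset separates the two groups in the projected space.
```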
Highlights and Horizons: Notes on Computers & Graphics Issue 130
IF 2.5 · Q4 · Computer Science
Computers & Graphics-UK · Pub Date: 2025-07-10 · DOI: 10.1016/j.cag.2025.104294 · Vol. 130, Article 104294
Joaquim Jorge (Editor-in-Chief)
(No abstract available.)
Citations: 0
Neuralux: Improving the decomposition of single-illumination multiview outdoor scenes
IF 2.5 · Q4 · Computer Science
Computers & Graphics-UK · Pub Date: 2025-07-10 · DOI: 10.1016/j.cag.2025.104279 · Vol. 131, Article 104279
Mario Alfonso-Arsuaga, Andrea Castiella-Aguirrezabala, Jorge García-González, Jesús Bonilla, Jorge López Moreno
Abstract: We present a novel approach that combines intrinsic decomposition of outdoor scenes with real-time rendering of new views under unknown illumination. Building on top of the state of the art, our method tackles the challenges of limited information in single-illumination scenarios by introducing pixel-level regularization terms that align inferred material segmentation labels with albedo consistency estimators. For outdoor illumination, we adopt a physically-based sky model, which increases the robustness of the intrinsic decomposition by relying on a reduced set of expressive parameters. Our approach enables partial retraining of 2DGS/3DGS models to render de-illuminated scenes in real time, with seamless integration into rendering engines for enhanced scene lighting, achieving better decomposition results than the state of the art. We present several experiments, including ablation studies and material segmentation source comparisons, demonstrating our method's advantages over previous work, despite remaining challenges in handling fine shadow details and view-dependent effects due to the limitations of the Lambertian shading model.
Citations: 0
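The closing caveat refers to the Lambertian shading model that the decomposition assumes. A minimal sketch of that image-formation model makes its limits concrete: shadows and view-dependent effects such as specularity fall outside it. All inputs below are placeholders, and the paper's sky model is parametric and richer than the single sun direction used here:

```python
import numpy as np

def lambertian_render(albedo, normals, sun_dir, ambient=0.1):
    """Lambertian image formation: I = albedo * (ambient + max(0, n . l)).

    albedo:  (H, W, 3) reflectance, normals: (H, W, 3) unit normals,
    sun_dir: (3,) unit vector toward the sun. Shading depends only on
    geometry and light, never on the viewing direction.
    """
    n_dot_l = np.clip(normals @ sun_dir, 0.0, None)   # (H, W)
    shading = ambient + n_dot_l                        # view-independent term
    return albedo * shading[..., None]                 # recomposed image
```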
COVITON: Consistency driven integration of TPS and flow for virtual tryon
IF 2.5 · Q4 · Computer Science
Computers & Graphics-UK · Pub Date: 2025-07-09 · DOI: 10.1016/j.cag.2025.104289 · Vol. 131, Article 104289
Sanhita Pathak, Vinay Kaushik, Brejesh Lall
Abstract: Achieving realistic garment transfer while preserving both human and garment details remains a challenging task in signal processing. The garment warping stage in virtual try-on plays a pivotal role in determining the visual fidelity of the final result. Existing methods commonly address this challenge by employing geometric transformations with control points using Thin-Plate Spline (TPS) or flow-based warping techniques. In this paper, we present an approach that jointly refines the try-on results of the TPS and flow modules using a novel consistency constraint, and fuses them through a Garment Fusion Attention Module (GFAM). GFAM refines the flow-warped garment using an attention-based blending strategy derived from the TPS-warped garment, and integrates the background person with the garment image to produce the final try-on image. This not only preserves local and global textures but also achieves accurate garment deformation based on the target person's pose. Our key innovation lies in the introduction of a novel Intra-Field Consistency loss, which ensures that the offset values computed by the TPS- and flow-based warping methods closely align with each other, and the GFAM block, which facilitates seamless garment and person fusion for realistic try-on generation. Importantly, our proposed framework represents the first attempt to deploy the TPS module in a parser-free setting. Through extensive experiments and evaluations on the VITON and VITON-HD datasets, we demonstrate the effectiveness of our method in achieving realistic and visually appealing state-of-the-art virtual try-on results.
Citations: 0
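The abstract names an Intra-Field Consistency loss that pulls the TPS and flow offsets toward each other. One plausible reading is sketched below in PyTorch; the exact formulation, weighting, and masking are not given in the abstract, so this is an assumption-laden illustration rather than the paper's loss:

```python
import torch
import torch.nn.functional as F

def intra_field_consistency(flow_offsets, tps_offsets, valid_mask=None):
    """Penalize disagreement between flow- and TPS-predicted warp offsets.

    flow_offsets, tps_offsets: (B, 2, H, W) per-pixel displacement fields.
    valid_mask (optional): (B, 1, H, W), e.g., restricting the penalty to
    the garment region.
    """
    diff = F.smooth_l1_loss(flow_offsets, tps_offsets, reduction="none")
    if valid_mask is not None:
        diff = diff * valid_mask
    return diff.mean()
```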
Wiggle! Wiggle! Wiggle! Visualizing uncertainty in node attributes in straight-line node-link diagrams using animated wiggliness
IF 2.5 · Q4 · Computer Science
Computers & Graphics-UK · Pub Date: 2025-07-08 · DOI: 10.1016/j.cag.2025.104290 · Vol. 131, Article 104290
Henry Ehlers, Daniel Pahr, Sara di Bartolomeo, Velitchko Filipov, Hsiang-Yun Wu, Renata G. Raidou
Abstract: Uncertainty is common to most types of data, from meteorology to the biomedical sciences. Here, we are interested in the visualization of uncertainty within the context of multivariate graphs, specifically the visualization of uncertainty attached to node attributes. Many visual channels offer themselves up for the visualization of node attributes and their uncertainty. One controversial and relatively under-explored channel, however, is animation, despite its conceptual advantages. In this paper, we investigate node "wiggliness", i.e. uncertainty-dependent pseudo-random motion of nodes, as a potential new visual channel with which to communicate node attribute uncertainty. To study the effectiveness of wiggliness, we compare it against three other visual channels identified from a thorough review of the uncertainty visualization literature: node enclosure, node fuzziness, and node color saturation. In a larger-scale, mixed-method, Prolific-crowd-sourced online user study with 160 participants, we quantitatively and qualitatively compare these four uncertainty encodings across eight low-level graph analysis tasks that probe participants' abilities to parse the presented networks on both an attribute and a topological level. We ultimately conclude that, in contrast to previous findings, all four uncertainty encodings appear comparably useful. Wiggliness may be a suitable and effective visual channel with which to communicate node attribute uncertainty, at least for the kinds of data and tasks considered in our study.
Citations: 0
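The core mechanism, uncertainty-scaled pseudo-random node motion, is easy to prototype. A minimal sketch follows; the amplitude, frequency, and sinusoidal motion model are my assumptions, and the study's actual stimulus generation may differ:

```python
import numpy as np

def wiggle_offsets(uncertainty, t, base_amplitude=4.0, freq=2.0, seed=0):
    """Per-node positional offsets (in pixels) at animation time t.

    uncertainty: (N,) values in [0, 1]; higher uncertainty -> larger wiggle.
    Each node gets its own random phase so nodes do not move in lockstep.
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, size=(len(uncertainty), 2))
    amp = base_amplitude * uncertainty[:, None]
    return amp * np.sin(2.0 * np.pi * freq * t + phase)   # (N, 2) dx, dy

# Applied each frame before drawing:
#   drawn_positions = node_positions + wiggle_offsets(u, t)
```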
Social crowd simulation: Improving realism with social rules and gaze behavior
IF 2.5 · Q4 · Computer Science
Computers & Graphics-UK · Pub Date: 2025-07-07 · DOI: 10.1016/j.cag.2025.104286 · Vol. 131, Article 104286
Reiya Itatani, Nuria Pelechano
Abstract: Current crowd simulation models focus mostly on steering towards a goal while avoiding collisions based on the agent's direction of movement. This leads to robot-like simulations, since agents appear to always have their attention perfectly aligned with their direction of movement. In the real world, we observe that humans move in a crowd performing collision avoidance driven by attention, gaze, and non-verbal coordination with incoming traffic. In addition, humans exhibit different steering strategies based on whether they walk alone or in a group, and on whether they can look ahead and plan their best local movement or react more abruptly because their gaze diverts from their direction of movement. Human gaze can be driven by movement, but also by distractions such as being engaged in conversation with other people or using mobile phones. These human features are overlooked in crowd simulation, often leading to perfectly smooth local movements of individuals. Unfortunately, this lack of social behavior and variety in animations may be perceived as unrealistic when observing the results on a 2D display, and it may become even more apparent in immersive scenarios where the participant is at eye level with the virtual humans. This paper proposes a novel approach to enhance the realism of a rule-based crowd simulation model by incorporating social rules and gaze-driven attention with consistent animations. The ultimate goal is to make immersive virtual crowds more realistic. Our proposed method enhances existing crowd simulation frameworks by integrating social behavior models that affect both individual and collective dynamics, and drives gaze behavior to better simulate attention. We conducted validation user studies on both a 2D display and in immersive VR, and observed that applying these models at both the steering and animation levels significantly improves the realism of the crowd simulation. The 2D-display user study, based on video comparisons, showed that our model was perceived as more realistic and more consistent with social behaviors than traditional collision-avoidance approaches that used only locomotion or random animations. The immersive user study showed that participants effectively detected the social behaviors included in our model as intended. The results revealed significant differences in the participants' perceptions of the various behaviors exhibited by our social crowd model.
Citations: 0
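One way to operationalize gaze-driven attention in collision avoidance is to down-weight the avoidance response for neighbors outside the gaze cone, so a distracted agent (phone, conversation) reacts late. The sketch below is heavily simplified and illustrative; the cone width and linear falloff are my assumptions, and the paper's rule set is considerably richer:

```python
import numpy as np

def avoidance_weight(gaze_dir, to_neighbor, fov_deg=60.0):
    """Attention weight in [0, 1] for one neighbor.

    gaze_dir, to_neighbor: unit 2D vectors. Returns 1.0 when the neighbor
    is dead-center in the gaze cone, falling linearly to 0.0 at the cone
    boundary; neighbors outside the cone are effectively unseen.
    """
    cos_angle = float(np.dot(gaze_dir, to_neighbor))
    cos_fov = np.cos(np.deg2rad(fov_deg))
    return max(0.0, (cos_angle - cos_fov) / (1.0 - cos_fov))

# Usage: scale the usual avoidance force per neighbor, e.g.
#   force += avoidance_weight(gaze, d) * repulsion(neighbor)
```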
Optimization of multi-prior 3D human reconstruction methods based on single-view images
IF 2.5 · Q4 · Computer Science
Computers & Graphics-UK · Pub Date: 2025-07-01 · DOI: 10.1016/j.cag.2025.104287 · Vol. 131, Article 104287
Jing Zhao, Pei Zhang, Yuqi Xue, Shida Gao, Yong Tang
Abstract: The simultaneous recovery of a human body's 3D shape and surface color from a single image is a challenging task with numerous applications. To improve the accuracy of single-view 3D human reconstruction, we propose a comprehensive optimization method aimed at enhancing the quality of human shape and surface texture recovery. First, to address the issue of insufficient texture detail in invisible regions, we fuse rear-view normal features with SMPL-X human model features and front normal features as additional prior information. This fusion enhances texture reconstruction detail in invisible areas through local feature enhancement during normal map generation. Second, to tackle the problem of missing hands, we introduce the MediaPipe Hands keypoint detection algorithm. This algorithm optimizes the hand replacement process by accurately determining the visibility of the hands, ensuring high-quality replacement of the hands of the 3D human model. Finally, during the 3D human model refinement stage, we implement an outlier removal algorithm. This algorithm effectively eliminates fragments from the edges of the 3D human model and optimizes the frontal texture by employing color texture mapping, which projects image pixel color information onto the surface of the 3D human model. Experimental results demonstrate that our proposed method outperforms existing techniques in terms of 3D human model shape recovery and surface texture fidelity, providing a novel solution for advancing 3D human reconstruction technology.
Citations: 0
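The hand-visibility gate described above can be reproduced with the MediaPipe Hands API the abstract names. A minimal sketch follows; the parameter values and input path are illustrative defaults, not the paper's settings:

```python
import cv2
import mediapipe as mp

# MediaPipe Hands returns up to max_num_hands sets of 21 landmarks each.
hands = mp.solutions.hands.Hands(static_image_mode=True,
                                 max_num_hands=2,
                                 min_detection_confidence=0.5)

image = cv2.imread("person.jpg")                       # placeholder input
results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

# multi_hand_landmarks is None when no hands are detected; its presence
# can gate whether hand replacement is applied at all.
hands_visible = results.multi_hand_landmarks is not None
if hands_visible:
    for hand in results.multi_hand_landmarks:
        print(len(hand.landmark))   # 21 normalized (x, y, z) keypoints
```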
3DGStrands: Personalized 3D Gaussian splatting for realistic hair representation and animation
IF 2.5 · Q4 · Computer Science
Computers & Graphics-UK · Pub Date: 2025-06-25 · DOI: 10.1016/j.cag.2025.104261 · Vol. 131, Article 104261
Henar Dominguez-Elvira, Mario Alfonso-Arsuaga, Ana Barrueco-Garcia, Marc Comino-Trinidad
Abstract: We introduce a novel method for generating a personalized 3D Gaussian Splatting (3DGS) hair representation from an unorganized set of photographs. Our approach begins by leveraging an off-the-shelf method to estimate a strand-organized point cloud representation of the hair. This point cloud serves as the foundation for constructing a 3DGS model that accurately preserves the hair's geometric structure while visually fitting the appearance in the photographs. Our model seamlessly integrates with the standard 3DGS rendering pipeline, enabling efficient volumetric rendering of complex hairstyles. Furthermore, we demonstrate the versatility of our approach by applying the Material Point Method (MPM) to simulate realistic hair physics directly on the 3DGS model, achieving lifelike hair animation. To the best of our knowledge, this is the first method to simulate hair dynamics within a 3DGS model. This work paves the way for future research that can leverage the flexible nature of 3DGS to fit more complex hair material models or enable physics property estimation through dynamic tracking.
Citations: 0
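A natural way to seed Gaussians from a strand-organized point cloud is one anisotropic Gaussian per strand segment, elongated along the segment. The sketch below shows only that geometric idea; the radius value is an assumption, and the paper additionally optimizes appearance against the photographs on top of such an initialization:

```python
import numpy as np

def segments_to_gaussians(strand, radius=5e-4):
    """Convert one hair strand (polyline of 3D points, shape (P, 3)) into
    anisotropic Gaussian primitives: mean at each segment midpoint, long
    axis along the segment, thin in the two cross directions.
    """
    p0, p1 = strand[:-1], strand[1:]                 # (S, 3) segment ends
    means = 0.5 * (p0 + p1)
    axis = p1 - p0
    length = np.linalg.norm(axis, axis=1, keepdims=True)
    # Per-Gaussian scales: half the segment length along the strand,
    # a fixed small radius across it.
    scales = np.concatenate([0.5 * length,
                             np.full_like(length, radius),
                             np.full_like(length, radius)], axis=1)
    directions = axis / np.clip(length, 1e-8, None)  # unit long axes
    return means, directions, scales
```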
DisenStyler: Text-driven fast image stylization using content disentanglement and style adaptive matching
IF 2.5 · Q4 · Computer Science
Computers & Graphics-UK · Pub Date: 2025-06-21 · DOI: 10.1016/j.cag.2025.104275 · Vol. 130, Article 104275
Huilin Liu, Qiong Fang, Caiping Xiang, Gaoming Yang
Abstract: The emergence of the CLIP (Contrastive Language-Image Pre-training) model has drawn widespread attention to text-driven image style transfer. However, existing methods are prone to content distortion when generating images, and the transfer process is time-consuming. In this paper, we present DisenStyler, a novel text-driven fast image stylization method using content disentanglement and style adaptive matching. A Global-Local Feature Disentanglement and Fusion (GLFDF) module fuses content features extracted in the frequency and spatial domains, so that the detail information of the generated images is well preserved. Furthermore, a Style Adaptive Matching Module (SAMM) is designed to map text features into the image space and conduct style adaptive matching using the means and variances of text and image features. This not only significantly improves the speed of style transfer but also optimizes the local stylization effect of the generated images. Qualitative and quantitative experimental results show that DisenStyler better balances the content and style of the generated images while achieving fast image stylization.
Citations: 0
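Style matching "using the means and variances" is the classic adaptive instance normalization (AdaIN) operation. A PyTorch sketch of that operation in its standard image-feature form is below; how DisenStyler maps text features into this space is its own contribution and is not shown here:

```python
import torch

def adaptive_match(content_feat, style_feat, eps=1e-5):
    """AdaIN-style matching: shift/scale the channel-wise statistics of
    content features to those of the style features.

    content_feat, style_feat: (B, C, H, W) feature maps; the style map
    only contributes its per-channel mean and standard deviation.
    """
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    # Normalize content, then re-scale and re-center with style statistics.
    return s_std * (content_feat - c_mean) / c_std + s_mean
```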