Graphical Models: Latest Publications

High-performance Ellipsoidal Clipmaps
IF 1.7 · CAS Q4 · Computer Science
Graphical Models Pub Date: 2023-12-01 DOI: 10.1016/j.gmod.2023.101209
Aleksandar Dimitrijević, Dejan Rančić
{"title":"High-performance Ellipsoidal Clipmaps","authors":"Aleksandar Dimitrijević,&nbsp;Dejan Rančić","doi":"10.1016/j.gmod.2023.101209","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101209","url":null,"abstract":"<div><p>This paper presents performance improvements for Ellipsoid Clipmaps, an out-of-core planet-sized geodetically accurate terrain rendering algorithm. The performance improvements were achieved by eliminating unnecessarily dense levels, more accurate block culling in the geographic coordinate system, and more efficient rendering methods. The elimination of unnecessarily dense levels is the result of analyzing and determining the optimal relative height of the viewer with respect to the most detailed level, resulting in the most consistent size of triangles across all visible levels. The proposed method for estimating the visibility of blocks based on view orientation allows rapid block-level view frustum culling performed in data space before visualization and spatial transformation of blocks. The use of a modern geometry pipeline through task and mesh shaders forced the handling of extremely fine granularity of blocks, but also shifted a significant part of the block culling process from CPU to the GPU. The approach described achieves high throughput and enables geodetically accurate rendering of the terrain based on the WGS 84 reference ellipsoid at very high resolution and in real time, with tens of millions of triangles with an average area of about 0.5 pix<span><math><msup><mrow></mrow><mrow><mn>2</mn></mrow></msup></math></span> on a 1080p screen on mid-range graphics cards.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"130 ","pages":"Article 101209"},"PeriodicalIF":1.7,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1524070323000395/pdfft?md5=26122c390b83d408f64d205c80bb4675&pid=1-s2.0-S1524070323000395-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138466486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
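
To make the level-elimination idea concrete, the sketch below (not the authors' code; the function name, all parameters, and the pixel-size criterion are assumptions) picks the finest clipmap level worth rendering for a given viewer height, coarsening until one grid cell projects to at least a target number of pixels:

```python
import math

def finest_visible_level(viewer_height_m, base_cell_size_m, screen_height_px,
                         vertical_fov_rad=math.radians(60.0),
                         target_triangle_px=1.0):
    """Pick the finest clipmap level worth rendering at a given viewer height.

    Hypothetical parameterization: level 0 is the densest level with grid
    spacing base_cell_size_m; each coarser level doubles the spacing. A level
    is "unnecessarily dense" when its cells would project to fewer than
    target_triangle_px pixels directly below the viewer.
    """
    # Approximate ground size of one screen pixel directly below the viewer.
    meters_per_pixel = (2.0 * viewer_height_m
                        * math.tan(vertical_fov_rad / 2.0) / screen_height_px)
    level, cell = 0, base_cell_size_m
    # Coarsen until a cell projects to at least the target pixel size.
    while cell / meters_per_pixel < target_triangle_px:
        level += 1
        cell *= 2.0
    return level

print(finest_visible_level(2000.0, 1.0, 1080))  # skips the two densest levels
```

With these assumed numbers, a viewer at 2 km skips the two densest levels, mirroring the abstract's point that overly dense levels only contribute sub-pixel triangles.
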
Modeling multi-style portrait relief from a single photograph
IF 1.7 · CAS Q4 · Computer Science
Graphical Models Pub Date: 2023-11-28 DOI: 10.1016/j.gmod.2023.101210
Yu-Wei Zhang, Hongguang Yang, Ping Luo, Zhi Li, Hui Liu, Zhongping Ji, Caiming Zhang
{"title":"Modeling multi-style portrait relief from a single photograph","authors":"Yu-Wei Zhang ,&nbsp;Hongguang Yang ,&nbsp;Ping Luo ,&nbsp;Zhi Li ,&nbsp;Hui Liu ,&nbsp;Zhongping Ji ,&nbsp;Caiming Zhang","doi":"10.1016/j.gmod.2023.101210","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101210","url":null,"abstract":"<div><p>This paper aims at extending the method of Zhang et al. (2023) to produce not only portrait bas-reliefs from single photographs, but also high-depth reliefs with reasonable depth ordering. We cast this task as a problem of style-aware photo-to-depth translation, where the input is a photograph conditioned by a style vector and the output is a portrait relief with desired depth style. To construct ground-truth data for network training, we first propose an optimization-based method to synthesize high-depth reliefs from 3D portraits. Then, we train a normal-to-depth network to learn the mapping from normal maps to relief depths. After that, we use the trained network to generate high-depth relief samples using the provided normal maps from Zhang et al. (2023). As each normal map has pixel-wise photograph, we are able to establish correspondences between photographs and high-depth reliefs. By taking the bas-reliefs of Zhang et al. (2023), the new high-depth reliefs and their mixtures as target ground-truths, we finally train a encoder-to-decoder network to achieve style-aware relief modeling. Specially, the network is based on a U-shaped architecture, consisting of Swin Transformer blocks to process hierarchical deep features. Extensive experiments have demonstrated the effectiveness of the proposed method. Comparisons with previous works have verified its flexibility and state-of-the-art performance.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"130 ","pages":"Article 101210"},"PeriodicalIF":1.7,"publicationDate":"2023-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1524070323000401/pdfft?md5=de53c7cacd318b65effd57ea40c70f18&pid=1-s2.0-S1524070323000401-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138454034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
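
The paper conditions the translation network on a style vector. A minimal, hypothetical sketch of one common way such conditioning can work (broadcast-and-concatenate with a 1x1 convolution; this is not the paper's Swin-based design, and all names and sizes are assumptions):

```python
import torch
import torch.nn as nn

class StyleConditionedBlock(nn.Module):
    """Toy stand-in for style-aware conditioning: broadcast a style vector
    over the spatial grid and fuse it with image features via a 1x1 conv,
    so the same photo features can be steered toward different depth styles."""
    def __init__(self, feat_ch, style_dim):
        super().__init__()
        self.fuse = nn.Conv2d(feat_ch + style_dim, feat_ch, kernel_size=1)

    def forward(self, feat, style):            # feat: (B,C,H,W), style: (B,S)
        b, _, h, w = feat.shape
        s = style[:, :, None, None].expand(b, style.size(1), h, w)
        return torch.relu(self.fuse(torch.cat([feat, s], dim=1)))

block = StyleConditionedBlock(feat_ch=64, style_dim=8)
out = block(torch.randn(2, 64, 32, 32), torch.randn(2, 8))  # -> (2, 64, 32, 32)
```
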
A decomposition scheme for continuous Level of Detail, streaming and lossy compression of unordered point clouds
IF 1.7 · CAS Q4 · Computer Science
Graphical Models Pub Date: 2023-11-08 DOI: 10.1016/j.gmod.2023.101208
Jan Martens, Jörg Blankenbach
{"title":"A decomposition scheme for continuous Level of Detail, streaming and lossy compression of unordered point clouds","authors":"Jan Martens,&nbsp;Jörg Blankenbach","doi":"10.1016/j.gmod.2023.101208","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101208","url":null,"abstract":"<div><p>Modern laser scanners, depth sensor devices and Dense Image Matching techniques allow for capturing of extensive point cloud datasets. While capturing has become more user-friendly, the size of registered point clouds results in large datasets which pose challenges for processing, storage and visualization. This paper presents a decomposition scheme using oriented KD trees and the wavelet transform for unordered point clouds. Taking inspiration from image pyramids, the decomposition scheme comes with a Level of Detail representation where higher-levels are progressively reconstructed from lower ones, thus making it suitable for streaming and continuous Level of Detail. Furthermore, the decomposed representation allows common compression techniques to achieve higher compression ratios by modifying the underlying frequency data at the cost of geometric accuracy and therefore allows for flexible lossy compression. After introducing this novel decomposition scheme, results are discussed to show how it deals with data captured from different sources.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"130 ","pages":"Article 101208"},"PeriodicalIF":1.7,"publicationDate":"2023-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1524070323000383/pdfft?md5=acb2ab838184d4b7e97e6052e64a6ea6&pid=1-s2.0-S1524070323000383-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92047097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
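
A toy sketch of the wavelet side of this scheme, under loudly stated assumptions: a single Haar level, with point ordering along the widest axis standing in for the paper's oriented KD tree. Averages form the coarser Level of Detail; quantizing or dropping detail coefficients gives lossy compression, and sending averages before details gives progressive streaming:

```python
import numpy as np

def haar_level(points):
    """One analysis level of a Haar-style decomposition of a point set.

    Points are ordered along their widest axis (a crude stand-in for an
    oriented KD tree), then consecutive pairs are merged into averages
    (the coarser LoD) and differences (detail coefficients)."""
    axis = np.argmax(points.max(axis=0) - points.min(axis=0))
    p = points[np.argsort(points[:, axis])]
    if len(p) % 2:                        # keep an odd leftover point as-is
        p, leftover = p[:-1], p[-1:]
    else:
        leftover = np.empty((0, points.shape[1]))
    avg = (p[0::2] + p[1::2]) / 2.0       # coarse level (sent first)
    det = (p[0::2] - p[1::2]) / 2.0       # details (sent later / quantized)
    return np.vstack([avg, leftover]), det

coarse, detail = haar_level(np.random.rand(1001, 3))
print(coarse.shape, detail.shape)         # (501, 3) (500, 3)
```
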
Vertex position estimation with spatial–temporal transformer for 3D human reconstruction
IF 1.7 · CAS Q4 · Computer Science
Graphical Models Pub Date: 2023-10-26 DOI: 10.1016/j.gmod.2023.101207
Xiangjun Zhang, Yinglin Zheng, Wenjin Deng, Qifeng Dai, Yuxin Lin, Wangzheng Shi, Ming Zeng
{"title":"Vertex position estimation with spatial–temporal transformer for 3D human reconstruction","authors":"Xiangjun Zhang,&nbsp;Yinglin Zheng,&nbsp;Wenjin Deng,&nbsp;Qifeng Dai,&nbsp;Yuxin Lin,&nbsp;Wangzheng Shi,&nbsp;Ming Zeng","doi":"10.1016/j.gmod.2023.101207","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101207","url":null,"abstract":"<div><p>Reconstructing 3D human pose and body shape from monocular images or videos is a fundamental task for comprehending human dynamics. Frame-based methods can be broadly categorized into two fashions: those regressing parametric model parameters (e.g., SMPL) and those exploring alternative representations (e.g., volumetric shapes, 3D coordinates). Non-parametric representations have demonstrated superior performance due to their enhanced flexibility. However, when applied to video data, these non-parametric frame-based methods tend to generate inconsistent and unsmooth results. To this end, we present a novel approach that directly regresses the 3D coordinates of the mesh vertices and body joints with a spatial–temporal Transformer. In our method, we introduce a SpatioTemporal Learning Block (STLB) with Spatial Learning Module (SLM) and Temporal Learning Module (TLM), which leverages spatial and temporal information to model interactions at a finer granularity, specifically at the body token level. Our method outperforms previous state-of-the-art approaches on Human3.6M and 3DPW benchmark datasets.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"130 ","pages":"Article 101207"},"PeriodicalIF":1.7,"publicationDate":"2023-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1524070323000371/pdfft?md5=a920877b3ee3210b23f7a6444d151f50&pid=1-s2.0-S1524070323000371-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92047096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
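
A rough sketch of the spatial-then-temporal attention pattern the abstract describes: self-attention across body tokens within each frame, then across frames for each token. The dimensions, ordering, and residual wiring here are assumptions, not the paper's STLB:

```python
import torch
import torch.nn as nn

class STBlock(nn.Module):
    """Spatial attention over the N body tokens of each frame, followed by
    temporal attention over the T frames of each token."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                       # x: (B, T, N, C) token grid
        b, t, n, c = x.shape
        s = x.reshape(b * t, n, c)              # attend across body tokens
        s = s + self.spatial(s, s, s)[0]
        u = s.reshape(b, t, n, c).permute(0, 2, 1, 3).reshape(b * n, t, c)
        u = u + self.temporal(u, u, u)[0]       # attend across frames
        return u.reshape(b, n, t, c).permute(0, 2, 1, 3)

y = STBlock(dim=64)(torch.randn(2, 8, 24, 64))  # -> (2, 8, 24, 64)
```
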
A systematic approach for enhancement of homogeneous background images using structural information
IF 1.7 · CAS Q4 · Computer Science
Graphical Models Pub Date: 2023-10-25 DOI: 10.1016/j.gmod.2023.101206
D. Vijayalakshmi, Malaya Kumar Nath
{"title":"A systematic approach for enhancement of homogeneous background images using structural information","authors":"D. Vijayalakshmi ,&nbsp;Malaya Kumar Nath","doi":"10.1016/j.gmod.2023.101206","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101206","url":null,"abstract":"<div><p>Image enhancement is an indispensable pre-processing step for several image processing applications. Mainly, histogram equalization is one of the widespread techniques used by various researchers to improve the image quality by expanding the pixel values to fill the entire dynamic grayscale. It results in the visual artifact, structural information loss near edges due to the information loss (due to many-to-one mapping), and alteration in average luminance to a higher value. This paper proposes an enhancement algorithm based on structural information for homogeneous background images. The intensities are divided into two segments using the median value to preserve the average luminance. Unlike traditional techniques, this algorithm incorporates the spatial locations in the equalization process instead of the number of intensity values occurrences. The occurrences of each intensity concerning their spatial locations are combined using Rènyi entropy to enumerate a discrete function. An adaptive clipping limit is applied to the discrete function to control the enhancement rate. Then histogram equalization is performed on each segment separately, and the equalized segments are integrated to produce an enhanced image. The algorithm’s effectiveness is validated by evaluating the proposed method on CEED, CSIQ, LOL, and TID2013 databases. Experimental results reveal that the proposed method improves the contrast while preserving structural information, detail information, and average luminance. They are quantified by the high value of contrast improvement index, structural similarity index, and discrete entropy, and low value of average mean brightness error values of the proposed method when compared with the methods available in the literature, including deep learning architectures.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"130 ","pages":"Article 101206"},"PeriodicalIF":1.7,"publicationDate":"2023-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S152407032300036X/pdfft?md5=66c749d2624c0d77acd46a4f2037626a&pid=1-s2.0-S152407032300036X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92047095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
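
A simplified sketch of this pipeline's flavor: median split to preserve mean brightness, an adaptive clipping limit, and per-segment equalization. The paper's spatial weighting via Rényi entropy is only shown as a helper here, and the clip rule (a multiple of the mean bin count) is an assumption:

```python
import numpy as np

def renyi_entropy(p, alpha=2.0):
    """Rényi entropy of a discrete distribution (alpha != 1); the paper uses
    this to combine intensity counts with their spatial locations."""
    p = p[p > 0]
    return np.log((p ** alpha).sum()) / (1.0 - alpha)

def clipped_equalize(img, clip_ratio=2.0):
    """Median-split, clip each sub-histogram, equalize segments separately."""
    med = int(np.median(img))
    out = img.copy()
    for lo, hi in [(0, med), (med + 1, 255)]:
        mask = (img >= lo) & (img <= hi)
        if hi <= lo or not mask.any():
            continue
        hist = np.bincount(img[mask], minlength=256)[lo:hi + 1].astype(float)
        hist = np.minimum(hist, clip_ratio * hist.mean())  # adaptive clipping
        cdf = np.cumsum(hist) / hist.sum()
        lut = (lo + cdf * (hi - lo)).astype(img.dtype)     # per-segment remap
        out[mask] = lut[img[mask] - lo]
    return out

print(renyi_entropy(np.ones(4) / 4))  # log 4 for a uniform distribution
out = clipped_equalize(np.random.randint(0, 256, (64, 64), dtype=np.uint8))
```
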
Jrender: An efficient differentiable rendering library based on Jittor
IF 1.7 · CAS Q4 · Computer Science
Graphical Models Pub Date: 2023-10-18 DOI: 10.1016/j.gmod.2023.101202
Hanggao Xin, Chenzhong Xiang, Wenyang Zhou, Dun Liang
{"title":"Jrender: An efficient differentiable rendering library based on Jittor","authors":"Hanggao Xin,&nbsp;Chenzhong Xiang,&nbsp;Wenyang Zhou,&nbsp;Dun Liang","doi":"10.1016/j.gmod.2023.101202","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101202","url":null,"abstract":"<div><p>Differentiable rendering has been proven as a powerful tool to bridge 2D images and 3D models. With the aid of differentiable rendering, tasks in computer vision and computer graphics could be solved more elegantly and accurately. To address challenges in the implementations of differentiable rendering methods, we present an efficient and modular differentiable rendering library named Jrender based on Jittor. Jrender supports surface rendering for 3D meshes and volume rendering for 3D volumes. Compared with previous differentiable renderers, Jrender exhibits a significant improvement in both performance and rendering quality. Due to the modular design, various rendering effects such as PBR materials shading, ambient occlusions, soft shadows, global illumination, and subsurface scattering could be easily supported in Jrender, which are not available in other differentiable rendering libraries. To validate our library, we integrate Jrender into applications such as 3D object reconstruction and NeRF, which show that our implementations could achieve the same quality with higher performance.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"130 ","pages":"Article 101202"},"PeriodicalIF":1.7,"publicationDate":"2023-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49889744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
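
The general pattern a differentiable renderer enables is analysis by synthesis: render, compare to a target image, and backpropagate the 2D loss into 3D scene parameters. The sketch below uses a toy stand-in function; it is not Jrender's API, and any differentiable map from scene parameters to pixels works the same way:

```python
import torch

def render(params):
    """Toy stand-in for a differentiable renderer (NOT Jrender's API):
    a 2-"pixel" image whose values depend differentiably on the params."""
    return torch.stack([params.sum(), (params ** 2).sum()])

target = torch.tensor([1.0, 0.5])              # observed image
params = torch.zeros(3, requires_grad=True)    # e.g. vertex positions
opt = torch.optim.Adam([params], lr=0.1)
for _ in range(200):                           # inverse-rendering loop
    opt.zero_grad()
    loss = ((render(params) - target) ** 2).mean()  # photometric loss
    loss.backward()                            # gradients reach the 3D params
    opt.step()
```
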
Packing problems on generalised regular grid: Levels of abstraction using integer linear programming
IF 1.7 · CAS Q4 · Computer Science
Graphical Models Pub Date: 2023-10-07 DOI: 10.1016/j.gmod.2023.101205
Hao Hua, Benjamin Dillenburger
{"title":"Packing problems on generalised regular grid: Levels of abstraction using integer linear programming","authors":"Hao Hua ,&nbsp;Benjamin Dillenburger","doi":"10.1016/j.gmod.2023.101205","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101205","url":null,"abstract":"<div><p>Packing a designated set of shapes on a regular grid is an important class of operations research problems that has been intensively studied for more than six decades. Representing a <span><math><mi>d</mi></math></span>-dimensional discrete grid as <span><math><msup><mrow><mi>Z</mi></mrow><mrow><mi>d</mi></mrow></msup></math></span>, we formalise the generalised regular grid (GRG) as a surjective function from <span><math><msup><mrow><mi>Z</mi></mrow><mrow><mi>d</mi></mrow></msup></math></span> to a geometric tessellation in a physical space, for example, the cube coordinates of a hexagonal grid or a quasilattice. This study employs 0-1 integer linear programming (ILP) to formulate the polyomino tiling problem with adjacency constraints. Rotation &amp; reflection invariance in adjacency are considered. We separate the formal ILP from the topology &amp; geometry of various grids, such as Ammann-Beenker tiling, Penrose tiling and periodic hypercube. Based on cutting-edge solvers, we reveal an intuitive correspondence between the integer program (a pattern of algebraic rules) and the computer codes. Models of packing problems in the GRG have wide applications in production system, facility layout planning, and architectural design. Two applications in planning high-rise residential apartments are illustrated.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"130 ","pages":"Article 101205"},"PeriodicalIF":1.7,"publicationDate":"2023-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49889742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
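
A tiny instance of the 0-1 ILP pattern, written with the PuLP modeling library: tile a 4x4 grid (the simplest generalised regular grid, Z^2) with dominoes, maximising covered cells. The formulation shape, not this specific instance, is what carries over to Penrose or Ammann-Beenker grids, where only the enumeration of placements changes:

```python
import pulp

W = H = 4
cells = [(i, j) for i in range(W) for j in range(H)]
placements = []                                    # each placement covers 2 cells
for i, j in cells:
    if i + 1 < W: placements.append(((i, j), (i + 1, j)))
    if j + 1 < H: placements.append(((i, j), (i, j + 1)))

prob = pulp.LpProblem("domino_packing", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", range(len(placements)), cat="Binary")
prob += pulp.lpSum(2 * x[k] for k in range(len(placements)))  # covered cells
for c in cells:                                    # each cell used at most once
    prob += pulp.lpSum(x[k] for k, p in enumerate(placements) if c in p) <= 1
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(int(pulp.value(prob.objective)))             # 16: a perfect tiling exists
```
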
RFMNet: Robust Deep Functional Maps for unsupervised non-rigid shape correspondence
IF 1.7 · CAS Q4 · Computer Science
Graphical Models Pub Date: 2023-10-01 DOI: 10.1016/j.gmod.2023.101189
Ling Hu, Qinsong Li, Shengjun Liu, Dong-Ming Yan, Haojun Xu, Xinru Liu
{"title":"RFMNet: Robust Deep Functional Maps for unsupervised non-rigid shape correspondence","authors":"Ling Hu ,&nbsp;Qinsong Li ,&nbsp;Shengjun Liu ,&nbsp;Dong-Ming Yan ,&nbsp;Haojun Xu ,&nbsp;Xinru Liu","doi":"10.1016/j.gmod.2023.101189","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101189","url":null,"abstract":"<div><p>In traditional deep functional maps for non-rigid shape correspondence, estimating a functional map including high-frequency information requires enough linearly independent features via the least square method, which is prone to be violated in practice, especially at an early stage of training, or costly post-processing, e.g. ZoomOut. In this paper, we propose a novel method called RFMNet (<strong>R</strong>obust Deep <strong>F</strong>unctional <strong>M</strong>ap <strong>Net</strong>works), which jointly considers training stability and more geometric shape features than previous works. We directly first produce a pointwise map by resorting to optimal transport and then convert it to an initial functional map. Such a mechanism mitigates the requirements for the descriptor and avoids the training instabilities resulting from the least square solver. Benefitting from the novel strategy, we successfully integrate a state-of-the-art geometric regularization for further optimizing the functional map, which substantially filters the initial functional map. We show our novel computing functional map module brings more stable training even under encoding the functional map with high-frequency information and faster convergence speed. Considering the pointwise and functional maps, an unsupervised loss is presented for penalizing the correspondence distortion of Delta functions between shapes. To catch discretization-resistant and orientation-aware shape features with our network, we utilize DiffusionNet as a feature extractor. Experimental results demonstrate our apparent superiority in correspondence quality and generalization across various shape discretizations and different datasets compared to the state-of-the-art learning methods.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"129 ","pages":"Article 101189"},"PeriodicalIF":1.7,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49889736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
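
The pointwise-to-functional-map conversion the abstract mentions follows the standard least-squares identity C = pinv(Phi2) @ (Pi @ Phi1). A minimal sketch, with the matrix conventions stated (and assumed) in the docstring:

```python
import numpy as np

def fmap_from_pointwise(Pi, Phi1, Phi2):
    """Convert a (possibly soft) pointwise map to a functional map.

    Pi:   (n2, n1) row-stochastic matrix carrying functions on shape 1 to
          shape 2 (e.g. from entropic optimal transport, as in the paper)
    Phi1: (n1, k) Laplace-Beltrami eigenbasis of shape 1
    Phi2: (n2, k) eigenbasis of shape 2
    Returns C (k, k) such that C @ coeffs_1 ≈ coeffs_2 in the least-squares
    sense: C = Phi2^+ (Pi Phi1).
    """
    return np.linalg.pinv(Phi2) @ (Pi @ Phi1)

# sanity check: identical shapes, identity correspondence -> C ≈ I
Phi = np.linalg.qr(np.random.randn(100, 6))[0]
C = fmap_from_pointwise(np.eye(100), Phi, Phi)
assert np.allclose(C, np.eye(6), atol=1e-8)
```
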
MixNet: Mix different networks for learning 3D implicit representations
IF 1.7 · CAS Q4 · Computer Science
Graphical Models Pub Date: 2023-10-01 DOI: 10.1016/j.gmod.2023.101190
Bowen Lyu, Li-Yong Shen, Chun-Ming Yuan
{"title":"MixNet: Mix different networks for learning 3D implicit representations","authors":"Bowen Lyu ,&nbsp;Li-Yong Shen ,&nbsp;Chun-Ming Yuan","doi":"10.1016/j.gmod.2023.101190","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101190","url":null,"abstract":"<div><p>We introduce a neural network, MixNet, for learning implicit representations of 3D subtle models with large smooth areas and exact shape details in the form of interpolation of two different implicit functions. Our network takes a point cloud as input and uses conventional MLP networks and SIREN networks to predict different implicit fields. We use a learnable interpolation function to combine the implicit values of these two networks and achieve the respective advantages of them. The network is self-supervised with only reconstruction loss, leading to faithful 3D reconstructions with smooth planes, correct details, and plausible spatial partition without any ground-truth segmentation. We evaluate our method on ABC, the largest and most diverse CAD dataset, and some typical shapes to test in terms of geometric correctness and surface smoothness to demonstrate superiority over current alternatives suitable for shape reconstruction.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"129 ","pages":"Article 101190"},"PeriodicalIF":1.7,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49890152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
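
A minimal sketch of the interpolation idea: a ReLU MLP field for smooth regions blended with a SIREN field for sharp details through a learned, spatially varying weight. The layer widths, the sigmoid gate, and the SIREN frequency 30 are assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class Sine(nn.Module):
    def forward(self, x):                # SIREN-style periodic activation
        return torch.sin(30.0 * x)

def field(act, out_dim=1):               # small coordinate network
    return nn.Sequential(nn.Linear(3, 64), act, nn.Linear(64, 64), act,
                         nn.Linear(64, out_dim))

class MixedImplicit(nn.Module):
    """Blend an MLP implicit field with a SIREN implicit field through a
    learned interpolation weight alpha(x) in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.f_mlp = field(nn.ReLU())     # favours large smooth areas
        self.f_siren = field(Sine())      # favours exact shape details
        self.alpha = field(nn.ReLU())     # learnable interpolation function

    def forward(self, xyz):               # xyz: (N, 3) query points
        a = torch.sigmoid(self.alpha(xyz))
        return a * self.f_mlp(xyz) + (1 - a) * self.f_siren(xyz)

vals = MixedImplicit()(torch.rand(128, 3))   # (128, 1) implicit values
```
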
Fast progressive polygonal approximations for online strokes
IF 1.7 · CAS Q4 · Computer Science
Graphical Models Pub Date: 2023-10-01 DOI: 10.1016/j.gmod.2023.101200
Mohammad Tanvir Parvez
{"title":"Fast progressive polygonal approximations for online strokes","authors":"Mohammad Tanvir Parvez","doi":"10.1016/j.gmod.2023.101200","DOIUrl":"10.1016/j.gmod.2023.101200","url":null,"abstract":"<div><p>This paper presents a fast and progressive polygonal approximation algorithm for online strokes. A stroke is defined as a sequence of points between a pen-down and a pen-up. The proposed method generates polygonal approximations progressively as the user inputs the stroke. The proposed algorithm is suitable for real time shape modeling and retrieval. The number of operations used in the proposed algorithm is bounded by O(<em>n</em>), where <em>n</em> is the number of points in a stroke. Detailed experimental results show that the proposed method is not only fast, but also accurate enough compared to other reported algorithms.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"129 ","pages":"Article 101200"},"PeriodicalIF":1.7,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43203570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
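
A one-pass sketch in the same spirit, using a direction-change criterion as a stand-in for the paper's vertex test (which is not reproduced here). Each point costs constant work, consistent with the O(n) bound:

```python
import math

def progressive_approx(points, angle_tol=math.radians(15)):
    """Single-pass polygonal approximation: emit a vertex whenever the
    stroke direction turns by more than angle_tol since the last vertex.
    Suitable for strokes arriving point-by-point in real time."""
    if len(points) < 2:
        return list(points)
    verts = [points[0]]
    ref = None                           # direction of the current segment
    for prev, cur in zip(points, points[1:]):
        d = math.atan2(cur[1] - prev[1], cur[0] - prev[0])
        if ref is None:
            ref = d
        elif abs((d - ref + math.pi) % (2 * math.pi) - math.pi) > angle_tol:
            verts.append(prev)           # direction broke: commit a vertex
            ref = d
    verts.append(points[-1])
    return verts

print(progressive_approx([(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]))
# [(0, 0), (2, 0), (2, 2)] — the corner is detected as a vertex
```
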