Graphical Models | Volume 135, Article 101223 | Pub Date: 2024-07-30 | DOI: 10.1016/j.gmod.2024.101223
High-fidelity instructional fashion image editing
Yinglin Zheng, Ting Zhang, Jianmin Bao, Dong Chen, Ming Zeng

Abstract: Instructional image editing has received a surge of attention recently. In this work, we address the challenging problem of instructional image editing within the fashion realm, a domain with significant potential demand in both commercial and personal contexts. This domain presents heightened challenges owing to its stringent quality requirements: it necessitates not only the creation of vivid details in alignment with the instruction, but also the preservation of precise attributes unrelated to the text guidance. Naive extensions of existing image editing methods produce noticeable artifacts. To achieve high-fidelity fashion editing, we propose a novel framework that leverages the generative prior of a pre-trained human generator and performs edits in its latent space. In addition, we introduce a novel CLIP-based loss to better align the generated target with the instruction. Extensive experiments demonstrate that our approach outperforms prior work, including GAN-based and diffusion-based editing, by a large margin, showing impressive visual quality.

Graphical Models | Volume 134, Article 101222 | Pub Date: 2024-07-03 | DOI: 10.1016/j.gmod.2024.101222
Make static person walk again via separating pose action from shape
Yongwei Nie, Meihua Zhao, Qing Zhang, Ping Li, Jian Zhu, Hongmin Cai

Abstract: This paper addresses the problem of animating a person in a static image, the core task of which is to infer future poses for the person. Existing approaches predict future poses in 2D space and suffer from the entanglement of pose action and shape. We propose a method that generates actions in 3D space and then transfers them to the 2D person. We first lift the 2D pose of the person to a 3D skeleton, then propose a 3D action synthesis network that predicts future skeletons, and finally devise a self-supervised action transfer network that transfers the actions of the 3D skeletons to the 2D person. Actions generated in 3D space look plausible and vivid. More importantly, self-supervised action transfer allows our method to be trained only on a 3D MoCap dataset while still being able to process images from different domains. Experiments on three image datasets validate the effectiveness of our method.

Graphical Models | Volume 134, Article 101221 | Pub Date: 2024-06-21 | DOI: 10.1016/j.gmod.2024.101221
Bilateral transformer 3D planar recovery
Fei Ren, Chunhua Liao, Zhina Xie

Abstract: In recent years, deep learning based methods for single-image 3D planar recovery have made significant progress, but most research has focused on overall plane segmentation performance rather than the accuracy of small-scale plane segmentation. To address the loss of small-object features during feature extraction, we propose a 3D planar recovery method based on a bilateral transformer. The two network branches capture rich small-object features by sampling at different scales and are used to detect planar and non-planar regions, respectively. In addition, a variational information loss is used to share the parameters of the bilateral network, which enforces consistency between the outputs of the two branches and alleviates the loss of small-object features. The method is verified on the ScanNet and NYU v2 datasets, where it outperforms currently popular algorithms on a variety of evaluation metrics, demonstrating its effectiveness for 3D planar recovery.

Graphical Models | Volume 133, Article 101219 | Pub Date: 2024-06-01 | DOI: 10.1016/j.gmod.2024.101219
Persistent geometry-topology descriptor for porous structure retrieval based on Heat Kernel Signature
Peisheng Zhuo, Zitong He, Hongwei Lin

Abstract: Porous structures are essential in a variety of fields such as materials science and chemistry. To retrieve porous materials efficiently, novel descriptors are required to quantify their geometric and topological features. In this paper, we present a novel framework to create a descriptor that incorporates both topological and geometric information of a porous structure. To capture geometric information, we keep track of the birth time and death time of the persistent features of a real-valued function on the surface that evolves with a parameter. We then generate the corresponding persistent feature diagram (Dgm_PF) and convert it into a vector called the persistence feature descriptor (PFD). To extract topological information, we sample points from the pore surface and compute the corresponding persistence diagram, which is then transformed into Persistence B-Spline Grids (PBSG). Our proposed descriptor, the persistent geometry-topology descriptor (PGTD), is obtained by concatenating the PFD with the PBSG. In our experiments, we use the heat kernel signature (HKS) as the real-valued function to compute the descriptor. We test the method on a synthetic porous dataset and a zeolite dataset and find that it is competitive compared...
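
Both halves of the descriptor reduce a persistence diagram, a set of (birth, death) pairs, to a fixed-length vector before concatenation. The sketch below is a deliberately simple stand-in for that vectorization step, binning diagram points over (birth, persistence) with NumPy; the paper's PFD and PBSG constructions are more elaborate than this.

```python
import numpy as np

def vectorize_diagram(diagram, bins=8, value_range=((0.0, 1.0), (0.0, 1.0))):
    """diagram: (N, 2) array of (birth, death) pairs.
    Bins the points in (birth, persistence) space into a flat grid vector."""
    birth = diagram[:, 0]
    persistence = diagram[:, 1] - diagram[:, 0]
    hist, _, _ = np.histogram2d(birth, persistence, bins=bins, range=value_range)
    return hist.ravel() / max(len(diagram), 1)

# Concatenate a geometry vector and a topology vector into one descriptor,
# mirroring the idea of joining PFD and PBSG (names taken from the abstract).
geom_dgm = np.array([[0.1, 0.4], [0.2, 0.9]])
topo_dgm = np.array([[0.0, 0.3], [0.5, 0.7], [0.1, 0.2]])
descriptor = np.concatenate([vectorize_diagram(geom_dgm), vectorize_diagram(topo_dgm)])
```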

Graphical Models | Volume 133, Article 101220 | Pub Date: 2024-05-25 | DOI: 10.1016/j.gmod.2024.101220
An exact algorithm for two-dimensional cutting problems based on multi-level pattern
Weiping Pan

Abstract: A multi-level pattern is proposed for unconstrained two-dimensional cutting problems of rectangular items, and an exact generation algorithm is constructed. The arrangement of rectangular items of the same type in multiple rows and columns is referred to as a 0-level pattern. An n-level pattern is the horizontal or vertical combination of an (n-1)-level pattern with a pattern whose level does not exceed n-1. The generation algorithm for multi-level patterns is built on dynamic programming, and multi-level patterns of various sizes are generated in order of increasing level. Normal sizes are used to reduce unnecessary computation in the algorithm. Three sets of benchmark instances and one set of random production instances from the literature are used for comparison. Compared to the exact algorithm in the literature, the results in this paper are equivalent but the computation time is shorter. Compared to heuristic algorithms in the literature, the results in this paper are better, and the computation time remains short enough for practical applications.

Graphical Models | Volume 133, Article 101218 | Pub Date: 2024-04-11 | DOI: 10.1016/j.gmod.2024.101218
Rod-Bonded Discrete Element Method
Kangrui Zhang, Han Yan, Jia-Ming Lu, Bo Ren

Abstract: The Bonded Discrete Element Method (BDEM) has attracted interest in the graphics community in recent years because of its good performance in fracture simulations. However, current explicit BDEM usually has to run under very small time steps to avoid numerical instability. We propose a new BDEM, namely Rod-BDEM (RBDEM), which uses Cosserat energy and yields integrable forces and torques. We further derive a novel Cosserat rod discretization method to effectively represent the three-dimensional topological connections between discrete elements. Then, a complete implicit BDEM system integrating appropriate fracture and contact models is constructed using the implicit Euler integration scheme. Our method allows high Young's modulus and larger time steps in elastic deformation, breaking, cracking, and impacting, achieving up to an 8x speedup of the total simulation.

Graphical Models | Volume 133, Article 101217 | Pub Date: 2024-03-20 | DOI: 10.1016/j.gmod.2024.101217
DINA: Deformable INteraction Analogy
Zeyu Huang, Sisi Dai, Kai Xu, Hao Zhang, Hui Huang, Ruizhen Hu

Abstract: We introduce deformable interaction analogy (DINA) as a means to generate close interactions between two 3D objects. Given a single demo interaction between an anchor object (e.g., a hand) and a source object (e.g., a mug grasped by the hand), our goal is to generate many analogous 3D interactions between the same anchor object and various new target objects (e.g., a toy airplane), where the anchor object is allowed to be rigid or deformable. To this end, we optimize the pose or shape of the anchor object to adapt it to a new target object to mimic the demo. To facilitate the optimization, we advocate using the interaction interface (ITF), defined by a set of points sampled on the anchor object, as a descriptive and robust interaction representation that is amenable to non-rigid deformation. We model similarity between interactions using the ITF, while for interaction analogy we transform the ITF, either rigidly or non-rigidly, to guide the feature matching for the reposing and deformation of the anchor object. Qualitative and quantitative experiments show that our ITF-guided deformable interaction analogy works surprisingly well even with simple distance features, compared to variants of state-of-the-art methods that utilize more sophisticated interaction representations and feature learning from large datasets.

Graphical Models | Volume 133, Article 101216 | Pub Date: 2024-03-18 | DOI: 10.1016/j.gmod.2024.101216
Point cloud denoising using a generalized error metric
Qun-Ce Xu, Yong-Liang Yang, Bailin Deng

Abstract: Effective removal of noise from raw point clouds while preserving geometric features is the key challenge in point cloud denoising. To address this problem, we propose a novel method that jointly optimizes the point positions and normals. To preserve geometric features, our formulation uses a generalized robust error metric to enforce piecewise smoothness of the normal vector field as well as consistency between point positions and normals. By varying the parameter of the error metric, we gradually increase its non-convexity to guide the optimization towards a desirable solution. By combining alternating minimization with a majorization-minimization strategy, we develop a numerical solver for the optimization that guarantees convergence. The effectiveness of our method is demonstrated by extensive comparisons with previous works.

Graphical Models | Volume 133, Article 101215 | Pub Date: 2024-03-13 | DOI: 10.1016/j.gmod.2024.101215
Auxetic dihedral Escher tessellations
Xiaokang Liu, Lin Lu, Lingxin Cao, Oliver Deussen, Changhe Tu

Abstract: Auxetic structures exhibit an unconventional deployable mechanism, expanding in the transverse directions while being stretched longitudinally (a negative Poisson's ratio). This characteristic offers advantages in diverse fields such as structural engineering, flexible electronics, and medicine. The rotating (semi-)rigid structure, a typical auxetic structure, has been introduced into computer-aided design because of its well-defined motion patterns. Such structures are used as deployable structures in efforts to approximate and rapidly fabricate doubly-curved surfaces, thereby mitigating the challenges of their production and transportation. Nevertheless, prior designs relying on basic geometric elements primarily concentrate on exploring the inherent nature of the structure and often lack aesthetic appeal. To address this limitation, we propose a novel design and generation method inspired by dihedral Escher tessellations. By introducing a new metric function, we achieve efficient evaluation of shape deployability as well as filtering of tessellations, followed by a two-step deformation and edge-deployability optimization process that ensures compliance with deployability constraints while preserving semantic meaning. Furthermore, we optimize the shape through physical simulation to guarantee deployability in actual manufacturing and to control the Poisson's ratio to a certain extent. Our method yields structures that are both semantically meaningful and aesthetically pleasing, showcasing promising potential for auxetic applications.

Graphical Models | Volume 133, Article 101214 | Pub Date: 2024-02-28 | DOI: 10.1016/j.gmod.2024.101214
IGF-Fit: Implicit gradient field fitting for point cloud normal estimation
Bowen Lyu, Li-Yong Shen, Chun-Ming Yuan

Abstract: We introduce IGF-Fit, a novel method for estimating surface normals from point clouds with varying noise and density. Unlike previous approaches that rely on point-wise weights and explicit representations, IGF-Fit employs a network that learns an implicit representation and uses derivatives to predict normals. The input patch serves as both a shape latent vector and query points for fitting the implicit representation. To handle noisy input, we introduce a novel noise transformation module with a training strategy for noise classification and latent-vector bias prediction. Our experiments on synthetic and real-world scan datasets demonstrate the effectiveness of IGF-Fit, which achieves state-of-the-art performance on both noise-free and density-varying data.