Graphical Models: Latest Publications

Image restoration for digital line drawings using line masks
IF 2.5, CAS Q4, Computer Science
Graphical Models Pub Date: 2024-08-20 DOI: 10.1016/j.gmod.2024.101226
Yan Zhu, Yasushi Yamaguchi
{"title":"Image restoration for digital line drawings using line masks","authors":"Yan Zhu,&nbsp;Yasushi Yamaguchi","doi":"10.1016/j.gmod.2024.101226","DOIUrl":"10.1016/j.gmod.2024.101226","url":null,"abstract":"<div><p>The restoration of digital images holds practical significance due to the fact that degradation of digital image data on the internet is common. State-of-the-art image restoration methods usually employ end-to-end trained networks. However, we argue that a network trained with diverse image pairs is not optimal for restoring line drawings which have extensive plain backgrounds. We propose a line-drawing restoration framework which takes a restoration neural network as backbone and processes an input degraded line drawing in two steps. First, a proposed mask-predicting network predicts a line mask which indicates the possible location of foreground and background in the potential original line drawing. Next, we feed the degraded input line drawing together with the predicted line mask into the backbone restoration network. The traditional <span><math><msub><mrow><mi>L</mi></mrow><mrow><mn>1</mn></mrow></msub></math></span> loss for the backbone restoration network is substituted with a masked Mean Square Error (MSE) loss. We test our framework on two classical image restoration tasks: JPEG restoration and super-resolution, and experiments demonstrate that our framework can achieve better quantitative and visual results in most cases.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"135 ","pages":"Article 101226"},"PeriodicalIF":2.5,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1524070324000146/pdfft?md5=58619f9331f768a8dedffc9dc70f4dbb&pid=1-s2.0-S1524070324000146-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142012112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
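A minimal sketch of the masked loss idea from the abstract above: per-pixel squared errors are weighted by the predicted line mask so that stroke pixels dominate the objective. The foreground/background weighting scheme and all names are illustrative assumptions, not the paper's exact formulation.

    import torch

    def masked_mse_loss(pred, target, line_mask, fg_weight=1.0, bg_weight=0.1):
        """Hypothetical masked MSE: weight per-pixel squared error by a line mask.

        pred, target: (B, C, H, W) restored and ground-truth line drawings.
        line_mask:    (B, 1, H, W) soft mask, close to 1 on strokes and 0 on
                      the plain background.
        The fg/bg weighting is an illustrative choice, not the paper's loss.
        """
        sq_err = (pred - target) ** 2
        weights = fg_weight * line_mask + bg_weight * (1.0 - line_mask)
        # Weighted mean of squared errors (weights broadcast over channels).
        return (weights * sq_err).mean() / weights.mean().clamp(min=1e-8)

    # Usage: loss = masked_mse_loss(restored, original, predicted_mask)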
Reconstruction of the bending line for free-form bent components extracting the centroids and exploiting NURBS curves
IF 2.5, CAS Q4, Computer Science
Graphical Models Pub Date: 2024-08-19 DOI: 10.1016/j.gmod.2024.101227
Lorenzo Scandola, Maximilian Erber, Philipp Hagenlocher, Florian Steinlehner, Wolfram Volk
{"title":"Reconstruction of the bending line for free-form bent components extracting the centroids and exploiting NURBS curves","authors":"Lorenzo Scandola,&nbsp;Maximilian Erber,&nbsp;Philipp Hagenlocher,&nbsp;Florian Steinlehner,&nbsp;Wolfram Volk","doi":"10.1016/j.gmod.2024.101227","DOIUrl":"10.1016/j.gmod.2024.101227","url":null,"abstract":"<div><p>Free-form bending belongs to the kinematics-based forming processes and allows the manufacturing of arbitrary 3D-bent components. To obtain the desired part, the tool kinematics is adjusted by comparing the target and obtained bending line. While the target geometry consists of parametric CAD data, the obtained geometry is a surface mesh, making the bending line extraction a challenging task. In this paper the reconstruction of the bending line for free-form bent components is presented. The strategy relies on the extraction of the centroids, for which a ray casting algorithm is developed and compared to an existing Voronoi-based method. Subsequently the obtained points are used to fit a NURBS parametric model of the curve. The algorithm parameters are investigated with a sensitivity analysis, and its performance is evaluated with a defined error metric. Finally, the strategy is validated comparing its results with a Voronoi-based algorithm, and investigating different cross-sections and geometries.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"135 ","pages":"Article 101227"},"PeriodicalIF":2.5,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1524070324000158/pdfft?md5=5ae58aca47e71146ef63b6cd34d29835&pid=1-s2.0-S1524070324000158-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142006829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
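A minimal sketch of the curve-fitting step, assuming the cross-section centroids have already been extracted: SciPy's splprep fits a parametric cubic B-spline, which is a NURBS curve with uniform weights and here stands in for the paper's NURBS model. Function names and parameters are illustrative.

    import numpy as np
    from scipy.interpolate import splprep, splev

    def fit_bending_line(centroids, smoothing=0.0, degree=3):
        """Fit a smooth parametric curve through extracted centroid points.

        centroids: (N, 3) array of cross-section centroids along the bent part.
        Returns the spline representation (tck) and the parameter values (u).
        """
        pts = np.asarray(centroids, dtype=float)
        tck, u = splprep([pts[:, 0], pts[:, 1], pts[:, 2]], s=smoothing, k=degree)
        return tck, u

    def sample_bending_line(tck, n_samples=200):
        """Densely sample the fitted curve, e.g. to compare with the target bending line."""
        u = np.linspace(0.0, 1.0, n_samples)
        x, y, z = splev(u, tck)
        return np.stack([x, y, z], axis=1)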
Mesh deformation-based single-view 3D reconstruction of thin eyeglasses frames with differentiable rendering
IF 2.5, CAS Q4, Computer Science
Graphical Models Pub Date: 2024-08-09 DOI: 10.1016/j.gmod.2024.101225
Fan Zhang, Ziyue Ji, Weiguang Kang, Weiqing Li, Zhiyong Su
{"title":"Mesh deformation-based single-view 3D reconstruction of thin eyeglasses frames with differentiable rendering","authors":"Fan Zhang ,&nbsp;Ziyue Ji ,&nbsp;Weiguang Kang ,&nbsp;Weiqing Li ,&nbsp;Zhiyong Su","doi":"10.1016/j.gmod.2024.101225","DOIUrl":"10.1016/j.gmod.2024.101225","url":null,"abstract":"<div><p>With the support of Virtual Reality (VR) and Augmented Reality (AR) technologies, the 3D virtual eyeglasses try-on application is well on its way to becoming a new trending solution that offers a “try on” option to select the perfect pair of eyeglasses at the comfort of your own home. Reconstructing eyeglasses frames from a single image with traditional depth and image-based methods is extremely difficult due to their unique characteristics such as lack of sufficient texture features, thin elements, and severe self-occlusions. In this paper, we propose the first mesh deformation-based reconstruction framework for recovering high-precision 3D full-frame eyeglasses models from a single RGB image, leveraging prior and domain-specific knowledge. Specifically, based on the construction of a synthetic eyeglasses frame dataset, we first define a class-specific eyeglasses frame template with pre-defined keypoints. Then, given an input eyeglasses frame image with thin structure and few texture features, we design a keypoint detector and refiner to detect predefined keypoints in a coarse-to-fine manner to estimate the camera pose accurately. After that, using differentiable rendering, we propose a novel optimization approach for producing correct geometry by progressively performing free-form deformation (FFD) on the template mesh. We define a series of loss functions to enforce consistency between the rendered result and the corresponding RGB input, utilizing constraints from inherent structure, silhouettes, keypoints, per-pixel shading information, and so on. Experimental results on both the synthetic dataset and real images demonstrate the effectiveness of the proposed algorithm.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"135 ","pages":"Article 101225"},"PeriodicalIF":2.5,"publicationDate":"2024-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1524070324000134/pdfft?md5=429e33b8e8d8f39cf8d47fa19b9c19f2&pid=1-s2.0-S1524070324000134-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141937896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
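The free-form deformation (FFD) step can be illustrated with the classic trilinear Bernstein formulation. In the paper the lattice control points are what the differentiable-rendering losses would drive; the standalone NumPy sketch below (all names assumed) only shows how a displaced lattice deforms the template vertices.

    import numpy as np
    from math import comb

    def bernstein(n, i, t):
        """Bernstein basis polynomial B_{i,n}(t), evaluated elementwise."""
        return comb(n, i) * (t ** i) * ((1.0 - t) ** (n - i))

    def ffd_deform(vertices, lattice, bbox_min, bbox_max):
        """Classic trilinear free-form deformation of a template mesh.

        vertices: (V, 3) float array of template vertices inside the bounding box.
        lattice:  (L, M, N, 3) displaced control points of the FFD lattice.
        """
        L, M, N, _ = lattice.shape
        # Normalize vertices to local (s, t, u) coordinates in [0, 1]^3.
        stu = (vertices - bbox_min) / (bbox_max - bbox_min)
        deformed = np.zeros_like(vertices)
        for i in range(L):
            bi = bernstein(L - 1, i, stu[:, 0])
            for j in range(M):
                bj = bernstein(M - 1, j, stu[:, 1])
                for k in range(N):
                    bk = bernstein(N - 1, k, stu[:, 2])
                    deformed += (bi * bj * bk)[:, None] * lattice[i, j, k]
        return deformed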
High-fidelity instructional fashion image editing
IF 2.5, CAS Q4, Computer Science
Graphical Models Pub Date: 2024-07-30 DOI: 10.1016/j.gmod.2024.101223
Yinglin Zheng, Ting Zhang, Jianmin Bao, Dong Chen, Ming Zeng
{"title":"High-fidelity instructional fashion image editing","authors":"Yinglin Zheng ,&nbsp;Ting Zhang ,&nbsp;Jianmin Bao ,&nbsp;Dong Chen ,&nbsp;Ming Zeng","doi":"10.1016/j.gmod.2024.101223","DOIUrl":"10.1016/j.gmod.2024.101223","url":null,"abstract":"<div><p>Instructional image editing has received a significant surge of attention recently. In this work, we are interested in the challenging problem of instructional image editing within the particular fashion realm, a domain with significant potential demand in both commercial and personal contexts. This specific domain presents heightened challenges owing to the stringent quality requirements. It necessitates not only the creation of vivid details in alignment with instructions, but also the preservation of precise attributes unrelated to the text guidance. Naive extensions of existing image editing methods produce noticeable artifacts. In order to achieve high-fidelity fashion editing, we propose a novel framework, leveraging the generative prior of a pre-trained human generator and performing edit in the latent space. In addition, we introduce a novel CLIP-based loss to better align the generated target with the instruction. Extensive experiments demonstrate that our approach outperforms prior works including GAN-based editing as well as diffusion-based editing by a large margin, showing impressive visual quality.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"135 ","pages":"Article 101223"},"PeriodicalIF":2.5,"publicationDate":"2024-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1524070324000110/pdfft?md5=480bdc352d9fc3901d6a01e1e2794553&pid=1-s2.0-S1524070324000110-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141886752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
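The abstract names a novel CLIP-based loss without giving its form. As a point of reference, the sketch below shows the simplest variant of such a loss, the cosine distance between CLIP embeddings of the edited image and the instruction, using OpenAI's clip package; the paper's actual loss is presumably more elaborate, and the names here are assumptions.

    import torch
    import clip  # OpenAI CLIP package

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, _preprocess = clip.load("ViT-B/32", device=device)

    def clip_alignment_loss(edited_images, instructions):
        """Penalize low cosine similarity between edited images and instructions.

        edited_images: (B, 3, 224, 224) tensor already normalized for CLIP.
        instructions:  list of B instruction strings.
        """
        image_feat = model.encode_image(edited_images)
        text_feat = model.encode_text(clip.tokenize(instructions).to(device))
        image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
        text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
        cosine = (image_feat * text_feat).sum(dim=-1)
        return (1.0 - cosine).mean()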
Make static person walk again via separating pose action from shape
IF 2.5, CAS Q4, Computer Science
Graphical Models Pub Date: 2024-07-03 DOI: 10.1016/j.gmod.2024.101222
Yongwei Nie, Meihua Zhao, Qing Zhang, Ping Li, Jian Zhu, Hongmin Cai
{"title":"Make static person walk again via separating pose action from shape","authors":"Yongwei Nie ,&nbsp;Meihua Zhao ,&nbsp;Qing Zhang ,&nbsp;Ping Li ,&nbsp;Jian Zhu ,&nbsp;Hongmin Cai","doi":"10.1016/j.gmod.2024.101222","DOIUrl":"https://doi.org/10.1016/j.gmod.2024.101222","url":null,"abstract":"<div><p>This paper addresses the problem of animating a person in static images, the core task of which is to infer future poses for the person. Existing approaches predict future poses in the 2D space, suffering from entanglement of pose action and shape. We propose a method that generates actions in the 3D space and then transfers them to the 2D person. We first lift the 2D pose of the person to a 3D skeleton, then propose a 3D action synthesis network predicting future skeletons, and finally devise a self-supervised action transfer network that transfers the actions of 3D skeletons to the 2D person. Actions generated in the 3D space look plausible and vivid. More importantly, self-supervised action transfer allows our method to be trained only on a 3D MoCap dataset while being able to process images in different domains. Experiments on three image datasets validate the effectiveness of our method.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"134 ","pages":"Article 101222"},"PeriodicalIF":2.5,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1524070324000109/pdfft?md5=625da7fe01537f9691e2758137e210d0&pid=1-s2.0-S1524070324000109-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141541322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Bilateral transformer 3D planar recovery
IF 2.5, CAS Q4, Computer Science
Graphical Models Pub Date: 2024-06-21 DOI: 10.1016/j.gmod.2024.101221
Fei Ren, Chunhua Liao, Zhina Xie
{"title":"Bilateral transformer 3D planar recovery","authors":"Fei Ren ,&nbsp;Chunhua Liao ,&nbsp;Zhina Xie","doi":"10.1016/j.gmod.2024.101221","DOIUrl":"https://doi.org/10.1016/j.gmod.2024.101221","url":null,"abstract":"<div><p>In recent years, deep learning based methods for single image 3D planar recovery have made significant progress, but most of the research has focused on overall plane segmentation performance rather than the accuracy of small scale plane segmentation. In order to solve the problem of feature loss in the feature extraction process of small target object features, a three dimensional planar recovery method based on bilateral transformer was proposed. The two sided network branches capture rich small object target features through different scale sampling, and are used for detecting planar and non-planar regions respectively. In addition, the loss of variational information is used to share the parameters of the bilateral network, which achieves the output consistency of the bilateral network and alleviates the problem of feature loss of small target objects. The method is verified on Scannet and Nyu V2 datasets, and a variety of evaluation indexes are superior to the current popular algorithms, proving the effectiveness of the method in three dimensional planar recovery.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"134 ","pages":"Article 101221"},"PeriodicalIF":2.5,"publicationDate":"2024-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1524070324000092/pdfft?md5=b6e8dcdf8c08f479bd4a08431705f4a8&pid=1-s2.0-S1524070324000092-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141444414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Persistent geometry-topology descriptor for porous structure retrieval based on Heat Kernel Signature
IF 1.7, CAS Q4, Computer Science
Graphical Models Pub Date: 2024-06-01 DOI: 10.1016/j.gmod.2024.101219
Peisheng Zhuo, Zitong He, Hongwei Lin
{"title":"Persistent geometry-topology descriptor for porous structure retrieval based on Heat Kernel Signature","authors":"Peisheng Zhuo ,&nbsp;Zitong He ,&nbsp;Hongwei Lin","doi":"10.1016/j.gmod.2024.101219","DOIUrl":"10.1016/j.gmod.2024.101219","url":null,"abstract":"&lt;div&gt;&lt;p&gt;Porous structures are essential in a variety of fields such as materials science and chemistry. To retrieve porous materials efficiently, novel descriptors are required to quantify the geometric and topological features. In this paper, we present a novel framework to create a descriptor that incorporates both topological and geometric information of a porous structure. To capture geometric information, we keep track of the &lt;span&gt;&lt;math&gt;&lt;mrow&gt;&lt;mi&gt;b&lt;/mi&gt;&lt;mi&gt;i&lt;/mi&gt;&lt;mi&gt;r&lt;/mi&gt;&lt;mi&gt;t&lt;/mi&gt;&lt;mi&gt;h&lt;/mi&gt;&lt;mspace&gt;&lt;/mspace&gt;&lt;mspace&gt;&lt;/mspace&gt;&lt;mi&gt;t&lt;/mi&gt;&lt;mi&gt;i&lt;/mi&gt;&lt;mi&gt;m&lt;/mi&gt;&lt;mi&gt;e&lt;/mi&gt;&lt;/mrow&gt;&lt;/math&gt;&lt;/span&gt; and &lt;span&gt;&lt;math&gt;&lt;mrow&gt;&lt;mi&gt;d&lt;/mi&gt;&lt;mi&gt;e&lt;/mi&gt;&lt;mi&gt;a&lt;/mi&gt;&lt;mi&gt;t&lt;/mi&gt;&lt;mi&gt;h&lt;/mi&gt;&lt;mspace&gt;&lt;/mspace&gt;&lt;mspace&gt;&lt;/mspace&gt;&lt;mi&gt;t&lt;/mi&gt;&lt;mi&gt;i&lt;/mi&gt;&lt;mi&gt;m&lt;/mi&gt;&lt;mi&gt;e&lt;/mi&gt;&lt;/mrow&gt;&lt;/math&gt;&lt;/span&gt; of the &lt;span&gt;&lt;math&gt;&lt;mrow&gt;&lt;mi&gt;p&lt;/mi&gt;&lt;mi&gt;e&lt;/mi&gt;&lt;mi&gt;r&lt;/mi&gt;&lt;mi&gt;s&lt;/mi&gt;&lt;mi&gt;i&lt;/mi&gt;&lt;mi&gt;s&lt;/mi&gt;&lt;mi&gt;t&lt;/mi&gt;&lt;mi&gt;e&lt;/mi&gt;&lt;mi&gt;n&lt;/mi&gt;&lt;mi&gt;t&lt;/mi&gt;&lt;mspace&gt;&lt;/mspace&gt;&lt;mi&gt;f&lt;/mi&gt;&lt;mi&gt;e&lt;/mi&gt;&lt;mi&gt;a&lt;/mi&gt;&lt;mi&gt;t&lt;/mi&gt;&lt;mi&gt;u&lt;/mi&gt;&lt;mi&gt;r&lt;/mi&gt;&lt;mi&gt;e&lt;/mi&gt;&lt;/mrow&gt;&lt;/math&gt;&lt;/span&gt;s of a real-valued function on the surface that evolves with a parameter. 
Then, we generate the corresponding &lt;span&gt;&lt;math&gt;&lt;mrow&gt;&lt;mi&gt;p&lt;/mi&gt;&lt;mi&gt;e&lt;/mi&gt;&lt;mi&gt;r&lt;/mi&gt;&lt;mi&gt;s&lt;/mi&gt;&lt;mi&gt;i&lt;/mi&gt;&lt;mi&gt;s&lt;/mi&gt;&lt;mi&gt;t&lt;/mi&gt;&lt;mi&gt;e&lt;/mi&gt;&lt;mi&gt;n&lt;/mi&gt;&lt;mi&gt;t&lt;/mi&gt;&lt;mspace&gt;&lt;/mspace&gt;&lt;mspace&gt;&lt;/mspace&gt;&lt;mi&gt;f&lt;/mi&gt;&lt;mi&gt;e&lt;/mi&gt;&lt;mi&gt;a&lt;/mi&gt;&lt;mi&gt;t&lt;/mi&gt;&lt;mi&gt;u&lt;/mi&gt;&lt;mi&gt;r&lt;/mi&gt;&lt;mi&gt;e&lt;/mi&gt;&lt;mspace&gt;&lt;/mspace&gt;&lt;mspace&gt;&lt;/mspace&gt;&lt;mi&gt;d&lt;/mi&gt;&lt;mi&gt;i&lt;/mi&gt;&lt;mi&gt;a&lt;/mi&gt;&lt;mi&gt;g&lt;/mi&gt;&lt;mi&gt;r&lt;/mi&gt;&lt;mi&gt;a&lt;/mi&gt;&lt;mi&gt;m&lt;/mi&gt;&lt;/mrow&gt;&lt;/math&gt;&lt;/span&gt; (&lt;span&gt;&lt;math&gt;&lt;mrow&gt;&lt;mi&gt;D&lt;/mi&gt;&lt;mi&gt;g&lt;/mi&gt;&lt;msub&gt;&lt;mrow&gt;&lt;mi&gt;m&lt;/mi&gt;&lt;/mrow&gt;&lt;mrow&gt;&lt;mi&gt;P&lt;/mi&gt;&lt;mi&gt;F&lt;/mi&gt;&lt;/mrow&gt;&lt;/msub&gt;&lt;/mrow&gt;&lt;/math&gt;&lt;/span&gt;) and convert it into a vector called &lt;span&gt;&lt;math&gt;&lt;mrow&gt;&lt;mi&gt;p&lt;/mi&gt;&lt;mi&gt;e&lt;/mi&gt;&lt;mi&gt;r&lt;/mi&gt;&lt;mi&gt;s&lt;/mi&gt;&lt;mi&gt;i&lt;/mi&gt;&lt;mi&gt;s&lt;/mi&gt;&lt;mi&gt;t&lt;/mi&gt;&lt;mi&gt;e&lt;/mi&gt;&lt;mi&gt;n&lt;/mi&gt;&lt;mi&gt;c&lt;/mi&gt;&lt;mi&gt;e&lt;/mi&gt;&lt;mspace&gt;&lt;/mspace&gt;&lt;mspace&gt;&lt;/mspace&gt;&lt;mi&gt;f&lt;/mi&gt;&lt;mi&gt;e&lt;/mi&gt;&lt;mi&gt;a&lt;/mi&gt;&lt;mi&gt;t&lt;/mi&gt;&lt;mi&gt;u&lt;/mi&gt;&lt;mi&gt;r&lt;/mi&gt;&lt;mi&gt;e&lt;/mi&gt;&lt;mspace&gt;&lt;/mspace&gt;&lt;mspace&gt;&lt;/mspace&gt;&lt;mi&gt;d&lt;/mi&gt;&lt;mi&gt;e&lt;/mi&gt;&lt;mi&gt;s&lt;/mi&gt;&lt;mi&gt;c&lt;/mi&gt;&lt;mi&gt;r&lt;/mi&gt;&lt;mi&gt;i&lt;/mi&gt;&lt;mi&gt;p&lt;/mi&gt;&lt;mi&gt;t&lt;/mi&gt;&lt;mi&gt;o&lt;/mi&gt;&lt;mi&gt;r&lt;/mi&gt;&lt;/mrow&gt;&lt;/math&gt;&lt;/span&gt; (PFD). To extract topological information, we sample points from the pore surface and compute the corresponding persistence diagram, which is then transformed into the Persistence B-Spline Grids (PBSG). Our proposed descriptor, namely &lt;span&gt;&lt;math&gt;&lt;mrow&gt;&lt;mi&gt;p&lt;/mi&gt;&lt;mi&gt;e&lt;/mi&gt;&lt;mi&gt;r&lt;/mi&gt;&lt;mi&gt;s&lt;/mi&gt;&lt;mi&gt;i&lt;/mi&gt;&lt;mi&gt;s&lt;/mi&gt;&lt;mi&gt;t&lt;/mi&gt;&lt;mi&gt;e&lt;/mi&gt;&lt;mi&gt;n&lt;/mi&gt;&lt;mi&gt;t&lt;/mi&gt;&lt;mspace&gt;&lt;/mspace&gt;&lt;mspace&gt;&lt;/mspace&gt;&lt;mi&gt;g&lt;/mi&gt;&lt;mi&gt;e&lt;/mi&gt;&lt;mi&gt;o&lt;/mi&gt;&lt;mi&gt;m&lt;/mi&gt;&lt;mi&gt;e&lt;/mi&gt;&lt;mi&gt;t&lt;/mi&gt;&lt;mi&gt;r&lt;/mi&gt;&lt;mi&gt;y&lt;/mi&gt;&lt;mo&gt;−&lt;/mo&gt;&lt;mi&gt;t&lt;/mi&gt;&lt;mi&gt;o&lt;/mi&gt;&lt;mi&gt;p&lt;/mi&gt;&lt;mi&gt;o&lt;/mi&gt;&lt;mi&gt;l&lt;/mi&gt;&lt;mi&gt;o&lt;/mi&gt;&lt;mi&gt;g&lt;/mi&gt;&lt;mi&gt;y&lt;/mi&gt;&lt;mspace&gt;&lt;/mspace&gt;&lt;mspace&gt;&lt;/mspace&gt;&lt;mi&gt;d&lt;/mi&gt;&lt;mi&gt;e&lt;/mi&gt;&lt;mi&gt;s&lt;/mi&gt;&lt;mi&gt;c&lt;/mi&gt;&lt;mi&gt;r&lt;/mi&gt;&lt;mi&gt;i&lt;/mi&gt;&lt;mi&gt;p&lt;/mi&gt;&lt;mi&gt;t&lt;/mi&gt;&lt;mi&gt;o&lt;/mi&gt;&lt;mi&gt;r&lt;/mi&gt;&lt;/mrow&gt;&lt;/math&gt;&lt;/span&gt; (PGTD), is obtained by concatenating PFD with PBSG. In our experiments, we use the heat kernel signature (HKS) as the real-valued function to compute the descriptor. 
We test the method on a synthetic porous dataset and a zeolite dataset and find that it is competitive compa","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"133 ","pages":"Article 101219"},"PeriodicalIF":1.7,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1524070324000079/pdfft?md5=499cdacea6ff6d72e1f6c905040f66c2&pid=1-s2.0-S1524070324000079-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141232284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
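The heat kernel signature used as the real-valued function above has a standard closed form once a Laplace-Beltrami eigendecomposition of the surface is available (for example from a cotangent Laplacian). The NumPy sketch below computes only that signature; the persistence diagram, PFD, and PBSG stages of the descriptor are not shown.

    import numpy as np

    def heat_kernel_signature(eigvals, eigvecs, times):
        """HKS(x, t) = sum_i exp(-lambda_i * t) * phi_i(x)^2.

        eigvals: (K,) smallest Laplace-Beltrami eigenvalues lambda_i.
        eigvecs: (V, K) eigenfunctions phi_i sampled at the V surface vertices.
        times:   (T,) diffusion times t.
        Returns a (V, T) array of signature values.
        """
        lam = np.asarray(eigvals)[None, :, None]        # (1, K, 1)
        phi_sq = np.asarray(eigvecs)[:, :, None] ** 2   # (V, K, 1)
        t = np.asarray(times)[None, None, :]            # (1, 1, T)
        return (np.exp(-lam * t) * phi_sq).sum(axis=1)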
An exact algorithm for two-dimensional cutting problems based on multi-level pattern
IF 1.7, CAS Q4, Computer Science
Graphical Models Pub Date: 2024-05-25 DOI: 10.1016/j.gmod.2024.101220
Weiping Pan
{"title":"An exact algorithm for two-dimensional cutting problems based on multi-level pattern","authors":"Weiping Pan","doi":"10.1016/j.gmod.2024.101220","DOIUrl":"https://doi.org/10.1016/j.gmod.2024.101220","url":null,"abstract":"<div><p>A multi-level pattern is proposed for the unconstrained two-dimensional cutting problems of rectangular items, and an exact generation algorithm is constructed. The arrangement of rectangular items with the same type in multiple rows and columns is referred to as a 0-level pattern. An <em>n</em>-level pattern is the horizontal or vertical combination of an <em>n</em>-1 level pattern with a pattern whose level will not exceed <em>n</em>-1. The generation algorithm of multi-level pattern is constructed on the base of dynamic programming, and the multi-level patterns with various sizes are generated with increased level order. The normal size is chosen to reduce unnecessary computation in the algorithm. Three sets of benchmark instances and one set of random production instance from the literatures are used for comparison. Comparing to the exact algorithm in the literature, the results in this paper are equivalent, but the computation time is shorter. Comparing to heuristic algorithms in literatures, the results in this paper are better and the computation time is still good enough for practical applications.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"133 ","pages":"Article 101220"},"PeriodicalIF":1.7,"publicationDate":"2024-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1524070324000080/pdfft?md5=7ba46c24bfd0defb95fae7879ef5f757&pid=1-s2.0-S1524070324000080-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141095105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
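To make the horizontal/vertical combination idea concrete, here is a much simplified dynamic program for unconstrained guillotine cutting with integer dimensions: each sub-sheet is either occupied by a single item type or split by one horizontal or vertical cut. It omits the paper's multi-level pattern bookkeeping and normal-size pruning, and all names are illustrative.

    from functools import lru_cache

    def max_pattern_value(W, H, items):
        """Best value obtainable from a W x H sheet with guillotine cuts.

        items: list of (w, h, value) rectangles, each usable any number of times.
        """
        @lru_cache(maxsize=None)
        def best(w, h):
            value = 0
            # Place a single item that fits this sub-sheet.
            for iw, ih, iv in items:
                if iw <= w and ih <= h:
                    value = max(value, iv)
            # One vertical cut at x, then solve the two sub-sheets.
            for x in range(1, w // 2 + 1):
                value = max(value, best(x, h) + best(w - x, h))
            # One horizontal cut at y.
            for y in range(1, h // 2 + 1):
                value = max(value, best(w, y) + best(w, h - y))
            return value

        return best(W, H)

    # Example: max_pattern_value(10, 10, [(4, 3, 12), (5, 5, 20)])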
Rod-Bonded Discrete Element Method
IF 1.7, CAS Q4, Computer Science
Graphical Models Pub Date: 2024-04-11 DOI: 10.1016/j.gmod.2024.101218
Kangrui Zhang, Han Yan, Jia-Ming Lu, Bo Ren
{"title":"Rod-Bonded Discrete Element Method","authors":"Kangrui Zhang ,&nbsp;Han Yan ,&nbsp;Jia-Ming Lu ,&nbsp;Bo Ren","doi":"10.1016/j.gmod.2024.101218","DOIUrl":"https://doi.org/10.1016/j.gmod.2024.101218","url":null,"abstract":"<div><p>The Bonded Discrete Element Method (BDEM) has raised interests in the graphics community in recent years because of its good performance in fracture simulations. However, current explicit BDEM usually needs to work under very small time steps to avoid numerical instability. We propose a new BDEM, namely Rod-BDEM (RBDEM), which uses Cosserat energy and yields integrable forces and torques. We further derive a novel Cosserat rod discretization method to effectively represent the three-dimensional topological connections between discrete elements. Then, a complete implicit BDEM system integrating the appropriate fracture model and contact model is constructed using the implicit Euler integration scheme. Our method allows high Young’s modulus and larger time steps in elastic deformation, breaking, cracking, and impacting, achieving up to 8 times speed up of the total simulation.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"133 ","pages":"Article 101218"},"PeriodicalIF":1.7,"publicationDate":"2024-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1524070324000067/pdfft?md5=8b3b530fbeef40a4b36880d3c7a36d0c&pid=1-s2.0-S1524070324000067-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140542596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
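Why implicit integration tolerates stiff bonds and larger time steps can be seen on a toy 1D bond between two elements. The linearized implicit Euler step below is generic, not the paper's Cosserat rod formulation (no torques, fracture, or contact), and every name is illustrative.

    import numpy as np

    def implicit_euler_bond_step(x, v, rest_len, k, mass, dt):
        """One implicit Euler step for two elements joined by a stiff linear bond.

        x, v: (2,) positions and velocities of the two elements.
        Solves (M - dt^2 K) v_new = M v + dt f(x), where K = df/dx, which stays
        stable for large stiffness k and large dt, unlike explicit integration.
        """
        M = np.diag([mass, mass])
        stretch = (x[1] - x[0]) - rest_len
        f = np.array([k * stretch, -k * stretch])      # bond force on each element
        K = k * np.array([[-1.0, 1.0], [1.0, -1.0]])   # force Jacobian df/dx
        A = M - dt * dt * K
        v_new = np.linalg.solve(A, M @ v + dt * f)
        x_new = x + dt * v_new
        return x_new, v_new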
DINA: Deformable INteraction Analogy
IF 1.7, CAS Q4, Computer Science
Graphical Models Pub Date: 2024-03-20 DOI: 10.1016/j.gmod.2024.101217
Zeyu Huang, Sisi Dai, Kai Xu, Hao Zhang, Hui Huang, Ruizhen Hu
{"title":"DINA: Deformable INteraction Analogy","authors":"Zeyu Huang ,&nbsp;Sisi Dai ,&nbsp;Kai Xu ,&nbsp;Hao Zhang ,&nbsp;Hui Huang ,&nbsp;Ruizhen Hu","doi":"10.1016/j.gmod.2024.101217","DOIUrl":"https://doi.org/10.1016/j.gmod.2024.101217","url":null,"abstract":"<div><p>We introduce deformable interaction analogy (DINA) as a means to generate close interactions between two 3D objects. Given a single demo interaction between an anchor object (e.g. a hand) and a source object (e.g. a mug grasped by the hand), our goal is to generate many analogous 3D interactions between the same anchor object and various new target objects (e.g. a toy airplane), where the anchor object is allowed to be rigid or deformable. To this end, we optimize the pose or shape of the anchor object to adapt it to a new target object to mimic the demo. To facilitate the optimization, we advocate using interaction interface (ITF), defined by a set of points sampled on the anchor object, as a descriptive and robust interaction representation that is amenable to non-rigid deformation. We model similarity between interactions using ITF, while for interaction analogy, we transform the ITF, either rigidly or non-rigidly, to guide the feature matching to the reposing and deformation of the anchor object. Qualitative and quantitative experiments show that our ITF-guided deformable interaction analogy works surprisingly well even with simple distance features compared to variants of state-of-the-art methods that utilize more sophisticated interaction representations and feature learning from large datasets.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"133 ","pages":"Article 101217"},"PeriodicalIF":1.7,"publicationDate":"2024-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1524070324000055/pdfft?md5=94513ad40e92d864add74b24ff48c5e2&pid=1-s2.0-S1524070324000055-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140163180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
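The interaction interface (ITF) is described as a set of points sampled on the anchor object, compared across interactions with simple distance features. The sketch below is one plausible reading of that idea: each ITF point's distance to the target object's surface is recorded, and two interactions are compared through those per-point distances. The paper's actual feature definition, matching, and deformation guidance are not reproduced, and all names are assumptions.

    import numpy as np
    from scipy.spatial import cKDTree

    def itf_distance_features(itf_points, object_points):
        """Nearest-surface distance of every ITF point.

        itf_points:    (P, 3) points sampled on the anchor object (e.g. a hand).
        object_points: (Q, 3) points sampled on the interacting object's surface.
        """
        tree = cKDTree(object_points)
        dists, _ = tree.query(itf_points)
        return dists

    def interaction_similarity(feat_demo, feat_candidate):
        """Higher is more similar: negative L2 distance between feature vectors."""
        return -float(np.linalg.norm(feat_demo - feat_candidate))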