Latest Articles in Graphics and Visual Computing

A point selection strategy with edge and line detection for Direct Sparse Visual Odometry
Graphics and Visual Computing | Pub Date: 2022-06-01 | DOI: 10.1016/j.gvc.2022.200051
Yinming Miao, Masahiro Yamaguchi
Abstract: In most feature-based Visual Simultaneous Localization and Mapping (SLAM) systems, pixels in the current image are matched against corresponding pixels in previous images, and the change in pixel coordinates reveals the motion of the camera. Direct methods, by contrast, operate on image intensities directly: every pixel in the image, or a subset of pixels with sufficient intensity gradient, can be used. However, image noise can degrade these algorithms when pixels are not selected carefully. In this work, we propose a new pixel selection method for a direct visual odometry system that focuses on edge pixels, which are usually more stable and repeatable than ordinary pixels. We apply traditional edge detection with adaptive parameters to obtain rough edge results, then separate the edges by gradient and shape, using straightness, smoothness, length, and gradient magnitude to retain the meaningful edges. We replace the pixel selection step of Direct Sparse Odometry and Direct Sparse Odometry with Loop Closure, and evaluate on open datasets. The experimental results indicate that our method improves the performance of existing direct visual odometry systems in man-made scenes, but is not suitable for purely natural scenes.
Graphics and Visual Computing, Vol. 6, Article 200051.
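The selection criteria the abstract lists (high gradient magnitude plus edge-shape scores such as straightness) can be illustrated with a minimal NumPy sketch. The detector, the percentile threshold, and the scoring below are simplified stand-ins for the paper's adaptive edge pipeline, not the authors' implementation:

```python
import numpy as np

def gradient_magnitude(img):
    # Central-difference image gradients: a stand-in for the paper's
    # traditional edge detector (e.g. Canny with adaptive parameters).
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def select_edge_pixels(img, keep_fraction=0.05):
    # Adaptive threshold: keep the top `keep_fraction` of pixels by
    # gradient magnitude, mimicking selection of high-gradient edge pixels.
    mag = gradient_magnitude(img)
    thresh = np.quantile(mag, 1.0 - keep_fraction)
    ys, xs = np.nonzero(mag >= thresh)
    return list(zip(ys.tolist(), xs.tolist()))

def straightness(points):
    # Ratio of endpoint distance to path length: 1.0 for a straight pixel
    # chain, smaller for curved ones (one of the paper's edge scores).
    pts = np.asarray(points, dtype=float)
    chord = np.linalg.norm(pts[-1] - pts[0])
    arc = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    return chord / arc if arc > 0 else 1.0
```

On a step-edge test image, only pixels along the intensity boundary survive the adaptive threshold, and a perfectly straight pixel chain scores a straightness of 1.0.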
Citations: 1
Overcoming challenges when teaching hands-on courses about Virtual Reality and Augmented Reality: Methods, techniques and best practice
Graphics and Visual Computing | Pub Date: 2022-06-01 | DOI: 10.1016/j.gvc.2021.200037
Ralf Doerner, Robin Horst
Abstract: This paper presents methods and techniques for teaching Virtual Reality (VR) and Augmented Reality (AR) that were conceived and refined over more than 20 years of our teaching experience in higher education. We cover a broad spectrum, from introducing learners to VR and AR as one aspect of a more general course to an in-depth, semester-long course on VR and AR. The paper focuses on methods and techniques that let learners go beyond a theoretical understanding of VR and AR, facilitating their own VR and AR experiences with all senses and fostering hands-on learning. We show why this is challenging (e.g., the high workload involved in preparing hands-on experiences, and the large amount of course time that must be devoted to them) and how these challenges can be met (e.g., using our Circuit Parcours Technique). Moreover, we discuss learning goals beyond hands-on experience that can be addressed in VR and AR courses using our methods and techniques. Finally, we provide best-practice examples that can serve as blueprints for parts of a VR and AR course.
Graphics and Visual Computing, Vol. 6, Article 200037.
Citations: 9
GRSI Best Paper Award
Graphics and Visual Computing | Pub Date: 2021-12-01 | DOI: 10.1016/S2666-6294(21)00020-6
Mashhuda Glencross, Daniele Panozzo, Joaquim Jorge
Graphics and Visual Computing, Vol. 5, Article 200039. (No abstract.)
Citations: 0
Robust marker-based projector–camera synchronization
Graphics and Visual Computing | Pub Date: 2021-12-01 | DOI: 10.1016/j.gvc.2021.200034
Vanessa Klein, Martin Edel, Marc Stamminger, Frank Bauer
Abstract: Recording clean pictures of projected images requires the projector and camera to be synchronized. This usually demands additional hardware or, with software-based approaches, imposes major restrictions on the devices, e.g., a specific camera frame rate. We present a novel software-based synchronization technique that supports projectors and cameras with different frame rates while tolerating dropped camera frames. We focus on the special needs of LCD projectors and the effect of their liquid-crystal response time on the projected image. By relying on visible marker detection, we refrain entirely from taking time measurements, allowing for robust and fast synchronization.
Graphics and Visual Computing, Vol. 5, Article 200034.
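The marker idea can be sketched as follows: if each projected frame carries a visible marker encoding its frame index, the camera can identify which frame it captured from image content alone, with no time measurement. The black/white block strip below is a hypothetical stand-in for the paper's marker design, which is more elaborate:

```python
import numpy as np

MARKER_BITS = 8  # illustrative marker capacity: 256 distinct frame IDs

def embed_marker(frame, frame_id):
    # Stamp the frame index as a row of black/white blocks along the top
    # edge of the projected frame (one block per bit, LSB first).
    marked = frame.copy()
    h, w = marked.shape
    block = w // MARKER_BITS
    for bit in range(MARKER_BITS):
        value = 1.0 if (frame_id >> bit) & 1 else 0.0
        marked[0:4, bit * block:(bit + 1) * block] = value
    return marked

def decode_marker(camera_image):
    # Recover the frame index by thresholding the block averages, so no
    # time measurement is needed: the image itself says which frame it is.
    h, w = camera_image.shape
    block = w // MARKER_BITS
    frame_id = 0
    for bit in range(MARKER_BITS):
        mean = camera_image[0:4, bit * block:(bit + 1) * block].mean()
        if mean > 0.5:
            frame_id |= 1 << bit
    return frame_id
```

Because the identification survives arbitrary timing offsets, a dropped camera frame simply skips an ID rather than desynchronizing the pipeline.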
Citations: 0
A comprehensive evaluation of deep models and optimizers for Indian sign language recognition
Graphics and Visual Computing | Pub Date: 2021-12-01 | DOI: 10.1016/j.gvc.2021.200032
Prachi Sharma, Radhey Shyam Anand
Abstract: Deep learning has long been popular among researchers, and new deep convolutional neural networks still appear frequently. However, selecting the best among such networks is challenging because their performance depends on the tuning of optimization hyperparameters, which is a non-trivial task. This situation motivates the current study, in which we perform a systematic evaluation and statistical analysis of pre-trained deep models. It is the first comprehensive analysis of pre-trained deep models, gradient-based optimizers, and optimization hyperparameters for static Indian Sign Language (ISL) recognition. A three-layered CNN model is also proposed and trained from scratch, attaining the best recognition accuracies of 99.0% and 97.6% on the numerals and alphabets of a public ISL dataset. Among pre-trained models, ResNet152V2 performed best, with a recognition accuracy of 96.2% on numerals and 90.8% on alphabets. Our results reinforce the hypothesis that, in general, an adequately tuned pre-trained deep network can exceed state-of-the-art machine learning techniques without retraining the whole model; only a few top layers need to be trained for ISL recognition. The effect of hyperparameters such as learning rate, batch size, and momentum is also analyzed and presented in the paper.
Graphics and Visual Computing, Vol. 5, Article 200032.
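The transfer-learning recipe behind that hypothesis (freeze the pre-trained backbone, train only the top layers) can be sketched with a toy NumPy model. The fixed random "backbone" below stands in for pre-trained convolutional features; a real ISL experiment would instead load a network such as ResNet152V2 in a deep-learning framework:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "backbone": a fixed random projection standing in for
# pre-trained convolutional features (never updated below).
W_backbone = rng.normal(size=(16, 8))

def features(x):
    # Frozen ReLU feature extractor.
    return np.maximum(x @ W_backbone, 0.0)

def train_head(xs, ys, epochs=300, lr=0.05):
    # Train only the top layer (a logistic-regression head) by gradient
    # descent; the backbone weights stay untouched, as when fine-tuning
    # only the last few layers of a pre-trained network.
    w = np.zeros(W_backbone.shape[1])
    b = 0.0
    for _ in range(epochs):
        z = features(xs) @ w + b
        p = 1.0 / (1.0 + np.exp(-z))
        grad = p - ys
        w -= lr * features(xs).T @ grad / len(ys)
        b -= lr * grad.mean()
    return w, b

def predict(xs, w, b):
    return (features(xs) @ w + b > 0).astype(int)
```

On two well-separated synthetic classes, the head reaches high training accuracy while the backbone weights remain bit-for-bit unchanged.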
Citations: 12
Erratum to "Foreword to the Special Section on CAD/Graphics 2021" [Graph. Vis. Comput. 4 (2021) 200027]
Graphics and Visual Computing | Pub Date: 2021-12-01 | DOI: 10.1016/j.gvc.2021.200031
Juyong Zhang, Rui Wang, Giuseppe Patanè
Graphics and Visual Computing, Vol. 5, Article 200031. (No abstract.)
Citations: 0
Volumetric procedural models for shape representation
Graphics and Visual Computing | Pub Date: 2021-06-01 | DOI: 10.1016/j.gvc.2021.200018
Andrew R. Willis, Prashant Ganesh, Kyle Volle, Jincheng Zhang, Kevin Brink
Abstract: This article describes a volumetric approach to procedural shape modeling and a new Procedural Shape Modeling Language (PSML) that facilitates the specification of these models. PSML lets programmers describe shapes in terms of their 3D elements, where each element may be a semantic group of 3D objects, e.g., a brick wall, or an indivisible object, e.g., an individual brick. Modeling shapes in this manner facilitates the creation of models that more closely approximate the organization and structure of their real-world counterparts. Users may therefore query these models for volumetric information, such as the number, position, orientation, and volume of 3D elements, which surface-based model-building techniques cannot provide. PSML also offers a number of new language-specific capabilities that allow a rich variety of context-sensitive behaviors and post-processing functions, including an object-oriented approach to model design, methods for querying the model for component-based information, and access to model elements and components for performing Boolean operations on the model parts. PSML is open source and includes freely available tutorial videos, demonstration code, and an integrated development environment to support writing PSML programs.
Graphics and Visual Computing, Vol. 4, Article 200018.
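Although PSML is its own language, the volumetric idea (query-able occupancy plus Boolean operations on parts) can be illustrated in a few lines of NumPy on a voxel grid. This is a generic sketch, not PSML syntax, and the voxel size is an arbitrary choice:

```python
import numpy as np

VOXEL = 0.1  # edge length of one voxel, in scene units (illustrative)

def box(grid_shape, lo, hi):
    # Occupancy grid for an axis-aligned box: True inside [lo, hi).
    occ = np.zeros(grid_shape, dtype=bool)
    occ[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = True
    return occ

def volume(occ):
    # A volumetric query unavailable to pure surface representations:
    # count occupied voxels and scale by the voxel volume.
    return int(occ.sum()) * VOXEL ** 3

# Boolean operations on volumetric parts (PSML exposes comparable
# operations on model components; here they are plain array ops).
wall = box((20, 20, 20), (0, 0, 0), (20, 4, 10))
window = box((20, 20, 20), (8, 0, 4), (12, 4, 8))
wall_with_window = wall & ~window  # difference: cut the window out
```

The `volume` query on `wall_with_window` returns exactly the wall volume minus the window volume, which is the kind of component-level information the abstract highlights.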
Citations: 3
Computer Graphics teaching challenges: Guidelines for balancing depth, complexity and mentoring in a confinement context
Graphics and Visual Computing | Pub Date: 2021-06-01 | DOI: 10.1016/j.gvc.2021.200021
Rui Rodrigues, Teresa Matos, Alexandre Valle de Carvalho, Jorge G. Barbosa, Rodrigo Assaf, Rui Nóbrega, António Coelho, A. Augusto de Sousa
Abstract: We discuss challenges, methodologies, and approaches for teaching Computer Graphics (CG) courses in a confinement context, assess the experience, and propose guidelines. Our approach balances the depth of CG topics with the creation of relevant and attractive content for a CG course, while coping with communication, support, and assessment issues. These are especially important in a pandemic context, where online classes may reduce students' engagement and hinder communication with educators. We refined the model used over recent years, based on a two-stage approach (first tutorial, then project-based) relying on an in-house WebGL-based educational library, WebCGF, that simplifies onboarding while keeping connections to the underlying concepts and technologies. The confinement constraints led us to complement that model with additional collaborative tools and mentoring strategies. Apart from standard synchronous remote classes, these included a group communication tool for structured community engagement and video presentation, and a Git-based code management system specifically configured for classes and groups, which allowed following each student's development process more closely. Results show that performance and student engagement were similar to those of recent years, leading us to a set of guidelines to consider in these contexts.
Graphics and Visual Computing, Vol. 4, Article 200021.
Citations: 3
Single trunk multi-scale network for micro-expression recognition
Graphics and Visual Computing | Pub Date: 2021-06-01 | DOI: 10.1016/j.gvc.2021.200026
Jie Wang, Xiao Pan, Xinyu Li, Guangshun Wei, Yuanfeng Zhou
Abstract: Micro-expressions are external manifestations of human psychological activity, so micro-expression recognition has important research and application value in fields such as public services, criminal investigation, and clinical diagnosis. However, the particular characteristics of micro-expressions (e.g., short duration and subtle changes) pose great challenges for recognition. In this paper, we exploit differences in the direction of facial muscle movement across expressions to recognize micro-expressions. We first use optical flow to capture the subtle facial movements that occur with a micro-expression. Next, we extract the movement information into an aniso-weighted optical flow image by anisotropically weighting the horizontal and vertical components of the optical flow. Finally, we feed the aniso-weighted optical flow image into the proposed Single Trunk Multi-scale Network for micro-expression recognition. In particular, the multi-scale feature catcher designed into the network can capture micro-expression features of different intensities. We conduct extensive experiments on four spontaneous micro-expression datasets, and the results show that our proposed method is competitive and effective.
Graphics and Visual Computing, Vol. 4, Article 200026.
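The aniso-weighted optical flow image can be sketched in NumPy as follows. The weights and the channel layout are illustrative assumptions, since the abstract does not give the exact formulation:

```python
import numpy as np

def aniso_weighted_flow_image(u, v, alpha=1.5, beta=0.75):
    # Weight the horizontal (u) and vertical (v) optical-flow components
    # anisotropically, then stack them with the resulting magnitude into
    # a 3-channel image for the recognition network. The weights alpha
    # and beta and the channel layout are illustrative choices.
    wu = alpha * u
    wv = beta * v
    mag = np.hypot(wu, wv)
    img = np.stack([wu, wv, mag], axis=-1)
    # Normalize each channel to [0, 1] so it can be fed to a CNN.
    lo = img.min(axis=(0, 1), keepdims=True)
    hi = img.max(axis=(0, 1), keepdims=True)
    return (img - lo) / np.where(hi - lo > 0, hi - lo, 1.0)
```

In practice `u` and `v` would come from a dense optical-flow estimate between the onset and apex frames; here any pair of equally shaped arrays produces a valid image.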
Citations: 5
Foreword to the special section on Computer Graphics education in the time of Covid
Graphics and Visual Computing | Pub Date: 2021-06-01 | DOI: 10.1016/j.gvc.2021.200028
Beatriz Sousa Santos, Gitta Domik, Eike Anderson
Graphics and Visual Computing, Vol. 4, Article 200028. (No abstract.)
Citations: 1